
The Fastest Node 22 Lambda Coldstart Configuration


I've written a few articles on how to get faster coldstarts with Lambda, the AWS JavaScript V3 SDK and the Node runtime. It's time to put them all together and show you the Cloud Development Kit (CDK) configuration I use to achieve the fastest coldstarts with the AWS JavaScript V3 SDK and Node 22.

State of the Coldstart 2025

My journey to achieve faster Node 22 Lambda coldstarts has largely plateaued. Recent discoveries of mine are minor, and I'm not able to move the needle by more than a few milliseconds. My coldstarts (measured by Init Duration) are around ~375 ms: ~250 ms to load my application code with 3 AWS clients and an additional ~125 ms to make the first https request to DynamoDB. LLRT may unlock the next boost in coldstart performance, but it is still in preview and missing features needed to run my production workload. With coldstarts for managed runtimes being billed starting August 1, 2025, I've decided it would be useful to package all of my optimizations together so you can achieve the same performance.

What we are building

For the purpose of this article, we are creating a simple Node 22 Lambda function which uses the AWS JavaScript V3 SDK. It will instantiate a Security Token Service (STS) client outside of the handler and make a single request to the STS service.

Note

This is not a real-world scenario, but it mirrors one. For example, you may instantiate an AWS Secrets Manager client and cache secrets during INIT for use in each request. Doing the work in INIT has the advantage that you get at least a full 2 vCPUs of processing power regardless of allocated memory so your code generally runs faster.
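As a minimal sketch of that pattern (loadConfig and its contents are hypothetical stand-ins for real setup work such as a Secrets Manager call), work placed at module scope runs once during INIT and is reused by every invocation:

```typescript
// Hypothetical expensive setup (e.g. fetching and parsing a secret).
// At module scope this runs once, during INIT, where the sandbox gets
// at least 2 vCPUs regardless of the memory setting.
function loadConfig(): Map<string, string> {
  return new Map([["apiKey", "placeholder-value"]]);
}

const config = loadConfig(); // executed during INIT, not per request

export const handler = async (_event: unknown) => {
  // Every invocation reuses the cached config instead of rebuilding it.
  return { statusCode: 200, hasApiKey: config.has("apiKey") };
};
```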

Code

index.mts
import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts";
import { statSync } from "fs";
import packageJson from '@aws-sdk/client-sts/package.json' with { type: 'json' };

const stsClient = new STSClient({ region: process.env.AWS_REGION });

const start = Date.now();
const callerIdentity = await stsClient.send(new GetCallerIdentityCommand({}));
const getCallerIdLatency = Date.now() - start;

let size: number | undefined;
let runtimeBuildDate: Date | undefined;

// Get metadata about env and runtime (1)
try {
  runtimeBuildDate = statSync("/var/runtime").mtime;
} catch (e) {
  console.error("Unable to determine runtime build date", e);
}

try {
  size = statSync("index.mjs").size;
} catch (e) {
  console.error("Unable to determine size of index.mjs", e);
}

const sdkVersion: string | undefined = process.env.sdkVersion || packageJson.version;

let coldstart = 0;

export const handler = async (event: any, context: any) => {
  return {
    statusCode: 200,
    body: {
      requestId: callerIdentity["$metadata"].requestId,
      lambdaRequestId: context.awsRequestId,
      userId: callerIdentity.UserId,
      accountId: maskAccount(callerIdentity.Account!),
      arn: maskAccount(callerIdentity.Arn!),
      requestLatency: getCallerIdLatency,
      size,
      runtimeBuildDate,
      coldstart: coldstart++ < 1,
      sdkVersion,
      nodeVersion: process.version,
    },
  };
};

function maskAccount(input: string) {
  return input.replaceAll(/\d{12}/g, (match) => "x".repeat(match.length));
}
  1. Getting the runtime build date, SDK version and file size isn't critical; they just provide more information without much overhead. Leave them out if you don't need them.

Benchmarks

For evaluating coldstarts I ran the code in us-east-2 on Node 22 with 512 MB of memory and the arm64 architecture. The code was compiled to JavaScript and used the AWS Lambda Node runtime nodejs:22.v48 with ARN arn:aws:lambda:us-east-2::runtime:ccd522aa46eeddade4be388ba28af972761953cf91d2745b89d3215c05b412c2 and build date 2025-06-20T00:28:43.000Z. This runtime has version 3.806 of the AWS JavaScript SDK on disk.

I benchmarked 3 versions. Raw Lambda is the code above using the AWS SDK on disk in the Node 22 Lambda runtime. Bundled is the code above using SDK 3.845 with the following CDK NodejsFunction bundling config. This is similar to the AWS guidance in this article.

bundling: {
   mainFields: ["module", "main"],
   format: OutputFormat.ESM,
   bundleAwsSDK: true,
}

Finally Bundled w/optimizations is the code above bundled with all of my optimizations.

Measuring Init Duration

These are the measurements using the Init Duration value from the Lambda logs. For p50, my optimizations are 31% (91 ms) faster than bundling and 74% (217 ms) faster than using the AWS SDK on disk.


All values are in milliseconds.

test                     # of samples  p0      p50       p90       p95       p99       p100
Bundled w/optimizations  207           264.05  293.5827  318.3397  326.3944  378.0525  453.24
Bundled                  206           341.29  384.531   411.5747  424.1026  441.4018  492.16
Raw Lambda               206           458.9   510.7512  539.0755  551.6116  591.5872  616.15

Measuring E2E Time

These are measurements made from a primed Lambda client invoking each Lambda function. By measuring latency from the client, E2E time includes the additional Lambda overhead of placement, decrypting environment variables, downloading the code, and other activities, making it a more accurate representation of what your customers experience.

For p50, my optimizations are 21% (98 ms) faster than bundling and 44% (205 ms) faster than using the AWS SDK on disk.


All values are in milliseconds.

test                     # of samples  p0   p50       p90       p95       p99       p100
Bundled w/optimizations  208           409  465.8802  512.7973  536.9246  577.5651  728
Bundled                  208           495  563.8753  616.9501  640.8332  667.6397  694
Raw Lambda               208           591  670.9846  733.4077  756.4877  780.294   811
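As a side note, percentiles like those in the tables can be computed from raw latency samples with a simple nearest-rank method. This sketch uses illustrative sample values and is not necessarily the exact method used for the tables above:

```typescript
// Nearest-rank percentile over raw latency samples (in ms).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  if (p <= 0) return sorted[0];
  if (p >= 100) return sorted[sorted.length - 1];
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[rank - 1];
}

const latencies = [409, 495, 591, 465, 563, 670]; // illustrative samples
console.log(percentile(latencies, 50)); // → 495
```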

Optimizations

We will be making the following optimizations.

  1. Using at least version 3.844.0 of the AWS JavaScript SDK
  2. Removing unused credential providers
  3. Patching the AWS SDK to not import from http
  4. Not using environment variables

Using at least version 3.844.0 of the AWS JavaScript SDK

There have been several optimizations to the AWS V3 SDK since its initial release, the largest being the lazy-loading of credential providers. The most recent optimization upgraded the remaining CommonJS (CJS) only dependencies to ECMAScript Modules (ESM), allowing better tree-shaking and a smaller bundle size. It isn't much, but it will reduce your bundle size by 18 KB. Lambda lags the official AWS JavaScript SDK by several versions (at the time of this writing it is on 3.806). To use version 3.845.0 of the STS client you can run:

npm i @aws-sdk/client-sts@3.845.0

and add bundleAwsSDK: true to your bundling configuration:

bundling: {
   mainFields: ["module", "main"],
   format: OutputFormat.ESM,
   bundleAwsSDK: true,
}

Removing unused credential providers

As I detailed in this article, credentials for the Lambda execution role are provided as environment variables, so all credential providers other than the environment credential provider are unnecessary. Even though these credential providers are lazy-loaded, esbuild still includes them in the bundle because it cannot determine at build time whether they will be used, and this increases coldstart time by around 40 ms.
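Separately from the bundling change below, you can also make the choice explicit in code by pinning the environment credential provider. This is a complementary sketch, not required for the exclusion technique this section describes:

```typescript
import { STSClient } from "@aws-sdk/client-sts";
import { fromEnv } from "@aws-sdk/credential-providers";

// Pin the env provider so the SDK never consults the other providers.
// In Lambda, the execution role's credentials arrive as env variables.
const stsClient = new STSClient({
  region: process.env.AWS_REGION,
  credentials: fromEnv(),
});
```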

To omit the unused credential providers, add an externalModules array to your esbuild bundling configuration:

bundling: {
    mainFields: ["module", "main"],
    format: OutputFormat.ESM,
    bundleAwsSDK: true,
    externalModules: [
        "@aws-sdk/client-sso",
        "@aws-sdk/client-sso-oidc",
        "@smithy/credential-provider-imds",
        "@aws-sdk/credential-provider-ini",
        "@aws-sdk/credential-provider-http",
        "@aws-sdk/credential-provider-process",
        "@aws-sdk/credential-provider-sso",
        "@aws-sdk/credential-provider-web-identity",
        "@aws-sdk/token-providers"
    ]
}

Patch out http support

As discovered in this issue, simply importing the http module's request in Node 22 adds about 50 ms of overhead to your coldstart time compared to Node 20. The underlying reason is changes to the undici library in Node 22. If you don't use http (I only use https), this functionality can be patched out of the SDK.

Warning

Patching out http support will make any http call you make with the SDK fail. Only https requests will work; https is the default for all AWS services, so you would have to explicitly configure a client to use http. In addition, future changes to the SDK may require you to update the patch. Use at your own risk.
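To make the warning concrete, here is an illustrative sketch (the http endpoint URL is hypothetical): a default client keeps working after the patch because it resolves an https endpoint, while a client explicitly pointed at an http endpoint would fail.

```typescript
import { STSClient } from "@aws-sdk/client-sts";

// Default configuration: the SDK resolves an https:// endpoint,
// which still works after the http agent is patched out.
const httpsClient = new STSClient({ region: process.env.AWS_REGION });

// Explicit http endpoint (hypothetical, e.g. a local emulator): requests
// through this client fail after the patch, since no http agent exists.
const httpClient = new STSClient({
  region: process.env.AWS_REGION,
  endpoint: "http://localhost:4566",
});
```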

To patch packages, use the excellent patch-package module. Install it in your project with:

npm i patch-package -D

and add the following to your postinstall script in package.json:

"scripts": {
  "postinstall": "patch-package"
}

In the file node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js:

Remove this line:

node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
import { Agent as hAgent, request as hRequest } from "http";

and replace these lines:

node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
            httpAgent: (() => {
                if (httpAgent instanceof hAgent || typeof httpAgent?.destroy === "function") {
                    return httpAgent;
                }
                return new hAgent({ keepAlive, maxSockets, ...httpAgent });
            })(),

with this line:

node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
            httpAgent: undefined,

Then run:

npx patch-package @smithy/node-http-handler

The resulting patch file should look like this:

patches/@smithy+node-http-handler+4.1.0.patch
diff --git a/node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js b/node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
index 45d86a9..d7805fa 100644
--- a/node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
+++ b/node_modules/@smithy/node-http-handler/dist-es/node-http-handler.js
@@ -1,6 +1,5 @@
 import { HttpResponse } from "@smithy/protocol-http";
 import { buildQueryString } from "@smithy/querystring-builder";
-import { Agent as hAgent, request as hRequest } from "http";
 import { Agent as hsAgent, request as hsRequest } from "https";
 import { NODEJS_TIMEOUT_ERROR_CODES } from "./constants";
 import { getTransformedHeaders } from "./get-transformed-headers";
@@ -64,12 +63,7 @@ or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler conf
             connectionTimeout,
             requestTimeout: requestTimeout ?? socketTimeout,
             socketAcquisitionWarningTimeout,
-            httpAgent: (() => {
-                if (httpAgent instanceof hAgent || typeof httpAgent?.destroy === "function") {
-                    return httpAgent;
-                }
-                return new hAgent({ keepAlive, maxSockets, ...httpAgent });
-            })(),
+            httpAgent: undefined,
             httpsAgent: (() => {
                 if (httpsAgent instanceof hsAgent || typeof httpsAgent?.destroy === "function") {
                     return httpsAgent;

Removing environment variables

As I discovered in this blog post, removing user-provided environment variables saves at least 20 ms in your E2E coldstart time. This is due to the overhead of decrypting the environment variables with KMS. Previously, the CDK was setting an unnecessary environment variable, but that was fixed last year. If you need user-provided environment variables and know their values at build time, consider using the define functionality of esbuild to embed them directly into your code. This is how I use define in my CDK bundling config:

define: {
  "process.env.sdkVersion": JSON.stringify(
    JSON.parse(
      readFileSync(
        "node_modules/@aws-sdk/client-sts/package.json"
      ).toString()
    ).version
  ),
  "process.env.DOMAIN_NAME": JSON.stringify(config.domainName),
}
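Putting the pieces together, here is a sketch of how these bundling options combine in a single NodejsFunction. It belongs inside a Stack constructor; the construct id "FastColdstartFn" and the entry path are illustrative, not taken from my repository:

```typescript
import { Duration } from "aws-cdk-lib";
import { Architecture, Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction, OutputFormat } from "aws-cdk-lib/aws-lambda-nodejs";
import { readFileSync } from "fs";

const fn = new NodejsFunction(this, "FastColdstartFn", {
  entry: "lambda/index.mts", // illustrative path
  runtime: Runtime.NODEJS_22_X,
  architecture: Architecture.ARM_64,
  memorySize: 512,
  timeout: Duration.seconds(10),
  // No `environment` block: user-provided env vars add KMS decryption overhead.
  bundling: {
    mainFields: ["module", "main"],
    format: OutputFormat.ESM,
    bundleAwsSDK: true, // bundle the newer SDK instead of using the one on disk
    externalModules: [
      // Unused credential providers, excluded from the bundle.
      "@aws-sdk/client-sso",
      "@aws-sdk/client-sso-oidc",
      "@smithy/credential-provider-imds",
      "@aws-sdk/credential-provider-ini",
      "@aws-sdk/credential-provider-http",
      "@aws-sdk/credential-provider-process",
      "@aws-sdk/credential-provider-sso",
      "@aws-sdk/credential-provider-web-identity",
      "@aws-sdk/token-providers",
    ],
    define: {
      // Embed the SDK version at build time instead of using an env variable.
      "process.env.sdkVersion": JSON.stringify(
        JSON.parse(
          readFileSync(
            "node_modules/@aws-sdk/client-sts/package.json"
          ).toString()
        ).version
      ),
    },
  },
});
```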

Try it yourself

I've created a GitHub repository with all of these optimizations here. To try the code yourself, follow the instructions in the README.

Conclusion

If you are looking for the absolute fastest Lambda coldstart time, apply my optimizations and CDK config to your own project. They will reduce your coldstarts by 91 ms, or more if you also remove environment variables. Depending on how much additional code you have, this can get you a sub-300 ms billable Init Duration and a sub-500 ms E2E time. This will make your code scale out better under load and reduce the impact of coldstarts on your end users. If you find additional optimizations, I'd love to hear about them. Reach out to me on BlueSky or open an issue on the repository.

Further Reading

  1. This AWS blog covers the new features of AWS SDK V3 and describes how to use esbuild to bundle. I used this for my baseline.
  2. This AWS blog discusses using minification to further reduce cold starts. I've found it doesn't improve coldstarts much, source maps add some overhead to stack traces, and consequently I don't use minification myself.
  3. The further reading section has a few great additional resources from when I was first trying to optimize coldstarts.
  4. Maxime David's website on Lambda Coldstarts is a great resource to see what the absolute fastest coldstart time is. Keep in mind, these benchmarks do not instantiate an AWS JavaScript V3 client or make any service calls. I think the current floor of a Node coldstart that instantiates an AWS JavaScript V3 client is around 190-200 ms.