Vercel supports multiple runtimes for your functions. Each runtime has its own set of libraries, APIs, and functionality that provides different trade-offs and benefits.
Runtimes transform your source code into Functions, which are served by our Edge Network.
Runtime configuration is usually only necessary when you want to use the Edge runtime.
Vercel supports these official runtimes:
Runtime | Description |
---|---|
Node.js | The Node.js runtime takes an entrypoint of a Node.js function, builds its dependencies (if any) and bundles them into a Serverless Function. |
Edge | The Edge runtime is a lightweight JavaScript runtime that exposes a set of Web Standard APIs that make sense on the server. |
Go | The Go runtime takes in a Go program that defines a singular HTTP handler and outputs it as a Serverless Function. |
Python | The Python runtime takes in a Python program that defines a singular HTTP handler and outputs it as a Serverless Function. |
Ruby | The Ruby runtime takes in a Ruby program that defines a singular HTTP handler and outputs it as a Serverless Function. |
If you would like to use a language that Vercel does not support by default, you can use a community runtime by setting the `functions` property in `vercel.json`. For more information on configuring other runtimes, see Configuring your function runtime.
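For example, a `vercel.json` using the `vercel-php` community runtime might look like the following sketch. The glob pattern and version tag are illustrative; check the runtime's own docs for the current version:

```json
{
  "functions": {
    "api/**/*.php": {
      "runtime": "vercel-php@0.7.3"
    }
  }
}
```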
The following community runtimes are recommended by Vercel:
Runtime | Runtime Module | Docs |
---|---|---|
Bash | vercel-bash | https://github.com/importpw/vercel-bash |
Deno | vercel-deno | https://github.com/vercel-community/deno |
PHP | vercel-php | https://github.com/vercel-community/php |
Rust | vercel-rust | https://github.com/vercel-community/rust |
You can create a community runtime by using the Runtime API. Alternatively, you can use the Build Output API.
A runtime can retain an archive of up to 100 MB of the filesystem at build time. The cache key is generated as a combination of:
- Project name
- Team ID or User ID
- Entrypoint path (e.g., `api/users/index.go`)
- Runtime identifier including version (e.g., `@vercel/go@0.0.1`)
The cache will be invalidated if any of those items changes. You can bypass the cache by running `vercel -f`.
When using functions on Vercel, you can choose what runtime you want to use:
- Node.js (Serverless)
- Edge
- Go, Python, Ruby - These runtimes are available in Beta for use with Serverless Functions.
Usually, when writing TypeScript or JavaScript functions, you'll be deciding between the Node.js or Edge runtime. The following sections provide information on the trade-offs and benefits of each.
Node.js-powered functions are suited to computationally intense or large functions and provide benefits like:
- More RAM and CPU power – For computationally intense workloads, or functions that have bundles up to 250 MB in size, this runtime is ideal
- Complete Node.js compatibility - The Node.js runtime offers access to all Node.js APIs, making it a powerful tool for many applications, although it may take them longer to boot than those using the Edge runtime
In our documentation and this guide, we mention Serverless Functions. These are Node.js-powered Vercel Functions. To learn how to implement these functions, see the quickstart.
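As a rough sketch, a Node.js-powered function is a file under `api/` that exports a request handler. The file name below is hypothetical, and the minimal `Req`/`Res` types are stand-ins so the snippet is self-contained; in a real project you would import `VercelRequest` and `VercelResponse` from `@vercel/node` instead:

```typescript
// Hypothetical api/hello.ts — a Serverless Function on the Node.js runtime.
// Stand-in types; replace with VercelRequest/VercelResponse from @vercel/node.
type Req = { url?: string };
type Res = { status: (code: number) => Res; json: (body: unknown) => void };

export default function handler(req: Req, res: Res) {
  // Respond with a small JSON payload and a 200 status code.
  res.status(200).json({ message: 'Hello from the Node.js runtime' });
}
```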
Bytecode caching in Vercel Functions reduces cold start times by caching the compiled bytecode of JavaScript files after their first execution. This eliminates the need for recompilation on subsequent cold starts, leading to faster execution. Bytecode caching is enabled by default when using Node.js version 20+ and CommonJS.
Edge runtime-powered functions can be a cost-effective option and provide benefits like:
- Lightweight with a slim runtime - With a smaller API surface area and the use of V8 isolates, Edge runtime-powered functions have a slim runtime that exposes only a subset of Node.js APIs
- Globally distributed by default – Vercel deploys all Edge Functions globally across its Edge Network, which means your site's visitors will get API responses from data centers geographically near them, typically reducing the overall response time
- Pricing is based on compute time – You're charged for the time spent processing requests, not for the time your function spends fetching data. This is ideal for querying databases or AI services that may have longer request times
Responses from Edge Functions can be cached and streamed in real time.
In our documentation and this guide, we mention Edge Functions. These are Edge runtime-powered Vercel Functions. To learn how to implement these functions, see the quickstart.
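As a sketch, an Edge runtime-powered function works with Web-standard `Request` and `Response` objects. The file path below is hypothetical, and the `config = { runtime: 'edge' }` export follows the common convention for opting into the Edge runtime:

```typescript
// Hypothetical api/greet.ts — an Edge runtime-powered function.
export const config = { runtime: 'edge' };

export default function handler(request: Request): Response {
  // Read a query parameter using the Web-standard URL API.
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'world';
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    headers: { 'content-type': 'application/json' },
  });
}
```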
| | Node.js runtime | Edge runtime |
| --- | --- | --- |
| Runtime support | Node.js; can also support Go, Ruby, Python | Edge |
| Location | Deployed region-first; location can be customized. Pro and Enterprise teams can set multiple regions | Deployed global-first; customizable to run regionally |
| Failover | Automatic failover to defined regions | Automatic global failover |
| Automatic concurrency scaling | Auto-scales up to 30,000 (Hobby and Pro) or 100,000+ (Enterprise) concurrent executions | Unlimited concurrency |
| Isolation boundary | microVM | V8 isolate |
| File system support | Yes | No file system support |
| Archiving | Yes | No |
| Functions created per deployment | Hobby: framework-dependent; Pro and Enterprise: no limit | No limit |
Runtime is the environment in which your functions execute. Vercel supports several runtimes for Serverless Functions (Node.js, Go, Ruby, Python), while Edge Functions use the lightweight Edge runtime.
This means that with Serverless Functions you have access to all Node.js APIs. With Edge Functions you get access to a subset of the most important browser APIs.
Location refers to where your functions are executed. Serverless Functions are region-first, while Edge Functions are executed close to the end-users across Vercel's global network.
When you deploy Edge Functions, there are considerations to make about where they're deployed and executed. Edge Functions are executed globally, in a region close to the user's request. However, if your data source is geographically far from that request, any response will be slow. Because of this, you can opt to execute your function closer to your data source.
You can deploy Serverless Functions to up to 3 regions on Pro or 18 on Enterprise. Deploying to more regions than your plan allows for will cause your deployment to fail before entering the build step.
Users on any plan can deploy Edge Functions to multiple regions.
Vercel's failover mode refers to the system's behavior when a function fails to execute because of data center downtime.
Vercel provides redundancy and automatic failover for Edge Functions to ensure high availability. For Serverless Functions, you can use the `functionFailoverRegions` configuration in your `vercel.json` file to specify which regions the function should automatically failover to.
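For instance, a `vercel.json` specifying failover regions might look like the following sketch; the region IDs are illustrative, so check the region list for valid values:

```json
{
  "functionFailoverRegions": ["iad1", "sfo1"]
}
```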
The concurrency model on Vercel refers to how many instances of your functions can run simultaneously. All functions on Vercel scale automatically based on demand to manage increased traffic loads.
With automatic concurrency scaling, your Vercel Functions can scale to a maximum of 30,000 on Pro or 100,000 on Enterprise, maintaining optimal performance during traffic surges. The scaling is based on the burst concurrency limit of 1000 concurrent executions per 10 seconds, per region. Additionally, Enterprise customers can purchase extended concurrency.
Vercel's infrastructure monitors your usage and preemptively adjusts the concurrency limit to cater to growing traffic, allowing your applications to scale without your intervention.
Automatic concurrency scaling is available on all plans.
Burst concurrency refers to Vercel's ability to temporarily handle a sudden influx of traffic by allowing a higher concurrency limit.
Upon detecting a traffic spike, Vercel temporarily increases the concurrency limit to accommodate the additional load. The initial increase allows for a maximum of 1000 concurrent executions per 10 seconds. After the traffic burst subsides, the concurrency limit gradually returns to its previous state, ensuring a smooth scaling experience.
The scaling process may take several minutes during traffic surges, especially substantial ones. While this delay aligns with natural traffic curves to minimize potential impact on your application's performance, it's advisable to monitor the scaling process for optimal operation.
You can monitor burst concurrency events using Log Drains, or Runtime Logs to help you understand and optimize your application's performance.
If you exceed the limit, a 429 `FUNCTION_RATE_LIMIT` error will be triggered. Alternatively, you can explore Edge Functions, which do not have concurrency limits.
In Vercel, the isolation boundary refers to the separation of individual instances of a function to ensure they don't interfere with each other. This provides a secure execution environment for each function.
With traditional serverless infrastructure, each function uses a microVM for isolation, which provides strong security but also makes them slower to start and more resource intensive. As the Edge runtime is built on the V8 engine, it uses V8 isolates to separate just the runtime context, allowing for quick startup times and high performance.
Filesystem support refers to a function's ability to read and write to the filesystem. Serverless Functions have a read-only filesystem with writable `/tmp` scratch space of up to 500 MB. Edge Functions do not have filesystem access due to their ephemeral nature.
Serverless Functions are archived when they are not invoked:
- Within 2 weeks for Production Deployments
- Within 48 hours for Preview Deployments
Archived functions will be unarchived when they're invoked, which can make the initial cold start time at least 1 second longer than usual.
Edge Functions are not archived.
When using Next.js or SvelteKit on Vercel, dynamic code (APIs, server-rendered pages, or dynamic `fetch` requests) will be bundled into the fewest number of Serverless Functions possible, to help reduce cold starts. Because of this, it's unlikely that you'll hit the limit of 12 bundled Serverless Functions per deployment.
When using other frameworks, or Serverless Functions directly without a framework, every API maps directly to one Serverless Function. For example, having five files inside `api/` would create five Serverless Functions. On Hobby, this approach is limited to 12 Serverless Functions per deployment.
| | Node.js runtime (and more) | Edge runtime |
| --- | --- | --- |
| Max size | 250 MB | Hobby: 1 MB, Pro: 2 MB, Ent: 4 MB |
| Max duration | Hobby: 10s (default), configurable up to 60s; Pro: 15s (default), configurable up to 300s; Ent: 15s (default), configurable up to 900s | 25s (to begin returning a response, but can continue streaming data) |
| Max memory | Hobby: 1024 MB; Pro and Ent: 3009 MB | 128 MB |
| Max environment variable size | 64 KB | 64 KB |
| Max request body size | 4.5 MB | 4 MB |
Vercel places restrictions on the maximum size of the deployment bundle for functions to ensure that they execute in a timely manner.
For Serverless Functions, the maximum uncompressed size is 250 MB, including layers, which are automatically added depending on the runtime. These limits are enforced by AWS.
You can use `includeFiles` and `excludeFiles` to specify items that may affect the function size; however, the limits cannot be configured. These configurations are not supported in Next.js; instead, use `outputFileTracingIncludes`.
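As an illustration, `includeFiles` and `excludeFiles` are set per function in `vercel.json`; the file path and globs below are hypothetical:

```json
{
  "functions": {
    "api/render.ts": {
      "includeFiles": "templates/**",
      "excludeFiles": "tests/**"
    }
  }
}
```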
Edge Functions have plan-dependent size limits. This is the total, compressed size of your function and its dependencies after bundling.
This refers to the longest time a function can process an HTTP request before responding.
Functions using the Edge runtime do not have a maximum duration. They must begin sending a response within 25 seconds and can continue streaming a response beyond that time.
While Serverless Functions have a default duration, this duration can be extended using the `maxDuration` config. If a Serverless Function doesn't respond within the duration, a 504 error code (`FUNCTION_INVOCATION_TIMEOUT`) is returned.
Serverless Functions have the following defaults and maximum limits for the duration of a function:
| | Default | Maximum |
| --- | --- | --- |
| Hobby | 10s | 60s |
| Pro | 15s | 300s |
| Enterprise | 15s | 900s |
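As a sketch, extending the duration for a single function in `vercel.json` might look like this; the file path and value are illustrative, and the value must stay within your plan's maximum:

```json
{
  "functions": {
    "api/long-running.ts": {
      "maxDuration": 60
    }
  }
}
```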
Serverless Functions can use more memory and larger CPUs than Edge Functions. They have the following defaults and maximum limits:
| | Default | Maximum |
| --- | --- | --- |
| Hobby | 1024 MB / 0.6 vCPU | 1024 MB / 0.6 vCPU |
| Pro / Enterprise | 1769 MB / 1 vCPU | 3009 MB / 1.7 vCPU |
Users on Pro and Enterprise plans can configure the default memory size for all functions in the dashboard, or on a per-function basis in your `vercel.json`.
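For example, a per-function memory setting in `vercel.json` might look like the following sketch; the file path is hypothetical, and the value must stay within your plan's maximum:

```json
{
  "functions": {
    "api/heavy-compute.ts": {
      "memory": 3009
    }
  }
}
```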
Edge Functions have a fixed memory limit. If you exceed this limit, the execution will be aborted and a `502` error will be returned.
The maximum size for a Function includes your JavaScript code, imported libraries and files (such as fonts), and all files bundled in the function.
If you reach the limit, make sure the code you are importing in your function is used and is not too heavy. You can use a package size checker tool like bundle to check the size of a package and search for a smaller alternative.
You can use environment variables to manage dynamic values and sensitive information affecting the operation of your functions. Vercel allows developers to define these variables either at deployment or during runtime.
You can use a total of 64 KB in environment variables per deployment on Vercel. This limit is for all variables combined, so no single variable can be larger than 64 KB.
In Vercel, the request body size is the maximum amount of data that can be included in the body of a request to a function.
The maximum payload size for the request body or the response body of a Serverless Function is 4.5 MB. If a Serverless Function receives a payload in excess of the limit it will return an error 413: FUNCTION_PAYLOAD_TOO_LARGE. See How do I bypass the 4.5MB body size limit of Vercel Serverless Functions for more information.
Edge Functions have the following additional limits to the request size:
Name | Limit |
---|---|
Maximum URL length | 14 KB |
Maximum request body length | 4 MB |
Maximum number of request headers | 64 |
Maximum request headers length | 16 KB |
| | Node.js runtime (and more) | Edge runtime |
| --- | --- | --- |
| Geolocation data | Yes | Yes |
| Access request headers | Yes | Yes |
| Cache responses | Yes | Yes |
You can learn more about API support and writing functions:
- Serverless Functions: Node.js runtime
- Edge Functions: Edge Functions API
Edge Functions are neither Node.js nor browser applications, which means they don't have access to all browser and Node.js APIs. Currently, the Edge runtime offers a subset of browser APIs and some Node.js APIs.
There are some restrictions when writing Edge Functions:
- Use ES modules
- Most libraries that depend on Node.js APIs can't be used in Edge Functions yet. See available APIs for a full list
- Dynamic code execution (such as `eval`) is not allowed for security reasons. You must ensure libraries used in your Edge Functions don't rely on dynamic code execution, because it leads to a runtime error. For example, the following APIs cannot be used:

| API | Description |
| --- | --- |
| `eval` | Evaluates JavaScript code represented as a string |
| `new Function(evalString)` | Creates a new function with the code provided as an argument |
| `WebAssembly.instantiate` | Compiles and instantiates a WebAssembly module from a buffer source |
- You cannot set non-standard port numbers in the fetch URL (e.g., `https://example.com:8080`). Only `80` and `443` are allowed. If you set a non-standard port number, it is ignored, and the request is sent to port `80` for `http://` URLs, or port `443` for `https://` URLs.
- The maximum number of requests from the `fetch` API is 950 per Edge Function invocation.
- The maximum number of open connections is 6.
  - Each function invocation can have up to 6 open connections. For example, if you try to send 10 simultaneous fetch requests, only 6 of them can be processed at a time. The remaining requests are put into a waiting queue and are processed as the in-flight requests complete.
  - If in-flight requests have been waiting for a response for more than 15 seconds with no active reads or writes, the runtime may cancel them based on its LRU (Least Recently Used) logic.
  - If you attempt to use a canceled connection, a `Network connection lost.` exception will be thrown. You can `catch` on the `fetch` promise to handle this exception gracefully (e.g., with retries). Additionally, you can use the `AbortController` API to set timeouts for `fetch` requests.
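To illustrate the last point, here is a minimal sketch of guarding a `fetch` with an `AbortController`-based timeout; the helper name and timeout value are illustrative:

```typescript
// Abort a fetch if it takes longer than `ms` milliseconds.
async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    // If the controller aborts first, fetch rejects with an AbortError,
    // which the caller can catch and retry.
    return await fetch(url, { signal: controller.signal });
  } finally {
    // Always clear the timer so it doesn't fire after the request settles.
    clearTimeout(timer);
  }
}
```

Callers can wrap `fetchWithTimeout` in a `try`/`catch` to retry or fall back when the request is aborted or the connection is lost.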
To avoid CPU timing attacks, like Spectre, date and time functionality is not generally available. In particular, the time returned from `Date.now()` only advances after I/O operations, like `fetch`. For example:
```typescript
export const runtime = 'edge';

export async function GET(request: Request) {
  const currentDate = () => new Date().toISOString();
  for (let i = 0; i < 500; i++) {
    console.log(`Current Date before fetch: ${currentDate()}`); // Prints the same value 500 times
  }
  await fetch('https://worldtimeapi.org/api/timezone/Etc/UTC');
  console.log(`Current Date after fetch: ${currentDate()}`); // Prints the new time
  return Response.json({ date: currentDate() });
}
```
| | Node.js runtime (and more) | Edge runtime |
| --- | --- | --- |
| Billing | Pay for wall-clock time | Pay for CPU time |
The Hobby plan offers functions for free, within limits. The Pro plan extends these limits, and charges usage based on Function Duration for Serverless Functions and CPU Time for Edge Functions.
- Function duration for Serverless Functions is based on wall-clock time, which refers to the actual time elapsed during a process, similar to how you would measure time passing on a wall clock. It includes all time spent from start to finish of the process, regardless of whether that time was actively used for processing or spent waiting for a streamed response. It is important to make sure you've set a reasonable maximum duration for your function. See "Managing usage and pricing for Serverless Functions" for more information.
- Edge runtime-powered Functions usage is based on CPU time, which is the time spent actually processing your code. This doesn't measure time spent waiting for data fetches to return. See "Managing usage and pricing for Edge Functions" for more information.
Edge Middleware can use no more than 50 ms of CPU time on average.
This limitation refers to actual net CPU time, which is the time spent performing calculations, not the total elapsed execution or "wall clock" time. For example, when you are blocked talking to the network, the time spent waiting for a response does not count toward CPU time limitations.
| | Node.js runtime (and more) | Edge runtime |
| --- | --- | --- |
| Secure Compute | Supported | Not supported |
| Streaming | Supported, depending on the framework | Supported |
| Cron jobs | Supported | Supported |
| Vercel Storage | Supported | Supported |
| Edge Config | Supported only in Node.js runtime | Supported |
| OTEL | Supported | Not supported |
Vercel's Secure Compute feature offers enhanced security for your Serverless Functions, including dedicated IP addresses and VPN options. This can be particularly important for functions that handle sensitive data.
Streaming refers to the ability to send or receive data in a continuous flow.
Both the Node.js and the Edge runtime support streaming by default. Streaming is also supported when using the Python runtime.
Note that Serverless Functions have a maximum duration, meaning that it isn't possible to stream indefinitely. Edge Functions do not have a maximum duration, but you must send an initial response within 25 seconds; you can continue streaming a response beyond that time.
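As a sketch, streaming from a function can be built on the Web Streams API, which both runtimes expose; the helper name and chunking below are illustrative:

```typescript
// Build a Response whose body is streamed chunk by chunk.
function streamingResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // Enqueue each chunk; a real function might await work between chunks.
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
  return new Response(stream, { headers: { 'content-type': 'text/plain' } });
}
```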
All streaming functions support the `waitUntil` method, which allows an asynchronous task to be performed during the lifecycle of the request. For Serverless Functions, this means that while your function will likely run for the same amount of time, and therefore cost the same as waiting for your whole response to be ready, your end-users can have a better, more interactive experience.
Cron jobs are time-based scheduling tools used to automate repetitive tasks. When a cron job is triggered through the cron expression, it calls a Vercel Function.
From your function, you can communicate with a choice of data stores. To ensure low-latency responses, it's crucial to have compute close to your databases. Always deploy your databases in regions closest to your functions to avoid long network roundtrips. For more information, see our best practices documentation.
An Edge Config is a global data store that enables experimentation with feature flags, A/B testing, critical redirects, and IP blocking. It enables you to read data at the edge without querying an external database or hitting upstream servers.
Vercel has an OpenTelemetry (OTEL) collector that allows you to send OTEL traces from your Serverless Functions to application performance monitoring (APM) vendors such as New Relic.