MetaScript

Serverless

MetaScript's C backend compiles to a standalone native binary. That's exactly what modern serverless platforms want from a "custom runtime" — no language VM to boot, no interpreter to warm up, the binary is the runtime.

Why native binaries for serverless

  • Cold start measured in milliseconds. No JIT, no GC warm-up, no module graph to load.
  • Tiny deploy artifacts. A plain-HTTP handler strips to ~80 KB.
  • Predictable memory. No hidden runtime allocation — 128 MB is plenty for most handlers.
  • Portable. Same binary model works across AWS, GCP, Azure, and self-hosted FaaS.

Target platforms

Platform | Mechanism | Reference package
AWS Lambda | provided.al2023 custom runtime | @metascript/lambda-runtime (v0.1)
Google Cloud Run / Functions gen2 | Container with HTTP server on $PORT | Not yet — pattern is straightforward with std/http/server
Azure Functions | Custom handler (HTTP server) | Not yet — pattern is straightforward with std/http/server
Cloudflare Workers | V8 isolates — different model; use the JS backend instead | N/A

The AWS Lambda reference implementation is the most mature; the Cloud Run and Azure patterns are well-trodden HTTP-server shapes, but no one has wrapped them in a convenience package yet.

This is an open canvas. MetaScript has its own package registry at pkg.metascriptlang.org. Build @yourname/gcp-run, @yourname/azure-functions, or anything else you need — publish it under your own namespace and let the community use it. No coordination needed, no gatekeeping — just ship it!

Example: AWS Lambda

Using the @metascript/lambda-runtime reference package.

1. Install

msc add @metascript/lambda-runtime

2. Write the handler

import {
  startLambda, LambdaContext, LambdaError,
} from "@metascript/lambda-runtime";
import {
  parseApiGwV2, serializeApiGwV2Response, ApiGwV2Response,
} from "@metascript/lambda-runtime/events";
import { createHeaders, setHeader } from "std/http";

function handler(event: string, ctx: LambdaContext): Result<string, LambdaError> {
  const reqR = parseApiGwV2(event);
  if (!reqR.ok) {
    return Result.err({ errorType: "BadEvent", errorMessage: reqR.error });
  }
  const req = reqR.value;

  const headers = createHeaders();
  setHeader(headers, "content-type", "application/json");

  const resp: ApiGwV2Response = {
    statusCode: 200,
    headers: headers,
    body: `{"method":"${req.method}","path":"${req.path}"}`,
  };
  return Result.ok(serializeApiGwV2Response(resp));
}

startLambda(handler);

startLambda drives Lambda's Runtime API loop: fetch invocation, dispatch to handler, post response (or error), repeat.
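Internally the loop is a plain HTTP exchange with Lambda's documented Runtime API endpoints. The sketch below is illustrative only — the endpoints are real Lambda API paths, but the httpGet/httpPost helpers, env, contextFrom, and serializeError are hypothetical stand-ins, not the package's actual internals:

```
// Illustrative shape of startLambda's loop, not the package source.
function runtimeLoop(handler) {
  // Lambda injects the Runtime API host:port into this env var.
  const api = env("AWS_LAMBDA_RUNTIME_API");
  while (true) {
    // 1. Fetch the next invocation (long-polls until one arrives).
    const inv = httpGet(`http://${api}/2018-06-01/runtime/invocation/next`);
    const requestId = inv.headers["lambda-runtime-aws-request-id"];
    // 2. Dispatch to the user handler.
    const result = handler(inv.body, contextFrom(inv.headers));
    // 3. Post the response (or the error), then loop for the next event.
    if (result.ok) {
      httpPost(`http://${api}/2018-06-01/runtime/invocation/${requestId}/response`, result.value);
    } else {
      httpPost(`http://${api}/2018-06-01/runtime/invocation/${requestId}/error`, serializeError(result.error));
    }
  }
}
```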

3. Build the bootstrap

Cross-compile for Graviton (arm64 — cheapest + fastest on Lambda):

mkdir -p build
msc build index.ms \
  --target=c --os=linux --cpu=arm64 \
  --release --strip \
  --output=build/bootstrap
chmod +x build/bootstrap

4. Deploy with SST

SST v3 is the shortest path — an sst.aws.Function with runtime: "provided.al2023", bundle: "build", handler: "bootstrap", and url: true is enough to get a public Function URL. See the full sst.config.ts in examples/apigw-hello.
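A minimal sst.config.ts along those lines might look like the following sketch — the canonical version lives in examples/apigw-hello; the app and component names here are placeholders:

```ts
/// <reference path="./.sst/platform/config.d.ts" />
export default $config({
  app() {
    return { name: "apigw-hello", home: "aws" };
  },
  async run() {
    const fn = new sst.aws.Function("Hello", {
      runtime: "provided.al2023",
      architecture: "arm64", // matches the --cpu=arm64 build above
      bundle: "build",       // directory containing the bootstrap binary
      handler: "bootstrap",
      url: true,             // public Function URL
    });
    return { url: fn.url };
  },
});
```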

bun sst deploy --stage dev
curl "$URL/hi"

Measured performance

From the apigw-hello example, 2026-04-21, us-east-1, 128 MB, arm64, pulled from CloudWatch REPORT lines:

Metric | Value
Cold start init | 6.05–6.94 ms (mean 6.46 ms, n=4)
Warm invoke | 1.33 ms
Memory used | 14 MB / 128 MB (11%)
Bootstrap size (stripped) | 81 KB

Rust/Go Lambda cold-start init lands in the 10–30 ms range for comparison.

Pattern for other providers

Cloud Run, Cloud Functions gen2, and Azure Functions (custom handler) all work the same way: your binary runs an HTTP server that the platform invokes over localhost. Build with std/http/server, listen on $PORT, deploy the binary in a container or zip. No platform-specific runtime API, no special event parsing — just HTTP in, HTTP out.
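As a sketch, that shape is roughly the following — note that the std/http/server surface assumed here (a serve entry point taking a port and a request handler, plus an env helper) is illustrative, not documented API; only createHeaders and setHeader appear in the Lambda example above:

```
import { serve } from "std/http/server"; // assumed entry point
import { createHeaders, setHeader } from "std/http";

function handle(req) {
  const headers = createHeaders();
  setHeader(headers, "content-type", "text/plain");
  return { statusCode: 200, headers: headers, body: "ok\n" };
}

// Cloud Run / Azure inject $PORT; fall back to 8080 for local runs.
const port = parseInt(env("PORT") ?? "8080");
serve(port, handle);
```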

This is a simpler integration shape than Lambda's Runtime API; the reason there's no reference package yet is just "nobody's needed it enough to ship one." If you build one, let us know.
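For the container platforms, packaging is a one-binary image. A Dockerfile sketch, assuming a static linux binary named build/server produced by msc build (the binary name and any extra needs such as CA certificates are up to your handler):

```
# Build the linux binary with msc first, then copy it into an empty image.
FROM scratch
COPY build/server /server
# Cloud Run injects $PORT; the binary's HTTP server should read it.
ENTRYPOINT ["/server"]
```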

Next steps