MAISON CODE .

Serverless Functions: The Economics of Scale-to-Zero

Stop paying for idle CPUs. A technical guide to AWS Lambda, Vercel Edge Functions, and Event-Driven Architecture.

Alex B.

In the traditional hosting model (EC2, DigitalOcean), you rent a computer. You pay $50/month. If nobody visits your site at 3:00 AM, the computer sits idle. You still pay. If 100,000 people visit at 10:00 AM, the computer crashes. You lose revenue.

This is the Capacity Planning dilemma. You have to over-provision for the peaks, which means you are over-paying 99% of the time.

Serverless (FaaS) flips the model. You do not rent a computer. You upload code. You pay per invocation.

  • 0 visits = $0.00.
  • 1 million visits = AWS spins up as many concurrent functions as needed (often thousands). You pay only for the compute time those 1 million invocations actually use.
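As a back-of-the-envelope sketch, assuming the published us-east-1 Lambda rates at the time of writing ($0.20 per million requests, roughly $0.0000167 per GB-second; always check the current AWS pricing page), the monthly bill can be estimated like this:

```typescript
// Rough Lambda cost estimator. Rates are illustrative (us-east-1 at the
// time of writing); check the current AWS pricing page before relying on them.
const USD_PER_REQUEST = 0.20 / 1_000_000;
const USD_PER_GB_SECOND = 0.0000166667;

function lambdaMonthlyUsd(
  invocations: number,
  avgDurationMs: number,
  memoryMb: number
): number {
  // Billable compute = invocations * duration (s) * allocated memory (GB)
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  return invocations * USD_PER_REQUEST + gbSeconds * USD_PER_GB_SECOND;
}

// 1M invocations at 100 ms / 128 MB comes out around $0.41.
// Zero invocations = $0.00: the scale-to-zero promise, in code.
```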

At Maison Code Paris, we use Serverless not just for cost, but for operational simplicity. We don’t patch Linux kernels. We don’t rotate SSH keys. We focus on Logic.

Why Maison Code Discusses This

We treat infrastructure as a liability, not an asset. Every server we manage is a server that can break. Our “Scale-to-Zero” philosophy aligns with our clients’ P&L:

  • Efficiency: We migrated a client’s legacy cron jobs from a dedicated $100/mo EC2 to Lambda. Cost dropped to $0.12/mo.
  • Resilience: During Black Friday, our Serverless endpoints scaled to 5,000 concurrent executions without a single timeout.
  • Focus: Our engineers spend 100% of their time on Business Logic (Pricing, Cart, Checkout), not on Docker orchestration.

Architecture: Event-Driven Patterns

Serverless is best when it is asynchronous. While you can run a standard API (GET /users), the real power is in event-driven processing.

The Buffer Pattern (SQS + Lambda)

Scenario: You run a Flash Sale. 10,000 users check out in 1 second. If you connect Lambda directly to your ERP (SAP/NetSuite), you will DDoS your own ERP. It can’t handle 10k simultaneous connections.

Solution:

  1. API Gateway: Accepts the HTTP Request. Returns “202 Accepted”.
  2. SQS (Simple Queue Service): Stores the request in a queue. It can hold millions of messages.
  3. Lambda: Polls the queue. It processes messages at a controlled rate (e.g., 50 at a time).
  4. Result: Your ERP receives a steady stream of orders, not a tsunami. The user gets a fast response.
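A minimal consumer sketch for step 3 (the types are hand-rolled rather than pulled from @types/aws-lambda, and processOrder is a hypothetical stand-in for the real ERP call). It uses Lambda's partial-batch-response convention, so only the failed messages return to the queue:

```typescript
// Sketch of an SQS-triggered Lambda consumer with partial-batch reporting.
// Hand-rolled minimal types; in a real project use @types/aws-lambda.
type SQSRecord = { messageId: string; body: string };
type SQSEvent = { Records: SQSRecord[] };

// Hypothetical stand-in for the throttled call into the ERP.
async function processOrder(order: { id: string }): Promise<void> {
  // e.g. POST the order to SAP/NetSuite at a rate it can handle.
}

export async function handler(event: SQSEvent) {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of event.Records) {
    try {
      await processOrder(JSON.parse(record.body));
    } catch {
      // Only failed messages go back to the queue (and eventually the DLQ);
      // successfully processed ones are deleted by Lambda.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
}
```

Returning batchItemFailures requires enabling "Report batch item failures" on the event source mapping; without it, one bad message forces the whole batch to retry.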

Failure Handling: The Dead Letter Queue (DLQ)

In a plain Node.js server, if a request handler crashes, that request is simply lost. In Serverless, we configure Retries. AWS Lambda automatically retries a failed async invocation 2 times. If it still fails (e.g., a bug in the code), the event is moved to a Dead Letter Queue (DLQ), provided you have configured one. Engineers can inspect the DLQ, fix the bug, and “Redrive” the messages. No data is lost.
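The wiring itself is just a RedrivePolicy attribute on the source queue. A sketch (the ARN is a placeholder; maxReceiveCount is how many receives a message is allowed before it is dead-lettered):

```typescript
// Sketch: SQS attributes that attach a DLQ to a source queue.
// Pass `attributes` to CreateQueue or SetQueueAttributes.
const redrivePolicy = {
  deadLetterTargetArn: "arn:aws:sqs:eu-west-1:123456789012:orders-dlq", // placeholder ARN
  maxReceiveCount: "3", // receives allowed before dead-lettering
};

const attributes = { RedrivePolicy: JSON.stringify(redrivePolicy) };
```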

The Cold Start Problem

Resources are not free. To save money, AWS shuts down your container if it hasn’t been used for ~15 minutes. The next request triggers a Cold Start.

  1. AWS allocates a microVM (Firecracker).
  2. Downloads your code.
  3. Starts Node.js.
  4. Runs your handler.

Latency: ~300ms - 1s (Node.js). ~3s (Java). Impact: Fine for background jobs. Bad for checkout APIs.
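One practical mitigation: anything initialised at module scope survives across warm invocations, so only the first request in a container pays the cost. A sketch (createExpensiveClient is a hypothetical stand-in for, say, an SDK or DB client):

```typescript
// Module scope runs once per cold start; warm invocations reuse it.
// Cache *clients* here, never business state (see the State section).
let cachedClient: { query: (q: string) => string } | null = null;
let initCount = 0; // for illustration: counts cold-start initialisations

function createExpensiveClient() {
  initCount++; // in real life: open connections, parse config, warm caches
  return { query: (q: string) => `result for ${q}` };
}

export async function handler(): Promise<string> {
  cachedClient ??= createExpensiveClient(); // only the first (cold) call initialises
  return cachedClient.query("SELECT 1");
}
```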

The Solution: Vercel Edge Runtime (V8 Isolates)

Vercel (and Cloudflare Workers) introduced a new runtime. Instead of booting a full Node.js container (VM + OS + Node), they run on V8 Isolates. This is the same engine that runs JavaScript in Chrome.

  • Cold Start: ~0ms.
  • Limits: You cannot use Node.js APIs like fs, child_process, or native binaries.
  • Use Case: Middleware, Auth redirect, A/B Testing buckets, Geo-routing.
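A sketch of the geo-routing case: in a Next.js middleware.ts you would read the country that Vercel attaches to the request at the edge and rewrite to a locale path. The routing decision itself is a pure function (the locale paths are illustrative):

```typescript
// Pure routing decision for an Edge middleware sketch.
// In middleware.ts you would call this with the request's geo country
// and then rewrite/redirect to the returned path.
function localePathFor(country: string | undefined): string {
  switch (country) {
    case "FR":
      return "/fr";
    case "DE":
      return "/de";
    default:
      return "/en"; // fallback when geo data is unavailable
  }
}
```

Keeping the decision pure means it runs identically in the 0ms-cold-start Edge runtime and in a unit test.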

The Database Trap: Connection Pooling

This is where 90% of beginners fail. A standard Postgres library (pg) opens a TCP connection. In a monolith, you open a handful of connections and share them through a pool. In Serverless, every function instance is an isolated container. If you have 1,000 concurrent users, you open 1,000 DB connections. Postgres runs out of connection slots and starts rejecting clients: FATAL: remaining connection slots are reserved.

Solution 1: Connection Pooling (PgBouncer). A middleware service sits in front of Postgres: it accepts 1,000 client connections but maps them onto ~10 real DB connections. AWS RDS Proxy offers this as a managed service, and DigitalOcean bundles PgBouncer with its managed Postgres.

Solution 2: HTTP-based Databases New database providers (Neon, PlanetScale, Supabase) offer HTTP APIs. You don’t open a TCP socket. You fetch('https://db.neon.tech/query'). HTTP is stateless. It scales perfectly with Serverless.

// Using Neon (Serverless Postgres)
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);

export async function handler(event) {
  const result = await sql`SELECT * FROM users`;
  return { body: JSON.stringify(result) };
}

State Management: There is no Memory

In a normal server, you can do: let requestCount = 0; app.get('/', () => requestCount++); In Serverless, requestCount is meaningless: each container holds its own copy, and containers are recycled without warning. Global variables cannot be trusted to persist between invocations.

Rule: All state must be external.

  • Session Data -> Redis.
  • User Data -> Database.
  • File Uploads -> S3 / Blob Storage. Do not write to /tmp (it vanishes). Do not use global variables.
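The rule in miniature: the function talks to a store interface instead of a module variable. In production that interface would be backed by a Redis client (INCR is atomic across all concurrent containers); CounterStore here is a hypothetical abstraction so the logic stays testable:

```typescript
// State lives outside the function. `CounterStore` abstracts the external
// store; in production this would wrap Redis INCR, which is atomic and
// shared by every concurrent container.
interface CounterStore {
  incr(key: string): Promise<number>;
}

async function countRequest(store: CounterStore): Promise<number> {
  // No module-level counter: every container sees the same external value.
  return store.incr("requests");
}
```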

Cost Analysis: The Tipping Point

Serverless is not always cheaper. It has a “Markup” on raw compute (~2x cost per CPU cycle compared to EC2). The savings come from 0% Idle Time.

  • Low Traffic / Spiky Traffic: Serverless is often 90%+ cheaper.
  • Constant High Load: If you have a process running 24/7 (like a WebSocket server or heavy data crunching), EC2 is cheaper. We perform TCO (Total Cost of Ownership) audits for clients. Often, the operational cost of managing EC2 (patching, security) makes Serverless the winner even at higher volumes.
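The "constant load" side of the tipping point can be sketched numerically (illustrative rate, compute only, ignoring request fees and the ops cost the audit also weighs):

```typescript
// Cost of keeping one Lambda-equivalent workload busy 24/7 for a month.
// Rate is illustrative; compare against a flat VM price at similar memory.
const USD_PER_GB_SECOND = 0.0000166667;
const SECONDS_PER_MONTH = 730 * 3600; // ~730 hours in an average month

function alwaysOnLambdaUsd(memoryGb: number): number {
  return SECONDS_PER_MONTH * memoryGb * USD_PER_GB_SECOND;
}

// A 24/7 workload at 4 GB lands around $175/month on Lambda, versus a
// flat ~$50 VM with similar memory: constant load favours the server.
```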

Code: Vercel API Route (Best Practice)

We structure our functions to be testable, separating Logic from Network.

// app/api/checkout/route.ts (Next.js App Router)
import { NextResponse } from 'next/server';
import { createCheckout } from '@/lib/checkout'; // Business Logic

// The Handler (Network Layer)
export async function POST(request: Request) {
  try {
    const body = await request.json();
    
    // Validate input (a Zod schema would slot in here)
    if (!body.cartId) {
      return NextResponse.json({ error: 'Missing Cart ID' }, { status: 400 });
    }

    // Call Logic
    const url = await createCheckout(body.cartId);

    return NextResponse.json({ url });
  } catch (error) {
    console.error(error); // Logs to CloudWatch / Datadog
    return NextResponse.json({ error: 'Internal Error' }, { status: 500 });
  }
}

Observability: Distributed Tracing

In a monolith, you grep server.log. In Serverless, a single user request hits API Gateway -> Lambda A -> SQS -> Lambda B -> DynamoDB. If it fails, where did it fail? You cannot grep logs across 5 services. We implement Distributed Tracing (OpenTelemetry / AWS X-Ray). We pass a trace_id header through every service. This generates a “Flame Graph” showing exactly where the latency spike occurred (e.g., “Lambda B took 3s because DynamoDB was throttling”).
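The principle in miniature (real setups use the W3C traceparent header via OpenTelemetry; x-trace-id here is a simplified stand-in): every outbound call reuses the incoming trace id, and the first service in the chain mints one:

```typescript
import { randomUUID } from "node:crypto";

// Propagate one trace id through every hop: reuse the incoming id if it
// exists, otherwise mint a fresh one (done by the edge of the system).
function withTraceId(headers: Record<string, string>): Record<string, string> {
  return { ...headers, "x-trace-id": headers["x-trace-id"] ?? randomUUID() };
}
```

Attach the returned headers to every fetch/SQS message attribute, log the id in every service, and the flame graph becomes a query instead of a grep.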

The Vendor Lock-in Myth

Skeptics often say: “Serverless locks you into AWS”. True. But Docker locks you into the Linux kernel. React locks you into the Virtual DOM. Lock-in is inevitable. The question is: “Is the lock-in worth the velocity?” We can rewrite a Lambda function in Go/Rust in a day. The cost of migrating away from Serverless is low because the units of code are small. The cost of not using Serverless (managing EC2 fleets) is high and ongoing.

Security: Least Privilege (IAM)

One Lambda = One Role. Do not give your Lambda AdministratorAccess. If that Lambda is compromised (e.g., via a malicious dependency), the attacker owns your account. Give it s3:GetObject on bucket-a ONLY. This granular security model is superior to a monolith where the entire server has root access to the DB. Tools like SST (Serverless Stack) automate this policy generation: bucket.grantRead(lambda) generates the strict IAM policy automatically.
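The strict policy that bucket.grantRead(lambda) produces looks roughly like this (bucket-a stands in for the real bucket name; the generated policy typically also includes list permissions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-a/*"
    }
  ]
}
```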

Frameworks: Why we use SST

Raw CloudFormation is painful. Terraform is verbose. We use SST (Serverless Stack). It allows us to define infrastructure in TypeScript. It enables “Live Lambda Development” (Local environment proxies to AWS). You set breakpoints in VS Code, hit an endpoint, and it pauses inside the Lambda running on AWS. It is the only way to develop Serverless sanely.

Conclusion

Serverless Functions are the glue of the modern internet. They allow frontend engineers to become “Full Stack” without needing a degree in Linux administration. They scale to thousands of concurrent executions. They cost nothing when idle. But they require a disciplined “Stateless” mindset.


Paying for idle uptime?

Are you running a $100 server for a job that takes 5 seconds a day?

Hire our Architects.