Claude Code × GCP Cloud Functions Complete Guide | Rapid Serverless Function Development
Cloud Run Was Overkill — So I Switched to Cloud Functions
I’m Masa, the developer behind claudecode-lab.com.
When I first started using GCP serverless, I considered Cloud Run. Flexible, container-based — but for tasks like “just receive a single webhook” or “run a batch job once at night”, it was clearly over-engineered. Write a Dockerfile, configure Cloud Build, create service accounts… too much overhead for such simple requirements.
That’s when I switched to Cloud Functions (2nd Generation / Gen2). Deploy a single function and you get an HTTPS endpoint instantly — with zero charges when idle. It feels just like AWS Lambda, with seamless integration into the GCP ecosystem (Pub/Sub, Firestore, Cloud Scheduler).
The problem was I kept referencing docs every time: “How do you write the trigger config again?” “What’s the local startup command for Functions Framework?” Claude Code solved all of that. Now over 90% of my Cloud Functions code is Claude Code-generated, and I focus on review and validation.
This article shares every Cloud Functions × Claude Code pattern I actually use in production.
Why Claude Code Is Perfect for GCP Cloud Functions
Cloud Functions Gen2 runs on Cloud Run under the hood, supporting timeouts of up to 60 minutes for HTTP triggers (9 minutes for event triggers), up to 16 GB of memory, and up to 1,000 concurrent requests per instance — a huge leap from Gen1, which served a single request per instance.
Three reasons Claude Code pairs well with it:
- Boilerplate-heavy: Trigger definitions, typed request/response handling, Secret Manager integration — these follow the same patterns every time, and Claude Code excels at pattern completion.
- Long deploy commands: gcloud functions deploy often takes 10+ options. Letting Claude Code manage these as templates prevents omissions.
- Tedious test setup: Functions Framework setup and curl-based request simulation are tricky for newcomers — Claude Code handles both in one shot.
Step 1: HTTP Trigger Function
The most common Cloud Functions use case. An HTTPS endpoint is auto-provisioned, usable as an API or webhook receiver.
Prompt to Claude Code
Create an HTTP trigger Cloud Functions Gen2 function in TypeScript.
- Accept { userId: string, action: string } in the request body
- Validate a Bearer token in the Authorization header
- Write a log entry to Firestore
- Return errors in JSON format
- Use the Functions Framework export format
Generated Code
import { http, HttpFunction } from "@google-cloud/functions-framework";
import { Request, Response } from "express";
import { Firestore } from "@google-cloud/firestore";

const db = new Firestore();
const VALID_TOKEN = process.env.API_SECRET_TOKEN;

// Request body type definition
interface ActionRequest {
  userId: string;
  action: string;
}

// Response type definition
interface ActionResponse {
  success: boolean;
  message: string;
  logId?: string;
}

const handleAction: HttpFunction = async (req: Request, res: Response) => {
  // Set CORS headers
  res.set("Access-Control-Allow-Origin", "*");
  if (req.method === "OPTIONS") {
    res.set("Access-Control-Allow-Methods", "POST");
    res.set("Access-Control-Allow-Headers", "Authorization, Content-Type");
    res.status(204).send("");
    return;
  }

  // Allow POST only
  if (req.method !== "POST") {
    res.status(405).json({ success: false, message: "Method Not Allowed" });
    return;
  }

  // Bearer token validation
  const authHeader = req.headers.authorization ?? "";
  if (!authHeader.startsWith("Bearer ") || authHeader.slice(7) !== VALID_TOKEN) {
    res.status(401).json({ success: false, message: "Unauthorized" });
    return;
  }

  // Body validation
  const body = req.body as Partial<ActionRequest>;
  if (!body.userId || !body.action) {
    res.status(400).json({
      success: false,
      message: "userId and action are required",
    });
    return;
  }

  try {
    // Write log to Firestore
    const logRef = await db.collection("action_logs").add({
      userId: body.userId,
      action: body.action,
      timestamp: new Date(),
      ip: req.ip,
    });
    const response: ActionResponse = {
      success: true,
      message: "Action logged successfully",
      logId: logRef.id,
    };
    res.status(200).json(response);
  } catch (err) {
    console.error("Firestore write error:", err);
    res.status(500).json({ success: false, message: "Internal Server Error" });
  }
};

http("handleAction", handleAction);
package.json
{
  "name": "cloud-functions-sample",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "npx @google-cloud/functions-framework --target=handleAction",
    "dev": "npx @google-cloud/functions-framework --target=handleAction --port=8080"
  },
  "dependencies": {
    "@google-cloud/firestore": "^7.0.0",
    "@google-cloud/functions-framework": "^3.0.0"
  },
  "devDependencies": {
    "@types/express": "^4.17.0",
    "typescript": "^5.0.0"
  }
}
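One file the prompt doesn't cover is tsconfig.json, which the build script needs. A minimal config matching the dist/ layout above (the exact compiler options are my assumption):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}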
Step 2: Pub/Sub Trigger Function (Async Event Processing)
Cloud Pub/Sub is GCP’s async messaging service. Perfect for “send an email after order completion” or “generate a thumbnail after image upload.”
Prompt to Claude Code
Create a Pub/Sub trigger Cloud Functions Gen2 function in TypeScript.
- Topic name: image-uploaded
- Message data contains { bucketName: string, filePath: string } (base64 encoded)
- Fetch file metadata from Cloud Storage and log it
- Throw an exception on failure to trigger Pub/Sub retry
Generated Code
import { cloudEvent, CloudEvent } from "@google-cloud/functions-framework";
import { MessagePublishedData } from "@google/events/cloud/pubsub/v1/MessagePublishedData";
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

interface ImageUploadedMessage {
  bucketName: string;
  filePath: string;
}

cloudEvent<MessagePublishedData>("handleImageUploaded", async (event: CloudEvent<MessagePublishedData>) => {
  const base64Data = event.data?.message?.data;
  if (!base64Data) {
    // No data: skip without retry
    console.warn("No message data received, skipping");
    return;
  }

  // base64 → JSON decode
  const rawJson = Buffer.from(base64Data, "base64").toString("utf-8");
  let payload: ImageUploadedMessage;
  try {
    payload = JSON.parse(rawJson) as ImageUploadedMessage;
  } catch {
    console.error("Invalid JSON in Pub/Sub message:", rawJson);
    return; // Parse errors won't succeed on retry — discard
  }

  const { bucketName, filePath } = payload;
  console.log(`Processing: gs://${bucketName}/${filePath}`);

  try {
    // Fetch metadata from Cloud Storage
    const [metadata] = await storage.bucket(bucketName).file(filePath).getMetadata();
    console.log("File metadata:", {
      name: metadata.name,
      size: metadata.size,
      contentType: metadata.contentType,
      updated: metadata.updated,
    });
    console.log(`Successfully processed: ${filePath}`);
  } catch (err) {
    // Throwing causes Pub/Sub to retry the message
    console.error(`Failed to process ${filePath}:`, err);
    throw err;
  }
});
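To wire the function to its topic at deploy time, I use a template like this (note --retry, which tells Pub/Sub to redeliver when the function throws; that is exactly what the error handling above counts on):

gcloud functions deploy handleImageUploaded \
  --gen2 \
  --runtime=nodejs22 \
  --region=asia-northeast1 \
  --source=. \
  --entry-point=handleImageUploaded \
  --trigger-topic=image-uploaded \
  --retry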
Step 3: Firestore Trigger Function (React to Document Changes)
Execute functions when Firestore documents are created, updated, or deleted. Great for welcome emails on user registration or audit logging.
Generated Code
import { onDocumentWritten, Change, FirestoreEvent } from "firebase-functions/v2/firestore";
import { DocumentSnapshot } from "firebase-admin/firestore";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

interface UserDocument {
  email: string;
  name: string;
  createdAt: FirebaseFirestore.Timestamp;
  plan: "free" | "pro";
}

export const onUserWrite = onDocumentWritten(
  "users/{userId}",
  async (event: FirestoreEvent<Change<DocumentSnapshot> | undefined, { userId: string }>) => {
    const userId = event.params.userId;
    const before = event.data?.before;
    const after = event.data?.after;

    // Create event
    if (!before?.exists && after?.exists) {
      const userData = after.data() as UserDocument;
      await db.collection("email_queue").add({
        to: userData.email,
        template: "welcome",
        data: { name: userData.name },
        createdAt: admin.firestore.FieldValue.serverTimestamp(),
        status: "pending",
      });
      return;
    }

    // Update event
    if (before?.exists && after?.exists) {
      const beforeData = before.data() as UserDocument;
      const afterData = after.data() as UserDocument;
      const changedFields = Object.keys(afterData).filter(
        (key) =>
          JSON.stringify(beforeData[key as keyof UserDocument]) !==
          JSON.stringify(afterData[key as keyof UserDocument])
      );
      if (changedFields.length > 0) {
        await db.collection("audit_logs").add({
          userId,
          changedFields,
          before: beforeData,
          after: afterData,
          timestamp: admin.firestore.FieldValue.serverTimestamp(),
        });
      }
      return;
    }

    // Delete event — archive to separate collection
    if (before?.exists && !after?.exists) {
      const userData = before.data() as UserDocument;
      await db.collection("deleted_users").doc(userId).set({
        ...userData,
        deletedAt: admin.firestore.FieldValue.serverTimestamp(),
      });
    }
  }
);
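Because this example uses the firebase-functions SDK rather than the raw Functions Framework, it deploys through the Firebase CLI instead of gcloud:

# Deploy only this trigger
firebase deploy --only functions:onUserWrite

Also worth noting: the handler only writes to other collections (email_queue, audit_logs, deleted_users), never back to users/{userId}; that is what keeps it from re-triggering itself in an infinite loop.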
Step 4: Cloud Scheduler + Cloud Functions (Scheduled Batch Jobs)
Combine Cloud Scheduler with HTTP trigger functions for “calculate yesterday’s stats every morning at 9 AM” or “sync external API data every hour.”
Deploy Command for Cloud Scheduler
# Create service account for Cloud Scheduler
gcloud iam service-accounts create cloud-scheduler-sa \
  --display-name="Cloud Scheduler Service Account"

# Get the Cloud Functions URL
FUNCTION_URL=$(gcloud functions describe dailySummaryBatch \
  --region=asia-northeast1 \
  --format="value(serviceConfig.uri)")

# Create the scheduled job (daily at 2:00 AM JST = 17:00 UTC)
gcloud scheduler jobs create http daily-summary-job \
  --location=asia-northeast1 \
  --schedule="0 17 * * *" \
  --uri="$FUNCTION_URL" \
  --http-method=POST \
  --oidc-service-account-email=cloud-scheduler-sa@${PROJECT_ID}.iam.gserviceaccount.com \
  --oidc-token-audience="$FUNCTION_URL"
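Two details make this secure: deploy dailySummaryBatch without --allow-unauthenticated, and grant the scheduler's service account permission to invoke it (a sketch for a Gen2 function):

# Allow only the scheduler SA to invoke the function
gcloud functions add-invoker-policy-binding dailySummaryBatch \
  --region=asia-northeast1 \
  --member="serviceAccount:cloud-scheduler-sa@${PROJECT_ID}.iam.gserviceaccount.com"

Cloud Scheduler then attaches an OIDC token to each request, and unauthenticated callers are rejected.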
Step 5: Local Testing (Functions Framework)
Functions Framework lets you run the exact same code locally without deploying — dramatically speeding up your dev cycle.
# Build TypeScript
npm run build

# Start Functions Framework locally
npx @google-cloud/functions-framework --target=handleAction --port=8080

# Send a test HTTP request
curl -X POST http://localhost:8080 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-local-test-token" \
  -d '{"userId": "user-123", "action": "login"}'
# Emulate a Pub/Sub message
MESSAGE_DATA=$(echo -n '{"bucketName":"my-bucket","filePath":"images/test.png"}' | base64)
curl -X POST http://localhost:8080 \
  -H "Content-Type: application/cloudevents+json" \
  -d "{\"specversion\":\"1.0\",\"type\":\"google.cloud.pubsub.topic.v1.messagePublished\",\"source\":\"//pubsub.googleapis.com\",\"id\":\"test-id\",\"data\":{\"message\":{\"data\":\"$MESSAGE_DATA\",\"attributes\":{}}}}"
Step 6: Deployment Automation (gcloud + GitHub Actions)
gcloud functions deploy handleAction \
  --gen2 \
  --runtime=nodejs22 \
  --region=asia-northeast1 \
  --source=. \
  --entry-point=handleAction \
  --trigger-http \
  --allow-unauthenticated \
  --memory=512Mi \
  --timeout=60s \
  --min-instances=0 \
  --max-instances=100 \
  --set-secrets="API_SECRET_TOKEN=api-secret-token:latest" \
  --set-env-vars="NODE_ENV=production"
# .github/workflows/deploy-functions.yml
name: Deploy Cloud Functions
on:
  push:
    branches: [main]
    paths: ["functions/**"]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}
      - uses: google-github-actions/setup-gcloud@v2
      - uses: actions/setup-node@v4
        with:
          node-version: "22"
          cache: "npm"
          cache-dependency-path: functions/package-lock.json
      - name: Build
        working-directory: functions
        run: npm ci && npm run build
      - name: Deploy
        working-directory: functions
        run: |
          gcloud functions deploy handleAction \
            --gen2 --runtime=nodejs22 --region=asia-northeast1 \
            --source=. --entry-point=handleAction \
            --trigger-http --allow-unauthenticated \
            --memory=512Mi --timeout=60s \
            --set-secrets="API_SECRET_TOKEN=api-secret-token:latest" \
            --project=${{ secrets.GCP_PROJECT_ID }}
4 Pitfalls to Avoid
Pitfall 1: Cold Start Latency
Fix: Set --min-instances=1 for user-facing functions. I keep min-instances=0 for batch jobs and min-instances=1 for API endpoints, which cut my P99 response time from 4.2s to 0.8s.
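In the Step 6 deploy command, that is a one-flag change (a sketch; the remaining flags stay as before, and unspecified settings are retained on redeploy):

# Keep one warm instance for the user-facing API
gcloud functions deploy handleAction \
  --gen2 --region=asia-northeast1 --source=. --trigger-http \
  --min-instances=1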
Pitfall 2: Exceeding the 9-Minute Timeout
Gen2's maximum timeout for event-driven functions is 540 seconds (HTTP functions can be configured up to 60 minutes). For workloads that still don't fit, split the job into chunks via Pub/Sub and process them in parallel across multiple function instances, as sketched below.
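The fan-out side is small. Here is a sketch of the orchestrator (topic name and chunk size are placeholders): slice the work, publish one message per chunk, and let Step 2-style workers each finish well inside the budget.

import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();
const topic = pubsub.topic("batch-chunks"); // hypothetical topic name

// Split a long-running job into chunks and publish one Pub/Sub message each.
export async function fanOut(itemIds: string[], chunkSize = 100): Promise<void> {
  const publishes: Promise<string>[] = [];
  for (let i = 0; i < itemIds.length; i += chunkSize) {
    const chunk = itemIds.slice(i, i + chunkSize);
    // publishMessage({ json }) serializes the payload for us
    publishes.push(topic.publishMessage({ json: { itemIds: chunk } }));
  }
  await Promise.all(publishes);
  console.log(`Published ${publishes.length} chunk messages`);
}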
Pitfall 3: Missing Secret Manager Permissions
By default, a Gen2 function runs as the Compute Engine default service account, which cannot read secrets until it has the accessor role:
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
Pitfall 4: Out-of-Memory Crashes
Default memory is 256 MB. For image processing or large JSON payloads:
gcloud functions deploy processImage \
  --memory=2Gi \
  --cpu=2  # Gen2 only
Summary
| Trigger | Use Case | Key Concern |
|---|---|---|
| HTTP | Webhooks, API endpoints | Auth, CORS, timeout |
| Pub/Sub | Async event processing | base64 decode, retry design |
| Firestore | React to data changes | Avoid infinite loops |
| Cloud Scheduler | Scheduled batch jobs | OIDC validation, timezone |