Background (why this matters)
I built QuantGPT – a Next.js App Router site on Vercel that answers ~4k monthly questions about a closed-source trading library.
The next milestone is cloud-hosted coding agents that:
- Generate Python back-test code
- Execute it inside an isolated VM
- Return charts to the user
The sandboxes provided by `@vercel/v0-sdk` (45-minute Firecracker micro-VMs, Active CPU pricing) are almost perfect, but each run needs:
- TA-Lib & SciPy C-extensions
- A private `vectorbtpro` wheel
- Occasionally, a lightweight Redis side-car for live progress updates
The gap
Today I can solve this locally with a `docker-compose.yml` that pulls a pre-baked image and spins up two services (sketched below), or by tunneling into the user’s existing local virtual environment over a WebSocket. There’s growing demand for auto-scaled, pro-rata (Active CPU) compute, where each user session gets its own sandbox on demand and you only pay for the CPU you burn.
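A minimal sketch of that local setup, assuming a pre-baked image that already bundles the TA-Lib/SciPy C-extensions and the private `vectorbtpro` wheel (the image tag, command, and ports are illustrative and mirror the interface sketch further down):

```yaml
# docker-compose.yml (illustrative local equivalent)
services:
  worker:
    # pre-baked image bundling TA-Lib, SciPy, and the private vectorbtpro wheel
    image: ghcr.io/quantgpt/agent-base:latest
    command: ["python", "agent.py"]
    ports:
      - "7860:7860"
    depends_on:
      - redis
  redis:
    # lightweight side-car for live progress updates
    image: redis:7-alpine
    ports:
      - "6379:6379"
```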
Meeting that demand isn’t practical today without spinning up and managing cloud VMs ourselves:
- Self-hosting or tunneling shifts the burden to the user and expands the security surface area.
- A first-class “bring your own pre-baked image or compose manifest” would let v0-sdk sandboxes scale up and down cleanly while keeping everything inside the Vercel ecosystem.
What would help
- Reference a custom OCI image (Dockerfile-style) or
- Provide a compose-like manifest so multiple short-lived processes launch together inside the same micro-VM managed by v0-sdk.
Why it helps more than just us
Any AI coding agent, notebook playground, or code-grading service hits the same cold-start friction. A first-class solution keeps us all on Vercel instead of off-loading to GCP / AWS for compute.
Rough interface sketch
```yaml
# sandbox.yaml
image: ghcr.io/quantgpt/agent-base:latest
services:
  worker:
    cmd: ["python", "agent.py"]
    ports: [7860]
  redis:
    image: docker.io/library/redis:7-alpine
    ports: [6379]
timeout: 45m
vcpus: 4
```
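If only the first option (referencing a custom image, no side-cars) were supported, the same kind of manifest could presumably collapse to something like this; all field names here are hypothetical and simply mirror the sketch above:

```yaml
# sandbox.yaml (image-only variant; hypothetical fields)
image: ghcr.io/quantgpt/agent-base:latest
cmd: ["python", "agent.py"]
timeout: 45m
vcpus: 4
```

Either shape would cover the TA-Lib / SciPy / vectorbtpro case; the compose-like form is only needed when a side-car such as Redis has to launch inside the same micro-VM.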