nicool.ai Documentation

Sandboxes and snapshots

Technical appendix: how nicoolAI separates mounted files, schedule work, and future repo work across just-bash and Vercel Sandbox.

This page is part of the technical appendix.

It explains how nicoolAI separates durable lightweight work from future repo-capable execution, and why snapshots matter for that design.

The runtime split

nicoolAI is designed around one model-facing bash tool, but not one execution backend.

The current and planned path split is:

  • /schedules for lightweight durable schedule work in just-bash
  • /gdrive for mounted Google Drive content exposed through just-bash
  • /workspace for repo and code work in Vercel Sandbox

The model should choose paths, not infrastructure.
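The path-to-backend rule above can be sketched as a small routing table. This is a minimal illustration, not nicoolAI's real code: the names `BACKENDS` and `route_backend` are assumptions introduced here.

```python
# Hypothetical sketch: route a model-issued path to an execution backend.
# BACKENDS and route_backend are illustrative names, not the real API.
from pathlib import PurePosixPath

# One model-facing bash tool, two backends behind it.
BACKENDS = {
    "/schedules": "just-bash",       # durable lightweight schedule work
    "/gdrive": "just-bash",          # mounted Google Drive content
    "/workspace": "vercel-sandbox",  # repo and code work
}

def route_backend(path: str) -> str:
    """Pick the backend from the path prefix; the model never picks directly."""
    p = PurePosixPath(path)
    for prefix, backend in BACKENDS.items():
        root = PurePosixPath(prefix)
        if p == root or root in p.parents:
            return backend
    raise PermissionError(f"path outside mounted roots: {path}")

print(route_backend("/gdrive/team-drive/notes.txt"))  # just-bash
print(route_backend("/workspace/repo/src/main.py"))   # vercel-sandbox
```

Paths outside the three mounted roots raise rather than silently falling through, which matches the deny-by-default posture described later for Drive access.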

Why this split exists

The product needs two different kinds of execution:

  • constrained, durable, low-risk file work
  • real repo-capable execution for code and GitHub workflows

Those jobs want different safety and lifecycle rules, so the runtime keeps them separate while preserving one consistent tool surface.

Google Drive mounts

Google Drive content is not dropped into the runtime as arbitrary ambient filesystem state.

Instead, the runtime resolves the user's access, lists active Google Drive connections, and materializes supported files into mounted paths under /gdrive/<connection-name>.

The practical outcome is:

  • Docs and Slides are exposed as plain text
  • metadata sidecars are available when needed
  • unsupported files fail explicitly
  • missing or ambiguous access defaults to deny

This keeps the live connector legible and makes Google Drive usable with normal bash-style file inspection without widening access silently.
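The mount rules above can be condensed into one hedged sketch. `SUPPORTED` and `mount_path` are illustrative names assumed for this example, not the real connector code.

```python
# Hypothetical sketch of the mount rules: supported types materialize as
# plain text, unsupported types fail explicitly, missing access denies.
SUPPORTED = {
    "application/vnd.google-apps.document": ".txt",      # Docs -> plain text
    "application/vnd.google-apps.presentation": ".txt",  # Slides -> plain text
}

def mount_path(connection: str, name: str, mime: str, has_access: bool) -> str:
    if not has_access:
        # missing or ambiguous access defaults to deny
        raise PermissionError(f"access denied: {name}")
    ext = SUPPORTED.get(mime)
    if ext is None:
        # unsupported files fail explicitly instead of mounting garbage
        raise ValueError(f"unsupported file type: {mime}")
    return f"/gdrive/{connection}/{name}{ext}"

print(mount_path("team-drive", "roadmap",
                 "application/vnd.google-apps.document", True))
# -> /gdrive/team-drive/roadmap.txt
```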

just-bash for durable lightweight work

just-bash is the constrained runtime used for schedule files and mounted drive content.

Important properties:

  • each execution is isolated
  • cwd does not persist across calls
  • command access is constrained
  • schedule changes are tracked through snapshot-style file comparison

That makes it a good fit for low-risk structured file work, not for general repo execution.
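Snapshot-style file comparison can be sketched as hashing every file before and after a call and diffing the two maps. The helper names here are assumptions for illustration.

```python
# Illustrative sketch of snapshot-style change tracking for schedule files:
# hash everything under a root, then diff two snapshots. Names are assumed.
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map relative path -> content hash for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def diff(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """Report added, removed, and changed files between two snapshots."""
    return {
        "added": sorted(after.keys() - before.keys()),
        "removed": sorted(before.keys() - after.keys()),
        "changed": sorted(k for k in before.keys() & after.keys()
                          if before[k] != after[k]),
    }
```

Because each execution is isolated and cwd does not persist, comparing before/after snapshots is the natural way to observe what a call actually changed.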

Vercel Sandbox for /workspace

The committed sandbox plans use Vercel Sandbox for repo-capable work under /workspace.

The design intent is:

  • real shell execution for repo tasks
  • lease-based lifecycle instead of permanent persistence
  • optional restore from snapshots
  • general outbound network access, with GitHub header injection kept server-side
  • GitHub auth attached by the server, not exposed to the model

Snapshots

Snapshots matter because repo work is ephemeral by design.

The technical goal is not to promise permanent workspace state. It is to make work resumable, restorable, and rebuildable when that helps.

In the current planning model, snapshots are there to support:

  • restoring a workspace after sandbox timeout or expiry
  • preserving useful repo state without keeping every sandbox alive indefinitely
  • keeping the product responsive without pretending sandboxes are immortal

GitHub auth attachment

The committed plan for sandbox auth is explicit:

  • the real GitHub token stays in trusted server-side code
  • sandbox egress gets the real auth header injected only for approved GitHub domains
  • the sandbox itself sees only a dummy token environment variable, such as GH_TOKEN=dummy
  • the model never receives the real credential in prompt text, env, files, or command args

That is the intended default for GitHub API-style actions inside /workspace.

What is intentionally not solved yet

The plans are also explicit about the current boundary:

  • raw git push is not the primary auth target yet
  • private repo bootstrap should stay server-controlled
  • the model should not choose different bash backends directly

Those constraints are not incidental. They are part of keeping the future GitHub workflow safe enough to trust.