Runcell: AI Coding Assistants in Isolated Workspaces

What Problem Does This Solve?

You want to use AI coding assistants to help write code, but:

  • You don't want AI touching your local machine or production systems
  • You need different AI providers for different tasks (fast inference for some, higher-quality models for others)
  • You want AI to work in clean, reproducible environments
  • Setting up AI tools manually is tedious and error-prone

Runcell solves this by letting you ask Claude (in your IDE) to spin up isolated workspaces where AI assistants are ready to use immediately.

How It Works (The Simple Version)

You chat with Claude Code in VS Code. Claude has access to Runcell's MCP tools.

You: "Create a Go workspace and use Cline to build me a REST API with user CRUD"

Claude: I'll create an isolated workspace with Cline pre-configured, then 
        delegate the task to the inner AI.
        
        [Creates workspace with Cerebras-powered Cline]
        [Runs: cline -y "Create a REST API with user CRUD endpoints"]
        [Reports results back to you]

You: "Add authentication to it"

Claude: [Runs: cline -y "Add JWT authentication to the API"]
        Done! I've added JWT auth. Here's what changed...

You never leave your IDE. You never configure API keys manually. You never touch a terminal.
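
From inside the workspace, the delegation Claude performs is just a non-interactive cline run. Here is a minimal sketch of that step in Python, assuming cline is on the pod's PATH and accepts a prompt after -y exactly as in the transcript above; the wrapper function itself is illustrative, not Runcell's code:

import subprocess

def run_inner_ai(prompt: str) -> str:
    # cline -y runs the inner AI non-interactively with the given prompt,
    # as in the transcript above; stdout carries its report.
    result = subprocess.run(
        ["cline", "-y", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(run_inner_ai("Add JWT authentication to the API"))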

Real-World Example

The Old Way (Manual Setup)

1. Open terminal
2. Create project directory
3. Install cline globally
4. Run: cline auth --provider openai --apikey sk-xxx --modelid gpt-4 --baseurl https://...
5. Hope you remembered all the flags
6. Hope you didn't expose your API key in shell history
7. Start working, realize you need a different model
8. Reconfigure everything
9. Repeat for every new project

The Runcell Way (In Your IDE)

VS Code with Claude Code

You: Create a Python workspace with Cline configured for fast inference,
     then build me a CLI tool that converts CSV to JSON.

Claude: I'll set that up for you.

        ✓ Created workspace "csv-tool" with Python 3.12
        ✓ Cline configured with Cerebras (qwen-3-32b)
        ✓ Running inner AI to build the CLI tool...
        
        Done! The inner AI created:
        - cli.py (main entry point with argparse)
        - converter.py (CSV parsing and JSON output)
        - tests/test_converter.py (unit tests)
        
        All tests pass. Want me to show you the code or make changes?
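
The chat only lists filenames; the generated code itself is not shown. A hypothetical converter.py in the same spirit, using only the standard library (illustrative, not the inner AI's actual output), might look like:

import csv
import json
from pathlib import Path

def csv_to_json(csv_path: str, indent: int | None = None) -> str:
    # Read rows as dictionaries keyed by the CSV header line.
    with Path(csv_path).open(newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # An empty file simply becomes an empty JSON list.
    return json.dumps(rows, indent=indent, ensure_ascii=False)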

What You're Actually Seeing

When you ask Claude to create a workspace and run an inner AI:

  1. Claude receives your request in VS Code
  2. Claude calls Runcell MCP tools to create an isolated Kubernetes pod
  3. The pod starts with AI pre-configured (Cline, Claude Code, etc.)
  4. Claude runs the inner AI with your prompt
  5. Claude reports results back to you in the chat
┌─────────────────────────────────────────────────────────────┐
│  VS Code                                                    │
│  ┌───────────────────────────────────────────────────────┐  │
│  │  Claude Code Chat                                     │  │
│  │                                                       │  │
│  │  You: Build me a web scraper in Go                    │  │
│  │                                                       │  │
│  │  Claude: I'll create a workspace and have the inner   │  │
│  │          AI build that for you.                       │  │
│  │                                                       │  │
│  │          ✓ Workspace created (ws-a1b2c3)              │  │
│  │          ✓ Cline configured with Cerebras             │  │
│  │          ⟳ Inner AI working...                        │  │
│  │                                                       │  │
│  │          Done! Created:                               │  │
│  │          • main.go - CLI entry point                  │  │
│  │          • scraper/scraper.go - Core scraping logic   │  │
│  │          • scraper/parser.go - HTML parsing           │  │
│  │                                                       │  │
│  │  You: Add rate limiting                               │  │
│  │                                                       │  │
│  │  Claude: ⟳ Asking inner AI to add rate limiting...    │  │
│  │                                                       │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│  Kubernetes (invisible to you)                              │
│                                                             │
│  Pod: ws-a1b2c3                                             │
│  ├── Go 1.22 runtime                                        │
│  ├── Cline (pre-configured with Cerebras API)               │
│  └── /workspace (isolated filesystem)                       │
│       ├── main.go                                           │
│       └── scraper/                                          │
│           ├── scraper.go                                    │
│           └── parser.go                                     │
└─────────────────────────────────────────────────────────────┘
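
Expressed as MCP tool calls, the five steps above amount to a short sequence. The sketch below is only a rough outline; the tool names (create_workspace, configure_inner_ai, exec_command) and their parameters are placeholders for illustration, not Runcell's actual MCP schema:

# Rough outline of the tool calls Claude chains for "Build me a web scraper
# in Go". All tool names and arguments here are illustrative placeholders.
def build_scraper(mcp):
    # 1. Create an isolated workspace pod with the requested runtime.
    ws = mcp.call("create_workspace", name="ws-a1b2c3", runtime="go-1.22")
    # 2. Configure the inner AI using credentials from Runcell's store.
    mcp.call("configure_inner_ai", workspace=ws["id"], tool="cline",
             provider="cerebras")
    # 3. Delegate the coding task to the inner AI.
    result = mcp.call("exec_command", workspace=ws["id"],
                      command='cline -y "Build a web scraper CLI in Go"')
    # 4. Report the created files back in chat.
    return result["output"]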

Key Benefits

1. Stay in Your IDE

No terminal. No manual configuration. Just chat with Claude.

2. Isolation

Each workspace is a separate container. AI can't access your local files, credentials, or other projects.

3. Pre-Configuration

Claude sets up the inner AI automatically. You just describe what you want.

4. Credential Safety

API keys are managed by Runcell and injected securely. Never exposed in chat or shell history.

5. Provider Flexibility

Claude picks the right AI for the job, or you can specify:

  • "Use fast inference" → Cerebras
  • "Use Claude for this" → Anthropic direct
  • "Use the cheap option" → OpenRouter

Example Conversations

Quick Prototyping

You: Create a workspace and build me a REST API in Go with 
     health check and user CRUD. Use fast inference.

Claude: [Creates workspace, configures Cline with Cerebras, runs task]
        
        Done in 45 seconds! Here's what was created...

Code Review

You: In the csv-tool workspace, have the inner AI review 
     converter.py for potential bugs and edge cases.

Claude: [Runs: cline -y "Review converter.py for bugs and edge cases"]
        
        The inner AI found 3 issues:
        1. No handling for empty CSV files
        2. Unicode characters may cause issues
        3. Memory usage could be high for large files
        
        Want me to have it fix these?

Iterative Development

You: Add support for YAML output too

Claude: [Runs: cline -y "Add YAML output support to the converter"]
        
        Added! New files:
        - formats/yaml.py
        Updated:
        - cli.py (new --format yaml flag)
        - tests/test_converter.py (YAML tests)
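
As with JSON, the YAML path the inner AI reports is a thin formatting layer. A hypothetical formats/yaml.py in that spirit, assuming PyYAML is available in the workspace (illustrative, not the real generated file):

# formats/yaml.py - hypothetical sketch of the YAML output path.
import yaml

def rows_to_yaml(rows: list[dict]) -> str:
    # Block style for readability; keep the original CSV column order.
    return yaml.safe_dump(rows, default_flow_style=False, sort_keys=False)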

Debugging

You: The JSON output has extra whitespace. Fix it.

Claude: [Runs: cline -y "Fix: JSON output has extra whitespace"]
        
        Fixed! The issue was in converter.py line 42: it was
        using indent=4 instead of indent=None for compact output.
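
The reported fix matches how Python's json module behaves: indent=None (the default) emits a single compact line, while indent=4 pretty-prints across multiple lines. For example:

import json

data = [{"name": "Ada", "id": 1}]
json.dumps(data, indent=4)     # multi-line, pretty-printed
json.dumps(data, indent=None)  # one line: [{"name": "Ada", "id": 1}]
# For the tightest output, separators=(",", ":") also drops the spaces.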

What AI Tools Are Available?

The workspace images come with these pre-installed:

Tool          What it does
Cline         General-purpose AI coding assistant (works with any provider)
Claude Code   Anthropic's official assistant
OpenCode      Open-source alternative
Auggie        Augment Code's assistant

The outer Claude picks which one to use based on your request, or you can name one explicitly.

FAQ

Q: Do I need to know Kubernetes? A: No. You just chat with Claude. Kubernetes is invisible to you.

Q: Do I need to configure API keys? A: Once, in Runcell's credential store. After that, Claude handles injection automatically.

Q: Can I see what the inner AI is doing? A: Yes, Claude shows you the output and what files were created/changed.

Q: What if the inner AI makes a mistake? A: Just tell Claude to fix it. Or delete the workspace and start fresh - it's isolated.

Q: Is my code safe? A: Yes. Each workspace is isolated. The inner AI only sees /workspace in its container.

Q: Can I download the code? A: Yes, ask Claude to copy files from the workspace, or use the file browser tools.


Runcell - Let Claude orchestrate AI coding assistants in isolated workspaces
