
@devrim
devrim / deal_documents.md
Created May 12, 2026 19:47
Lucky Robots / Haptic Labs - Full Deal Package (Docs + Gap Analysis)

Lucky Robots & Haptic Labs — Deal Documents

Effective Date: May 14, 2026

⚠️ CRITICAL: 25 gaps identified in gap_analysis.md — do not sign until critical issues are fixed.


Design Partner Agreement (Amended)

@devrim
devrim / gap_analysis.md
Created May 12, 2026 19:45
Lucky Robots / Haptic Labs Deal Gap Analysis

Deal Gap Analysis: Lucky Robots / Haptic Labs

Critical Issues (Block Signing Until Fixed)

1. Strike Price Not Fixed at Signing

Issue: Strike price is "TBD within 10 business days." Leaving the price term open risks failed contract formation and creates a Section 409A violation: the strike price must equal fair market value (FMV) on the grant date.
Fix: Get 409A appraisal NOW, set specific dollar figure in Warrant Agreement before signing.

2. No Board Consent Attached

Issue: DGCL Section 157 requires board authorization; the CEO cannot unilaterally issue a warrant, and the warrant is voidable without a board resolution.

20 Best Bay Area Investors for Apex Compute

Company Profile

  • Business: FPGA-based hardware + software for edge AI
  • Key Advantage: 20x efficiency over NVIDIA Jetson for LLM/vision workloads
  • Target Markets: Drones, autonomous vehicles, robotics, enterprise privacy-focused AI
  • Stage: Pre-commercial (FPGA prototypes, actively hiring)

@devrim
devrim / partfield-image-to-mesh-research.md
Created April 21, 2026 07:51
Image-to-Mesh for PartField — ComfyUI ecosystem landscape + recommendation (April 2026)

Image-to-Mesh for PartField — Landscape & Recommendation

Deep-research note on whether to add a ComfyUI-style image-to-mesh backend to the PartField repo, and if so, which one(s). Written April 2026.


TL;DR

We already have it. The PartField repo has Microsoft TRELLIS-image-large fully wired up via trellis_manager.py and the POST /trellis/image_to_3d FastAPI endpoint (api.py:430), with GLB + Gaussian-PLY output, TTL-based GPU offload, and an async job queue. This is the same model most "serious" ComfyUI image-to-3D workflows use today.
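For orientation, here is a minimal client-side sketch against that endpoint. Only the route itself (`POST /trellis/image_to_3d`, per api.py:430) is confirmed above; the multipart field name and the shape of the async-job response are assumptions.

```python
# Hypothetical client sketch for the POST /trellis/image_to_3d route.
# The route is confirmed from api.py:430; the "image" field name and the
# job-id response are assumptions about the async job queue, not verified.
from urllib.parse import urljoin

def build_image_to_3d_request(base_url: str, image_path: str) -> dict:
    """Assemble the URL and (assumed) multipart payload for the endpoint."""
    return {
        "url": urljoin(base_url, "/trellis/image_to_3d"),
        # Assumed field name; open the file in binary mode when actually sending.
        "files": {"image": image_path},
    }

req = build_image_to_3d_request("http://localhost:8000", "chair.png")
print(req["url"])  # -> http://localhost:8000/trellis/image_to_3d
# Sending this (e.g. with requests.post) would presumably return a job id to
# poll, since the server runs an async job queue and offloads the GPU on a TTL.
```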

@devrim
devrim / apex-gtm-events.md
Created April 21, 2026 00:35
Apex Compute — 90-Day GTM Events & Bay Area Meetups Playbook

Apex Compute — 90-Day GTM Events & Meetups Playbook

Goal: $1M–$10M in orders by ~July 2026. Pre-silicon chip startup. FPGA dev kits + NRE + silicon LOIs with deposits.

Product: Transformer-optimized edge AI inference chip. <5W, <$10/chip, 20x Jetson Orin Nano, PyTorch-native. FPGA prototype shipping now via GitHub. First silicon Q1 2026 (GlobalFoundries 12nm).

Target segments: humanoid/industrial robotics, defense drones/autonomy, AVs, edge vision, wearables.


@devrim
devrim / standard_engineering.md
Created April 20, 2026 12:47
Standard Engineering Hiring Workflow (mermaid)

Standard Engineering Hiring Workflow

Two-gate review: Nur screens first, then a 3-person panel (Devrim/Harrison/Yan) must unanimously approve before the candidate advances.

flowchart TD
    zapier_source([Zapier Integration<br/>roster.so, LinkedIn, manual])
    sourcing_task[/Source Candidates<br/>nur, recruiter — 10/day/]
    new[New Candidate<br/>+ai_score]
    initial_review{Initial Review<br/>nur}
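The two-gate rule described above can also be sketched as plain boolean logic. The function and field names here are illustrative, not part of the workflow definition:

```python
# Illustrative encoding of the two-gate review: Nur's screen passes first,
# then Devrim, Harrison, and Yan must all approve (unanimity) to advance.
PANEL = ("devrim", "harrison", "yan")

def advances(nur_approves: bool, panel_votes: dict) -> bool:
    """Candidate advances only if the screen passes and the panel is unanimous."""
    if not nur_approves:
        return False  # gate 1: initial screen
    # gate 2: every panelist must have explicitly approved
    return all(panel_votes.get(name, False) for name in PANEL)

assert advances(True, {"devrim": True, "harrison": True, "yan": True})
assert not advances(True, {"devrim": True, "harrison": False, "yan": True})
assert not advances(False, {"devrim": True, "harrison": True, "yan": True})
```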
@devrim
devrim / compose-architecture.md
Last active April 17, 2026 02:53
Compose architecture doc — /compose-2 procedural floor plan system

Compose Architecture

The /compose-2 system generates complete residential floor plans procedurally — including layout, furniture, decorations, and dynamic objects — and renders them as an interactive 3D scene with primitive geometry. Every spatial decision is captured in a deterministic pipeline driven by a seeded PRNG, so the same seed produces the same house. The output is fully described in YAML so the data is portable to other consumers (game engines, simulators, asset pipelines).
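The determinism property can be sketched in a few lines. The function and YAML field names below are illustrative, not from the /compose-2 code; the real pipeline covers far more (furniture, decorations, dynamic objects, constraints):

```python
# Illustrative sketch of seeded, deterministic procedural generation as
# described above. generate_floorplan and its fields are hypothetical.
import random

def generate_floorplan(seed: int) -> dict:
    """Every spatial decision draws from one seeded PRNG stream,
    so the same seed always produces the same house."""
    rng = random.Random(seed)  # isolated, reproducible PRNG
    rooms = []
    for i in range(rng.randint(3, 6)):
        rooms.append({
            "name": f"room_{i}",
            "width_m": round(rng.uniform(2.5, 6.0), 2),
            "depth_m": round(rng.uniform(2.5, 6.0), 2),
        })
    return {"seed": seed, "rooms": rooms}

# Same seed -> identical plan; different seed -> different plan.
assert generate_floorplan(42) == generate_floorplan(42)
assert generate_floorplan(42) != generate_floorplan(7)
```

The returned dict serializes directly to YAML (e.g. via `yaml.safe_dump`), which is the portability property the document calls out for downstream consumers.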

This document describes the architecture in depth, including the data flow, the constraint system, the rendering strategy, the validation pipeline, and a roadmap of future additions.


Table of Contents

Building a Cursor-Quality AI Agent for 3D Scene Generation: Architecture Deep Dive

Executive Summary

Building an AI agent that can interpret natural language like "create a scene where I'm going to assemble legos" and produce a fully realized 3D environment with physics-ready objects, lighting, and scripted behaviors is one of the most demanding applications of agentic AI. It combines the hardest problems in the field: multi-step planning over ordered physical constraints, retrieval over structured asset catalogs with physics metadata, code generation in a domain-specific context (C# game scripts), and tight tool integration with a real-time engine.

This article examines how leading AI-powered creation tools—Cursor, Devin, Replit Agent, GitHub Copilot, Bolt.new, and Vercel v0—architect their backends, and distills the patterns that matter for a 3D scene generation agent embedded in a C++ game engine. We compare seven major agent frameworks (LangChain/LangGraph, CrewAI, AutoGen/Semantic Kernel, DSPy, H