This is a pure session-state design, not a hybrid DagQL/session design.
It intentionally picks one of the two mutually exclusive approaches:
- do it through DagQL
- do it through session state
This proposal picks session state.
- no new `Workspace.lockfile` DagQL API as part of this change
- no nested DagQL calls for hot-path lock reads or writes
- no synchronous caller-host lockfile reads/writes during lookup execution
- read `.dagger/lock` at most once per session
- mutate it server-side throughout the session
- write it back once when the session shuts down gracefully
That is the whole point of the design.
Store the current session's lockfile on `daggerSession` in `engine/server/session.go`:

```go
lockFile       *workspace.Lock
lockFileLoaded bool
lockFileDirty  bool
lockFileMu     sync.RWMutex
```
Behavior:
- lazy-init it on first lock access
- read `.dagger/lock` from the caller host at most once
- serve all later lock reads from in-memory session state
- stage all lock writes in that same in-memory session state
- if `lockFileDirty`, export it once when the main client shuts down
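The lazy-init behavior above can be sketched as follows. This is a minimal, self-contained sketch, not the real engine code: `Lock`, `loadLock`, and `getLock` are hypothetical stand-ins for `workspace.Lock`, the one-time caller-host read of `.dagger/lock`, and the session accessor.

```go
package main

import (
	"fmt"
	"sync"
)

// Lock stands in for workspace.Lock: here, just a pin map.
type Lock struct {
	Pins map[string]string
}

// session sketches the lockfile fields proposed for daggerSession.
type session struct {
	lockFileMu     sync.RWMutex
	lockFile       *Lock
	lockFileLoaded bool
	lockFileDirty  bool
}

// loadLock simulates the one-time read of .dagger/lock from the caller host.
func loadLock() *Lock {
	return &Lock{Pins: map[string]string{"alpine:3.20": "sha256:abc"}}
}

// getLock lazily loads the lockfile on first lock access, then serves
// every later read from in-memory session state.
func (s *session) getLock() *Lock {
	s.lockFileMu.RLock()
	if s.lockFileLoaded {
		defer s.lockFileMu.RUnlock()
		return s.lockFile
	}
	s.lockFileMu.RUnlock()

	s.lockFileMu.Lock()
	defer s.lockFileMu.Unlock()
	if !s.lockFileLoaded { // re-check under the write lock
		s.lockFile = loadLock()
		s.lockFileLoaded = true
	}
	return s.lockFile
}

func main() {
	s := &session{}
	fmt.Println(s.getLock().Pins["alpine:3.20"]) // first access loads
	fmt.Println(s.getLock().Pins["alpine:3.20"]) // served from memory
}
```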
Expose lockfile access through server methods:
- add methods on the engine server that locate the current client/session
- expose corresponding methods on the `core.Query.Server` interface
- have `core/` and `core/schema/` callers use those methods
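The server boundary could look roughly like the sketch below. The method names (`LockRead`, `LockWrite`) and the toy in-memory implementation are hypothetical illustrations, not the actual `core.Query.Server` API.

```go
package main

import "fmt"

// LockServer sketches hypothetical lockfile methods at the
// Server / core.Query.Server boundary.
type LockServer interface {
	// LockRead returns the pinned value for a key, if present.
	LockRead(key string) (string, bool)
	// LockWrite stages a pin in session state and marks it dirty.
	LockWrite(key, value string)
}

// memLockServer is a toy stand-in for the engine server's
// session-backed lock state.
type memLockServer struct {
	pins  map[string]string
	dirty bool
}

func (m *memLockServer) LockRead(key string) (string, bool) {
	v, ok := m.pins[key]
	return v, ok
}

func (m *memLockServer) LockWrite(key, value string) {
	m.pins[key] = value
	m.dirty = true
}

func main() {
	var srv LockServer = &memLockServer{pins: map[string]string{}}
	srv.LockWrite("git.head:github.com/x/y", "deadbeef")
	v, ok := srv.LockRead("git.head:github.com/x/y")
	fmt.Println(v, ok)
}
```

Callers in `core/` and `core/schema/` would depend only on the interface, never on file I/O, which keeps the schema code free of caller-host round-trips.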
This follows the existing pattern already used by other session/client-scoped engine functionality.
Update the lock-aware consumers to use session-backed lock state instead of direct caller-host I/O:
- `container.from`
- `git.head`
- `git.branch`
- `git.tag`
- `git.ref`
- `modules.resolve`
So instead of:
- reread lockfile from caller host
- resolve one lookup
- reread lockfile again
- export one update immediately
They will do:
- read current session lockfile state
- resolve one lookup
- update current session lockfile state in memory
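The revised per-lookup flow can be sketched as a single helper. This is an illustrative sketch with hypothetical names (`session`, `resolvePinned`), showing how a consumer like `container.from` would consult in-memory state, resolve only on a miss, and stage the update without exporting it.

```go
package main

import (
	"fmt"
	"sync"
)

// session holds in-memory lock state (a sketch of what would live
// on daggerSession in the real engine).
type session struct {
	mu    sync.RWMutex
	pins  map[string]string
	dirty bool
}

// resolvePinned: read current session lock state, resolve on a miss,
// stage the update in memory instead of exporting it immediately.
func (s *session) resolvePinned(key string, resolve func() string) string {
	s.mu.RLock()
	if pin, ok := s.pins[key]; ok {
		s.mu.RUnlock()
		return pin
	}
	s.mu.RUnlock()

	pin := resolve() // e.g. hit the registry or git remote once

	s.mu.Lock()
	defer s.mu.Unlock()
	s.pins[key] = pin
	s.dirty = true
	return pin
}

func main() {
	s := &session{pins: map[string]string{}}
	calls := 0
	resolve := func() string { calls++; return "sha256:abc" }
	fmt.Println(s.resolvePinned("container.from:alpine", resolve))
	fmt.Println(s.resolvePinned("container.from:alpine", resolve))
	fmt.Println(calls) // the remote is consulted only once
}
```

Because the state is one session-owned map behind an RW mutex, parallel lookups against different keys only contend briefly, and repeated lookups of the same key never touch the caller host again.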
When the main client shuts down:
- if the session lockfile was never loaded, do nothing
- if it was loaded but not modified, do nothing
- if it was modified, export it back once
The natural place for this is probably the /shutdown endpoint.
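The three shutdown cases reduce to one guard plus a single export. A minimal sketch, assuming a hypothetical `flushOnShutdown` hook and an `export` callback standing in for writing `.dagger/lock` back to the caller host:

```go
package main

import "fmt"

// session sketches the dirty-tracking needed for the shutdown path.
type session struct {
	loaded bool
	dirty  bool
	pins   map[string]string
}

// flushOnShutdown implements the three cases: never loaded -> nothing,
// loaded but clean -> nothing, dirty -> export exactly once.
func (s *session) flushOnShutdown(export func(map[string]string)) {
	if !s.loaded || !s.dirty {
		return
	}
	export(s.pins)
	s.dirty = false
}

func main() {
	exports := 0
	export := func(map[string]string) { exports++ }

	(&session{}).flushOnShutdown(export)             // never loaded
	(&session{loaded: true}).flushOnShutdown(export) // loaded, not modified
	dirty := &session{loaded: true, dirty: true, pins: map[string]string{}}
	dirty.flushOnShutdown(export) // modified: export once
	dirty.flushOnShutdown(export) // already flushed: no second export
	fmt.Println(exports)
}
```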
- Too much chattiness back to the client, especially for `--cloud`
- Fix: remove per-lookup caller-host lockfile reads/writes entirely.
- Re-reading the whole lockfile for each lookup is the wrong shape
- Fix: load once lazily into session state, then reuse in memory.
- Sync reads/writes to the client will cause misery with parallel operations
- Fix: one session-owned in-memory lockfile guarded by an RW mutex instead of repeated independent file round-trips.
- Mutable session state is awkward to model as DagQL mutations
- Fix: do not model it through DagQL at all.
- There are two mutually exclusive paths, and the session-state one is probably cleaner
- Fix: pick the session-state path cleanly and stop mixing it with a partial DagQL design.
- Write once when the session ends
- Fix: keep dirty state in memory and export once on graceful shutdown.
- The `Server`/`core.Query.Server` boundary is the right integration point
- Fix: expose lockfile read/write through server methods instead of schema-local helpers doing direct file I/O.
This proposal is specifically about the ambient live lock path.
It does not try to redesign everything else at the same time.
In particular:
- it does not require a new public DagQL lockfile API
- it does not require expressing lock mutation as immutable DagQL objects
- it does not require resolving the explicit `Workspace.update()` / `dagger lock update` design in the same change
Put the lockfile on `daggerSession`, lazy-load it once, guard it with an RW mutex, expose it through `Server`, mutate it in memory during the session, and export it once on graceful shutdown.