Byrne Hobart's newsletter has a guest post from Antithesis, a startup building a deterministic simulation environment for debugging code. A specific value proposition is its usefulness for trusting and certifying code generated by AI assistants.
It struck me as an example of some GS-AI-flavored ideas: "a perfectly deterministic simulation environment (world model) where every deployment (AI action) can be evaluated against a set of test cases (safety specs) and verified."
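To make the parenthetical mapping concrete, here is a minimal sketch of that pattern: a seeded world model, an action run inside it, and a set of safety-spec predicates checked against the final state. Every name here (`simulate`, `verify`, `SPECS`) is hypothetical for illustration, not Antithesis's actual API.

```python
import random

def simulate(action, seed):
    """Run `action` in a toy world model whose randomness is fixed by `seed`.

    With the seed pinned, the run is perfectly reproducible: any failure
    can be replayed exactly.
    """
    rng = random.Random(seed)
    state = {"balance": 100, "rng_draws": []}
    for _ in range(10):
        state["rng_draws"].append(rng.random())  # deterministic "environment noise"
    action(state)
    return state

# Safety specs: predicates every final state must satisfy.
SPECS = [
    lambda s: s["balance"] >= 0,          # never go negative
    lambda s: len(s["rng_draws"]) == 10,  # environment ran to completion
]

def verify(action, seeds=range(5)):
    """Evaluate the action across seeds; return (passed, failing_seed)."""
    for seed in seeds:
        final = simulate(action, seed)
        for spec in SPECS:
            if not spec(final):
                return False, seed  # this seed replays the violation exactly
    return True, None
```

In this framing the AI action is the function under test, the seeded simulation is the world model, and the specs are the certification criteria; determinism is what turns a flaky failure into a replayable counterexample.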