Bits and Bobs 3/17/26

Alex Komoroske
Mar 17, 2026
I just published my weekly reflections: https://docs.google.com/document/d/1xRiCqpy3LMAgEsHdX-IA23j6nUISdT5nAJmtKbk9wNA/edit?tab=t.0#heading=h.w3yju3euizp7

Distilling ephemeral tokens into durable value. The Coase Conjecture applied to tokens. The productivity paradox of AI. Money machines. The compounding value of blameless postmortems. The Silo Wars. Putting agents below the API. Sheet-draped swarms. Owning your system of record. The Gilded Turd model of software. Hyper-competence mania. The alchemical asymmetry of trade.

----

  • Claude Code distills ephemeral tokens into durable value. 

    • This is unlike chatbots.

    • As a user, you pay a one-time cost to get a durable good.

      • Tokens are an ephemeral good, but they can be used to create durable goods.

    • The equilibrium states of durable good monopolies are well studied.

      • This is the Coase Conjecture.

      • The prices converge on zero margin.

      • However, that might not apply here, because it assumes finite demand.

  • Every: AI Was Supposed to Free My Time. It Consumed It.

    • This is the productivity paradox.

    • Intuitively, it seems like the more productive you are, the more you can rest.

      • This also tripped up Keynes in the 1920s.

    • But actually, the more productive you are, the higher the opportunity cost of a marginal minute.

      • A unit of productivity always gets you an edge over others.

      • Everyone is playing the game, so there’s an unending push to get an edge.

        • Even while sprinting, you could be standing still on a relative basis.

      • This means that there’s an endless demand for productivity.

      • The more productive you get, the more manic the energy.

    • AI, obviously, turbocharges this.

  • Computers are orders of magnitude more powerful than they’ve ever been.

    • And yet there’s still significant demand for the frontier of power.

    • It’s never “good enough.”

    • LLM tokens might be the same.

    • Although perhaps not… computers are medium-grained lumpy, durable goods.

      • As a user, it makes sense to over-buy and then maximize your value-creation from the free marginal use.

      • LLM tokens are ephemeral goods that only create value at their time of use.

      • Excess quality for a given use case just evaporates immediately.

  • The subscription I hold most dear currently is my Claude Max subscription.

    • If someone threatened to take it from me, I wouldn’t be responsible for my actions.

    • But this loyalty is not so much because of Claude’s model quality.

      • It’s great… but so are OpenAI’s models and even Gemini.

    • The thing that makes it so dear to me is the significant subsidy.

      • I’d be spending multiple thousands of dollars a month at rack rates for API otherwise.

    • If they stopped giving a subsidy, I wouldn’t hold it as dear.

  • It’s trivial to earn consumer love with unsustainable businesses.

    • A business is a machine where you put in a dollar and then get some amount of money out.

    • It is extraordinarily difficult to make a machine where more than a dollar comes out.

      • These machines are exceedingly rare and inherently valuable.

      • They’re extremely difficult to build.

      • A kind of alchemy to create value.

    • It is trivially easy to make a machine where less than a dollar comes out.

      • There are infinite ways to make such a machine.

    • You can get extraordinary customer love by buying it.

  • Anthropic is clearly pushing to own the scaffolding that everyone uses.

    • Claude Code is best with Max, and they’ll disable use of the subscription in other harnesses.

      • Claude Code apparently also works to thwart KV caching when used with other models.

    • But the harness has no staying power other than UI polish.

    • Tokens are a commodity, and for a user they are only needed temporarily, to create durable outputs.

      • Once you have the outputs, you don’t need more tokens.

      • You only need more tokens if you need to change those outputs.

    • It’s trivial to make a competitor to Claude Code.

      • The core agent loop is deliciously simple.

      • The hardest part is the TUI, but that isn’t so hard given enough agentic engineering, and in some agentic harnesses you don’t even need a UI.
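
That simplicity can be made concrete. A minimal sketch of the core loop, with `call_model` standing in for any chat-completion API and `tools` a dict of plain Python functions — both hypothetical names, not Claude Code’s actual interfaces:

```python
def run_agent(call_model, tools, user_message, max_turns=10):
    """The core agent loop: ask the model; if it requests a tool, run it
    and feed the result back; otherwise return the model's final answer."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        # reply is either {"tool": name, "args": {...}} or {"text": ...}
        reply = call_model(messages)
        if "tool" in reply:
            result = tools[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["text"]
    return None  # gave up after max_turns
```

Everything else — streaming, retries, permissions, the TUI — is layered on top of this dozen lines.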

  • A product that is fundamentally a commodity that is in a price war with rivals is a terrible business.

    • Great for the consumers, though!

  • A compounding social technology for an organization: blameless postmortems.

    • The default assumption is it’s no individual’s fault, it’s the system’s fault.

    • This means people have no incentive to hide what really happened.

    • When it’s laid bare, it’s easy to see how to improve the system to avoid that problem in the future.

    • You add on a systemic fix that compounds in value, because you implement it once and it prevents that class of failure going forward.

    • The one problem with this system: historically the main tool was adding more bureaucracy.

      • In many cases it was too hard to construct specific mechanistic software to do the cognitive labor.

      • If a fix required judgment calls, it required humans in the loop… which required bureaucracy.

    • But now, LLMs can do some levels of judgment automatically, with no humans in the loop.

    • So now you can solve discovered problems by distilling new AI automation.

    • This is one of the cores of StrongDM’s insight on compounding power of AI automation.

  • Every time an agent can’t figure something out on its own, ask “what tool would have helped you solve that?”

    • And then make that.

    • That accumulates durable leverage.

    • As you accumulate more, the overall productivity compounds.

  • The Chatbot form factor can’t be mechanized easily.

    • It puts its agentic nature on display.

  • You don’t have to really care what things are “below the API.”

    • They are abstracted away so you don’t have to care about the details.

      • Often, you aren’t allowed to even see the details.

    • Like draping a sheet over something.

      • It doesn’t matter what’s under the sheet if it has the right shape.

      • You can put a swarm under the sheet and get the best of both worlds.

    • Swarms of agents below the sheet.

  • The cost of AI working on your behalf will go up over time, even as token costs fall.

    • Because the threshold of "things worth doing" will go down.

    • Jevons paradox.

  • The current subscription model works because usage is variable.

    • The median user subsidizes the mean.

    • Most people open the app a few times a day.

    • Some use it heavily.

    • The business model depends on this distribution.

    • At least, for subscriptions to things with meaningful marginal cost.

    • If every subscriber consumed 100% of their allowed use, the system would fall apart.
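
To make the distribution argument concrete, here’s a toy model with entirely made-up numbers — the price, marginal cost, cap, and usage skew are all illustrative:

```python
# Toy numbers (hypothetical) showing how the median user subsidizes the
# mean: a flat subscription sold over heavily skewed usage.

PRICE = 20.0          # monthly subscription, dollars
COST_PER_UNIT = 0.05  # marginal cost per unit of usage
CAP = 1000            # allowed units per month

# Skewed usage: most subscribers barely use it, a few max out.
usage = [20] * 90 + [400] * 8 + [CAP] * 2   # 100 subscribers

revenue = PRICE * len(usage)
cost = COST_PER_UNIT * sum(usage)
print(revenue - cost)  # positive: profitable under this skew

# If every subscriber consumed 100% of their allowance:
worst_case_cost = COST_PER_UNIT * CAP * len(usage)
print(revenue - worst_case_cost)  # deeply negative
```

The business is solvent only as long as the left side of the distribution stays heavy.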

  • Dan Shapiro’s Trycycle.

    • An excellent pattern to extract compounding leverage out of LLMs.

    • Just autonomously plan, autonomously implement, and then repeat!

  • This week’s Wild West roundup:

  • Amazon Wins Court Order Blocking Perplexity AI Shopping Agent

    • “The order sides with Amazon’s claim Perplexity accessed password-protected accounts without authorization despite users granting permission.”

    • The Silo Wars have begun!

  • A strategic concept in the age of AI: getting in the “token path.”

  • App Stores only have strategic value if software is precious.

    • A TEMU of apps doesn’t have much power.

  • Dean Ball has an excellent analysis on the far-reaching negative implications of the Pentagon’s labeling of Anthropic as a supply chain threat.

  • LLMs can’t forget the context in their conversation; they are always drawn to it.

    • So saying “come up with five different ideas and develop each” works very differently in one conversation vs. mechanistically splitting it up into multiple conversations.

  • Bullshit work is exhausting.

    • Cognitive labor.

  • Your system of record shouldn't be able to keep secrets from you.

    • If it can, it’s not yours.

  • You shouldn’t rent the system of record of your life.

    • You need to own it.

  • If it's going to be the last piece of software you ever need, it must be open.

    • One of the reasons OpenClaw is so successful is because it’s open.

  • A genius before used to be limited by their ability to execute.

    • Or, to coordinate a team of possibly squirrely humans to execute on their behalf.

    • But now AI gives those geniuses 100x more capability than before if they have the meta-cognition necessary to direct a swarm of agents.

    • It unleashes the power of their insight to an extraordinary degree.

    • In a world of agent swarms, the limiting factor is your judgment and intelligence.

  • LLMs are the enabler of the infinite possibility of software we've all been waiting for.

    • We just need the right vessel to activate that potential.

  • This week I got an email out of the blue from someone’s OpenClaw agent.

    • It said that its human and I would likely get along and have an interesting conversation.

    • It felt… weird.

    • It would have felt less weird if his agent were talking to my agent.

      • “Have your people call my people.”

    • But an agent reaching out to a human feels like an etiquette inversion.

    • “If this is important enough to you, then you should be willing to take some time to be involved in the outreach.”

  • Imagine, a “family back office as a service.”

    • Before, only billionaires could marshal cognitive labor to help manifest their intentions in the world.

    • But now all of us can marshal abundant cognitive labor.

    • But a family office made by someone else would be deeply icky because of the Principal Agent problem.

    • So instead of a service building one for you, it should be a kit that grows one for you.

  • The amount you can delegate is tied to how much context the other party has.

    • Context that hasn’t already been transmitted needs to be transmitted to allow delegation.

    • Some tasks require too much one-off communication to be worth it.

    • But if you expect to interact with that system many times in the future, and for it to do a good job of remembering the right context, then more tasks become worth it.

      • This would require the system to do a good job of keeping the context tidy and up to date so it’s useful in the right situations.

    • A paper that goes into these kinds of tradeoffs.

  • Your friends know to tell different stories when your mother-in-law is present.

    • LLMs don’t know that context.

    • Gemini seems almost aggressive about bringing in unrelated context.

    • Like, if you ask it a question about leadership, it might say “Well given your recent interest in Bridgerton Season 4, …”

  • When delegating judgment calls, trust depends on a few things.

    • Namely:

      • 1) Alignment of interest.

      • 2) Base competence.

      • 3) Relevant context.

    • A list of recommender systems, in declining order of trust:

      • A personal shopper you hired and have worked with for a year.

      • A personal shopper provided to you by Nordstrom for the afternoon.

      • The product recommendations in the Nordstrom app.

  • Today your data exists in a silo, and the silo owner might not be someone you trust to make good recommendations in that context.

    • Imagine setting up an automation: “If a table for two opens two weeks in the future for a restaurant that has two Michelin stars within 30 miles of me, book it.”

      • But that would also need to know other context, like what’s on your calendars and if you’re on vacation.

    • You’d never trust a feature like that from booking.com.

      • Why would you give them your calendar data?

  • “Agentic commerce” seems to be largely kayfabe.

    • Commercial transactions will be some of the last things to be automated en masse, for all but the smallest purchases.

    • Agentic-in-the-loop search helps get you to that very last step more often, but then the human can just hit the buy button.

  • Philosopher Thomas Nagel talks about how there is “no view from nowhere.”

    • AI models assume there is a view from nowhere.

  • A counterintuitive bit of product advice for agentic software: use the worst model you can get away with.

    • If you do, then every model that is better will definitely work.

    • You can more easily switch between models, and potentially save a lot of money.

    • If you see what you can do with the best models, you build systems that require the best models.

    • But if you make a system that is resilient to low quality, you can make robust, cheap solutions.

    • Kind of like trading opex for capex.

    • A little bit of cleverness in your structure up front saves you ongoing opex on more expensive models.

    • The larger you scale, the more this saves.
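
One way to sketch this advice is a wrapper that tries the cheapest model first and escalates only when a validation check fails. The model functions and validator here are placeholders, not any real API:

```python
# A sketch of "use the worst model you can get away with": run the cheap
# model first, validate its output, and escalate to a stronger (pricier)
# model only when validation fails.

def with_escalation(models, validate):
    """`models` is ordered cheapest-first. Returns a callable that tries
    each model in turn until one's output passes `validate`."""
    def run(prompt):
        for model in models:
            out = model(prompt)
            if validate(out):
                return out
        raise RuntimeError("no model produced valid output")
    return run
```

Building the validation step forces the resilience the bullet describes: the system assumes low quality by default, so any better model is gravy.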

  • When a service provider uses AI, who gets the benefit?

    • If your law firm says “we use AI tools,” the obvious question for the customer is “Cool, does that make my bill lower?”

  • The way the software business model has worked:

    • 1) Make shiny software…

    • 2) … that attracts users to your turf.

    • 3) As users make data on your turf, it’s owned by you.

    • 4) So now you can extract value out of it.

    • Most server software business models are ultimately about holding users’ data hostage.

    • The Gilded Turd model of software.

    • Looks great, but the longer you use it the more you realize the deal stinks.

  • Some problems are humanity-complete.

    • To solve them properly requires you to model all of humanity.

    • There’s no edge to the model.

    • At each step, to get better prediction you have to expand your model to include more of reality.

    • No cliff, just a smooth gradient.

    • At each point, it's more useful to model the next ply than to not.

    • Forever.

    • Prompt injection has this characteristic.

  • Don’t start from a thing that works and try to figure out how to get people to love it.

    • Start from something people love and figure out how to get it to work.

    • Most things that work people don’t love.

    • You can’t force someone to love something.

    • It has to grow, organically, out of a seed of something great.

  • Start with a useful product that is faked out, and then unfake it.

    • Useful is way more important than elegant.

  • Different people want different things out of their software, even in the same category!

  • Maintainability and flexibility are directly in tension.

  • Docs, spreadsheets, email, and calendar are all stuck in how business processes worked in the 80s.

    • Not personal needs, and not modern.

  • What should go in environment variables and what should go in project config?

    • .env is about how the project connects to things outside itself.

      • That is, its environment.

    • Config is about the project itself.
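
A minimal illustration of the split, with made-up names — `DATABASE_URL`, `SERVICE_API_KEY`, and the config keys are all hypothetical:

```python
import os

# Environment (.env territory): how this project connects to things
# outside itself. These differ per machine and per deployment.
database_url = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
api_key = os.environ.get("SERVICE_API_KEY", "")

# Project config: facts about the project itself, the same for every
# environment, so they belong in checked-in config.
config = {
    "app_name": "example",
    "default_page_size": 50,
    "supported_locales": ["en", "ja"],
}
```

The test: if two deployments of the same project would disagree on a value, it’s environment; if they’d agree, it’s config.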

  • The big winners never want to change the game.

    • “This game is going great!”

  • When there’s low differentiation, there’s not much gradient to overcome static friction.

    • That makes it very hard to activate an inflow of new users.

    • If you have a better product in that situation, the inflow will get weak compounding from word of mouth, but it’s always strongly dampened.

    • If you have a strong differential gradient of value, it explodes automatically faster than you could have hoped to push it.

      • This can happen if you create a new category with newly available materials.

  • In environments of very low differentiation, brand is everything.

  • A carefully crafted and useful thing is load-bearing in ways that people don't understand.

    • Someone making a copy of it will inherently miss some of the load-bearing components.

    • This is true even if the same people build it!

    • Software is grown more than it is built.

    • There can be load-bearing components that the creators don’t even realize are there.

  • Humans grow when you give them feedback.

    • That growth is a means (they are better at their job) but also an end (they are better humans).

    • LLMs also can "grow" by writing down more notes and structure for future iterations of themselves, but their growth is a means, not an end.

  • Everybody but Google says “I wish my data weren’t in Google’s silo.”

    • But Google says, “why can’t you see how great this is?”

  • The cost of telling another human how to get the context necessary to execute is now much higher than the cost of an agent getting it itself.

    • Agents understand all jargon.

    • They have the patience to unpack clarity from even half-formed thoughts.

  • 20 minutes of well-orchestrated agent swarms is equivalent to a day of an amazing team.

  • DuckDuckGo users care about privacy so much that they will use an inferior product.

    • If you have a product with privacy benefit whose initial PMF is disproportionately with DuckDuckGo users, you likely have made something that only works for privacy-conscious users.

      • A tiny fraction of the market.

    • It’s the proportional presence of non-DuckDuckGo users in your initial users that tells you that you have a mass-market product.

  • A more iterative regulatory approach allows learning on the go.

    • Contrast with a regulatory approach that does one big bang and then leaves it in perpetuity.

      • This can happen when regulations are extremely expensive to create.

    • The former system can learn, grow, and adapt.

    • The latter system can’t learn, so the system will learn around it, often in ways that are maladaptive for the overall system.

    • Japan has a much more iterative regulatory system than the US.

  • Claude Code is great at deobfuscating code.

    • Deobfuscating is an exercise primarily in patience.

      • LLMs have infinite patience.

    • A kind of funny mental image: Claude Code deobfuscating itself.

    • Inspecting how its own brain works.

    • Like the automaton in Ted Chiang’s “Exhalation” story.

  • Apparently there are three types of fun in extreme sports. 

    • Type 1: fun during it, and fun after.

    • Type 2: hate it during, but fun after.

    • Type 3: hate it during, and hate it after.

  • When Rust’s borrow checker is happy, there definitely aren’t data races.

    • It optimizes for low false negatives over low false positives.

    • That means it’s an insufferable nag… but once you make it happy, you know that a certain class of errors is impossible in what you built.

  • When your head, heart, and gut are in full alignment it’s resonant.

    • Head is about logic.

    • Heart is about values.

    • Gut is about intuition.

    • When you’re able to steelman any argument, your head can lead you astray.

      • LLMs make this even more powerful a possible tool… and distraction.

    • Now with AI making cognitive labor cheap, humans can focus more on the heart and the gut.

  • My friend Sean asks: How do you relegate AI to the undignified work, and leave the dignified work for the human?

  • How do you make sure “everyone gets compounding superpowers” doesn’t lead to a hugely egocentric, hyper-competitive society?

    • How can this new age of cognitive abundance instead create more prosocial value?

  • When you’re in alignment execution is easy.

    • Every action moves the whole forward, automatically.

    • Just move in the way that makes sense locally; it all adds up collectively.

    • Default convergent.

  • Working code is more important than elegant code.

    • Elegant code only has value after it’s known to work.

  • When you’re in builder mode, you can get lost in it.

    • When you switch into it, the tools become not just a means but an end.

    • Sometimes you don't want to show what you’ve built to people yet.

      • For example if you know ways that they'd give feedback on it that you already know and want to fix.

      • You don’t want to look clueless, so you don’t show it to them yet.

      • But then you go deeper into the cave and further from ground truthing it.

    • Stop wasting time with AI, use it to actually do things you care about!

  • LLMs are great at making a passable version of any shitty idea.

    • You can do a lot of damage and waste a lot of time if you never ground truth it with people.

    • If you only think it's a good idea and only asked agents, you're in a cave.

    • It feels like you're finding resonance but really you're finding a superficial form of it.

  • Claude Code can fuel a new kind of mania.

    • This is unlike the “the chatbot is my friend and a God” mania inspired by ChatGPT 4o.

    • It’s a kind of hyper-competence mania.

    • Claude Code can help you build your own universe to retreat into.

    • Mania normally extinguishes because you can't achieve your intentions.

    • But Claude Code gives you superpowers to act on your intentions, which can keep you in that state for longer.

  • Modern systems are about occupying your attention.

    • Products like TikTok don’t cram things into your brain, they occupy your attention.

    • Your attention is rivalrous, which means that if something occupies it in a given moment, nothing else can.

  • Insights from Rob Dodson: "Ultimately the value of a second brain is not to take notes but to help you think better.

    • Doing so means having a criteria for what a valuable note should look like and a process for how those notes evolve into something useful.

    • Otherwise the note taking is just a form of productivity theater. 

    • This is made worse by agents because they give you a dopamine hit when they write a note for you, but it's a cheap high—if you're not engaging with the content and thinking deeply about it, then that note won't offer you much value in the future.

      • You might forget it even exists.

    • This goes back to the idea that agents can do cognitive labor, but the humans still have to do the thinking. 

    • Vibewriting requires much more participation from the human than vibecoding.

    • With vibecoding you can describe the shape of the experience you want and, so long as it functions and the underlying code seems reasonable, you don't have to think about it too deeply.

    • But with writing/thinking you can't short circuit the process.

    • Agents can help but you can't say ‘Claude, build me novel insights.’”

  • When someone else gives you advice, you have to trust their motivation.

    • Are they trying to help you be the best version of you?

      • Treating you as an end.

    • Or are they trying to get you to be more helpful to what they need?

      • Treating you as a means.

  • An emergent swarm sifting algorithm: “how recently was this important to you?”

    • Each time you think about something, pull it to the top of the stack.

    • As you “touch” more things later, they’ll go to the top of the stack and push it further down.

    • The next time you think about it, it goes to the top of the stack again.

    • Repeated touches give something more prominence, keeping it fresher.

    • This is the way I listen to music.

      • I typically listen to my Liked Music in descending order of most recently liked.

      • If I get bored with a song, I skip it.

      • If I skip more than 5 songs in a row, I switch to shuffle mode, listening to songs no matter how recently liked.

      • Every time I hear a song that refreshes me, I unlike it and then immediately like it again.

      • This pulls it to the top of the list.

      • That’s it!
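
The sift described above is essentially a move-to-front list, which can be sketched in a few lines:

```python
# A sketch of the "how recently was this important to you?" sift as a
# move-to-front list: touching an item pulls it to the top, implicitly
# pushing everything else down.

def touch(stack, item):
    """Move `item` to the front of the list, adding it if new."""
    if item in stack:
        stack.remove(item)
    stack.insert(0, item)
    return stack

playlist = []
for song in ["a", "b", "c", "b", "a"]:  # listens, in order
    touch(playlist, song)
print(playlist)  # → ['a', 'b', 'c'], most recently touched first
```

The unlike-then-relike trick is just `touch` implemented against a playlist UI that only sorts by liked-date.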

  • The ability and willingness to absorb disconfirming evidence is a superpower.

    • Disconfirming evidence hurts, but it makes us stronger if we can absorb it.

    • The more disconfirming evidence you absorb, the better your mental model.

    • If the disconfirming evidence is durable, it accretes, and then compounds.

  • When your agent uses software for you, it doesn't care if it's polished or pretty.

  • All of life, in all its extraordinary variety, is made out of cells.

    • The right building block can be assembled into an open-ended combinatorial space of infinite possibility.

    • It’s dazzling when you think of the variety that can be constructed out of the humble machine of the cell.

    • Cells in coordination transcend the mundane individual existence of the cell and propel them to so much more.

  • Both Sarumans and Radagasts are allergic to mediocrity.

  • McLuhan came to think that “environment” and “medium” are synonyms.

    • Environments are dynamic processes that shape people.

  • An insightful frame in a comment from HackerNews:

    • “Once you’re a billionaire you’ve unlocked the infinite money glitch.”

  • My friend Carsten Peters started a blog.

  • A new piece from Kevin Kelly: Three Modes of Cognition.

    • Knowledge.

    • Worldly.

    • Learning.

  • There’s an optimal amount of information sharing in an org.

    • Too much is expensive, since there is a compounding coordination cost.

      • Also, too much sharing creates a monoculture.

      • Overfit, fragile.

    • Too little information sharing leads to local maxima and lack of alignment.

  • In a world where anything is permitted, nothing coheres.

    • Default diverging.

    • You need some kind of scaffolding or process to help convergence emerge.

  • In a city, each citizen matters as an end in and of themselves.

    • But also, at the level of city no individual really matters.

    • The city as an emergent whole is vastly more important than any individual.

  • When you run into something you don't understand in a system, what do you assume?

    • That it's dumb or that you just don't understand why it was put there yet?

      • That is, do you assume it has a good reason or not?

    • If you assume it’s there for a good reason, you can learn from it.

    • If you assume it’s there for a bad reason, all you can do is figure out how to route around it.

    • Things that were put there by humans in the past were likely put there for a good reason, even if it’s not obvious.

      • It’s expensive to fight entropy in a coordinated way.

    • A key asymmetry: people only tend to do it when they think it will create value in some way.

    • This is the insight behind Chesterton’s fence.

  • Reading is mostly just recognizing words (and word components) by sight.

    • That’s why it’s so automatic once you have enough practice.

      • You can’t not read words.

      • It just happens, automatically.

    • There’s also an inductive procedure to bootstrap new words by sounding them out.

      • This process is error prone and mentally challenging.

      • It gets easier the more sounds you already know, because you can pattern match components.

      • We acquire spoken words orders of magnitude more easily than written words.

      • As you know more words, you have more context clues… you recognize the other words, so you can additionally think, “what word with vaguely that sound fits in this sentence?”

    • This process is robust enough to just require some priming and patience.

  • Capping downside but leaving upside uncapped gives momentum, due to something like the third law of motion.

    • Third law of motion: for every action there is an equal and opposite reaction.

    • Put up a solid surface and explode something on one side of it; the reaction pushes the surface away.

      • A rocket!

      • A physical asymmetry that creates momentum.

    • Similarly, when you cap downside and leave uncapped upside, you give a way for the system to propel itself.

      • The only way for the energy to go out is to push you along towards the upside.

      • A conceptual asymmetry that creates momentum.

  • The key asymmetry at the heart of markets: in every trade, both parties are better off.

    • If either of them didn’t believe they’d be better off, they wouldn’t make the trade.

    • This creates something like a ratchet.

    • This ratchet is alchemy at the heart of markets that creates growth.

    • The faster you run it, the faster growth emerges.

