Bits and Bobs 12/15/25

Alex Komoroske
Dec 15, 2025

I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.jdu6rzvrw49y .

LLMs as Borgesian Libraries. AI will eat software. Chatbots as agent kayfabe. Vibecoding as a state of inebriation. Deterministic code as exoskeletons for the muscles of LLMs. Humans as "sin eaters" for LLMs. Self-poisoning doom spirals. Extrinsic personalization as manipulation. The missing HyperCard. Inductively knowable UX. The innovation of the 404. The impending end of the Big App era. Innovation as delusion. Autonomy requiring ownership.


---

  • LLMs are a Borgesian Library of knowledge.

    • In that infinite library is every essay that could ever be written.

      • Every piece of software that could ever be written.

    • A lot of gems, but the vast majority of it is useless junk.

    • The work of extracting one of those pieces from the library is non-trivial.

      • The LLM needs to spin them out one token at a time.

    • Just by wishing for it, a user can pluck it out of the latent space of possibility.

    • But once it’s been extracted, it can be cached, and used for others, too.

    • The trick is: which parts are high quality enough to keep?

    • The items that real people choose to extract and then use.

      • If they keep it, and come back to it, that’s a very strong signal.

      • Especially if other people than the first extractor find it useful.

      • A human saying, “of all of the noise, this one is useful” is extremely powerful.

    • In this library, the process of browsing is the process of creation.

      • Out of the whole library of things that could exist, the decision to pluck something out is what makes it actually exist.

    • The more you discover, the more you create.

  • Imagine if your OS had a Borgesian Library of software.

    • Just by wishing for it, you could pluck it out of the latent space of possibility.

    • If you can cache useful things, then you can get compounding leverage.

      • You save all future extractions of content from wasting effort.

    • This can be just straightforward caching of outputs.

      • But it can also be caching superstitions around getting programs that work.

    • If you can share that cache globally, once it’s been extracted by one human it never has to be extracted by any other human again (a sketch of such a cache follows this list).
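
A minimal sketch of that shared cache, in Python. Everything here is hypothetical: generate() stands in for the expensive LLM extraction, and a real version would need the quality signals described above (who kept an artifact, who came back to it).

```python
import hashlib

def generate(wish: str) -> str:
    """Hypothetical stand-in for the LLM spinning an artifact out of
    latent space, one token at a time. The expensive step."""
    return f"<artifact extracted for: {wish}>"

# A shared, global cache: once one human extracts an artifact,
# no other human ever has to pay the extraction cost again.
cache: dict[str, str] = {}

def extract(wish: str) -> str:
    # Normalize the wish so trivially different phrasings share a key.
    key = hashlib.sha256(wish.strip().lower().encode()).hexdigest()
    if key not in cache:
        cache[key] = generate(wish)  # the expensive pluck from the library
    return cache[key]                # every later identical wish is free
```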

  • This week in wild west round up:

  • AI will eat software.

    • Just like software ate the world.

  • An excellent post from Brendan McCord: A World Unobserved.

    • I heard Brendan deliver this as a speech and loved it so much I encouraged him to post it publicly.

    • Humans are in the world: they have a feedback loop to see how their actions affect the world, to be able to update their mental model.

      • They are situated.

    • LLMs are not in the world, not directly.

    • They have no direct learning loop.

  • Chatbots are mainly agent kayfabe.

    • They don’t learn or change from loop to loop.

    • The most obvious versions of agents… and the least useful.

    • Real agents have to accumulate internal state.

  • Agentic stuff needs to change the context every time through the loop.

    • Otherwise, it’s just a chatbot.

    • The context is things it learned, or things the user told it, or the new tools it created.

    • Each tool it makes for one purpose is a stepping stone to use in the future, too.

    • A little cached toehold.

    • As you accumulate those toeholds, now and into the future, you get compounding benefits (a sketch of this loop follows this list).
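
A sketch of the chatbot/agent distinction. The llm callable and its return shape are invented for illustration; the only point is that the agent’s context and toolbox grow on every pass through the loop, while the chatbot’s never do.

```python
def chatbot(llm, user_msgs):
    # No accumulation: every turn starts from the same blank slate.
    for msg in user_msgs:
        reply, _ = llm(context=[{"role": "user", "content": msg}], tools={})
        print(reply)

def agent(llm, user_msgs):
    context = []  # accumulated state: learnings, user input, results
    tools = {}    # tools built along the way: little cached toeholds
    for msg in user_msgs:
        context.append({"role": "user", "content": msg})
        reply, new_tools = llm(context=context, tools=tools)
        context.append({"role": "assistant", "content": reply})
        tools.update(new_tools)  # each toehold compounds into the future
```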

  • To use LLMs to their fullest potential today requires working in the terminal.

    • If you aren’t using tools like Claude Code every day, you will have no idea what the fuss is about.

    • Claude Code applied to your daily life is insanely powerful.

      • You can peer into the future of what it will be like when LLMs are plugged into our lives.

    • How do you take the magic of what's possible with Claude Code on the terminal and make it more mass market?

  • One reason vibecoding is addictive: it makes the marginal value of “just one minute” much higher.

    • Each additional minute really could create levered value.

    • It’s “rational” to do that one more action!

    • It’s just that it never stops.

  • Vibecoding lets you get out over your skis.

    • It reduces inhibition, not unlike alcohol.

    • “Go for it! What’s the worst that could happen?”

      • Fly too close to the sun.

    • You can really get yourself in trouble, because you'll attempt things you never would have attempted before.

  • A study on effectiveness of ads generated by AI.

    • AI-generated ads do better than both human-created and human-assisted-with-AI ads.

    • But if you disclose it’s generated by AI, it does worse.

    • Reminds me of the old psych paper where participants rated random nonsense sentences as more meaningful than human-crafted ones.

  • An interesting point about engines and horses.

    • Engines got better at a steady pace for a hundred or more years.

    • It wasn’t until engines hit a certain quality threshold that the car achieved discontinuous adoption.

    • At a certain point of steady improvement you cross a replacement threshold for most people.

    • A gradual process can still lead to discontinuous adoption.

  • A friend points out that code is better seen as an exoskeleton to LLMs’ muscle.

    • Exoskeleton is both support and containment.

    • LLMs benefit from both.

    • LLMs aren’t great at constraints, but old-school code is.

  • You can't retrofit data flow analysis onto the web; it's too taint-y.

    • It was never designed to not taint.

    • Everything is big chonky black boxes.

    • Chrome’s approach to containing agents is a step in the right direction, but still not enough.

  • Product surface discovery happens much slower than technical capability improvement.

    • The process of discovering useful applications is a diffusion process.

      • More similar to culture or art than engineering.

    • Raw model performance is clearly plateauing.

    • But the interesting applications of LLMs are just at the very beginning.

  • If you have 25 years of intuition about how long it takes to make a given piece of software, it's now wrong!

    • What you thought would take a month will take an hour.

    • The LLMs will say "this will take weeks" and then it's done in 9 minutes.

    • We’re in a totally new world.

    • People who know the tech world deeply and aren’t using Claude Code won’t realize how much has changed.

  • Rohit found that he had a hard time getting agents to decide to trade with each other.

    • This was even when there was a direct advantage to collaborating.

  • Trade is only valuable if you have non-infinite time and different abilities than the entity you trade with.

    • These are trivially, obviously true in real world situations, so we never noticed that “trade is good” is downstream of these assumptions.

    • But for LLMs these assumptions don’t obviously hold.

      • LLMs have infinite patience.

      • Agents that use the same model also have precisely the same capabilities.

    • This calculus does change if agents start storing individual state.

      • One LLM, just by dint of experience, could start storing useful state that gives it a comparative advantage.

      • That comparative advantage could then compound over time.

      • But there’d have to be state.

  • Markets emerge automatically where they are useful.

    • Markets are extremely expensive.

      • They introduce a coordination cost, and competition is, from the point of view of the individual, extremely costly.

    • But in some cases, the ability to get components that automatically get better is worth it.

      • A competitive market pushes quality up and cost down.

      • It’s a brutal, bruising fight for the competitors, but everyone else benefits.

      • Markets need a way for users to specify what they need, and a way to distinguish quality.

  • There's a lot of alpha in actually reading the laws.

    • Now LLMs can do that for you!

    • I heard a story about a collective superstition around CalFresh.

    • Counties across the state believed a certain action was illegal… but it was actually a misunderstanding of a 30-year-old, defunct administrative rule.

    • But convincing everyone it was wrong took painstaking county-by-county discussions.

  • Last week I talked about humans as “liability meatbags” for LLMs.

    • Also known, in any system, as a “liability sink.”

    • I like Ethan Mollick’s colorful frame: “sin eater.”

    • The entity that eats the sin allows progress to be made.

    • They agree to own the downside if they’re wrong.

    • It’s more important now than ever, since LLMs make brainless execution cheaper than ever before.

  • As more public spaces are flooded with slop, people will retreat to cozy webs.

    • Places where you know all of the people… or could.

    • Where there’s some gatekeeping happening, some curation of people.

    • A friend predicted that in 2026 we’ll see the first “Human only” spaces.

  • Alive Internet Theory is about resonant spaces online.

    • How can you create cozy, resonant spaces?

    • These spaces resist the Optimization Ratchet by being hidden from it.

  • Extrinsic personalization is manipulation.

    • But intrinsic personalization is empowering.

    • The question is: what is the goal of the entity doing the personalization?

    • Is it to optimize your response towards their goal?

    • Or is it to make it better aligned with your goal?

    • Extrinsic personalization is optimization.

    • When optimization is applied to you, it’s manipulation.

  • Google has a new Disco browser experiment.

    • It asks: what if the browser could generate custom UI?

    • The problem is the same as with all vibe-coded software.

    • Under the same origin policy, you need to trust the creator of the software to be responsible with the data you give it.

    • Software created by LLMs that may be tainted by prompt injection can’t be trusted.

    • So you get little islands of code without meaningful integration with data.

  • Today we are subservient to the app creator.

    • Everything is on their terms.

    • The only choice we as users have is exit.

    • If we choose to continue using it, they can do what they want with our data.

  • Today if someone makes useful software they own your data.

    • ...wait, what?

  • Creating the same origin policy was similar to creating property rights.

    • It said: “the app owns the data.”

    • Like property rights, that simple concept allowed all kinds of things to be coordinated and emerge.

    • But it also creates all kinds of oddities.

    • Property rights are almost certainly worth it on net.

    • I’m not as sure about the same origin policy.

  • Your data is being sold to hedge funds and quant firms.

    • It's doing work for everyone but you.

  • What if you could aggregate your own data to unlock its value for you?

  • Does the software conform to you, or do you conform to the software?

  • The modern world of software is missing its HyperCard.

  • We know we're giving up data for value, but we don't have legibility into the precise value to make an informed decision.

  • In the modern cloud, we've bundled "you don't need to be a sysadmin" with "the platform owns the data."

    • But it doesn't have to be that way!

    • Those two things can be unbundled.

    • With Confidential Compute, someone else can deploy a VM and the user can verify it.

    • That's never been possible before.

  • The SaaS business model is downstream of convincing users to accumulate their data on someone else’s turf and then rent access back to it in perpetuity.

    • It’s a form of a monkey trap.

      • “Want shiny software now? Give me your data so I own it.”

    • For users, it works great over short time horizons.

      • But it works terribly over long time horizons.

    • You have to rent back your own data… forever!

    • Like the cynical PE move: do a leveraged buyout, sell off the physical stores, then rent them back to the hollowed-out business.

    • Users convert ownership to perpetual renting, for a short-term convenience.

  • The same origin model requires silos.

    • As a creator you can’t say “I opt out of silos.”

    • They’re a load bearing part of the security model.

  • Silos were a quick hack that just so happened to indirectly destroy society.

    • Whoops!

  • In terms of market dynamics, AI is more like microprocessors than the internet.

    • There are no network effects, but there is utility for the very first user.

  • Social residue can accumulate into something much larger than the sum of its parts.

    • The Ouija Board effect comes from social residue accreting.

    • All of the little choices of users in the system leave extremely potent residue.

    • Each bit of residue is a mote of cosmic dust.

    • Not that useful on its own.

    • But enough of them together can ignite into a star.

    • In a scaled system, the data the residue is attached to is less important than the residue itself.

  • The iron law of reactive systems.

    • In a reactive system, ensure every update gets you incrementally closer to a convergent state.

    • If updates have that shape, then the system is default-converging.

      • It might take a few execution loops but it will settle.

    • If the modifications are diverging, then each pass can push the system farther and farther from convergence.

    • If your actions don’t have this local characteristic, then the global emergent behavior will be runaway (a toy version of the law follows this list).
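
A toy version of the law, using a numeric state just so convergence is measurable. The contract is that update() moves the state strictly closer to a fixed point on every call; the function and values below are invented for illustration.

```python
def settle(state, update, close_enough=1e-9, max_loops=10_000):
    """Run a reactive loop until the state stops changing.

    If `update` obeys the iron law (each call gets incrementally
    closer to a convergent state), this loop is default-converging.
    """
    for _ in range(max_loops):
        new_state = update(state)
        if abs(new_state - state) < close_enough:
            return new_state  # settled on a fixed point
        state = new_state
    raise RuntimeError("updates are diverging; emergent behavior is runaway")

# Convergent: each pass halves the distance to 42. Settles quickly.
print(settle(0.0, lambda s: s + 0.5 * (42 - s)))

# Divergent: lambda s: s + 2.0 * (42 - s) overshoots every pass and never settles.
```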

  • Agents get poisoned and keep confusing themselves.

    • They see the wrong things in their history and get confused.

    • A self-poisoning doom spiral.

    • The more they get off track, the more that's in the context to confuse them.

    • The word “not” is easy to miss, so every bit of “Don’t do X” just makes them see X again.

  • Decentralized systems that try to ensure privacy work well only with data at rest.

    • But data needs to be in motion to do useful things.

  • Two ingredients to unlock the prosocial potential of infinite software:

    • LLMs that write code create the latent potential.

    • Transcending the same origin model will be the catalyst.

    • You need both; otherwise nothing interesting happens.

  • Software that is a home-cooked meal can be simple and cozy.

    • It doesn't have to be Michelin Three Stars.

    • It doesn’t need to work at scale.

    • It just needs to be situated in its context: resonance.

  • LLMs are kind of like magnets.

    • If you sprinkle little iron filings around, they snap into beautiful patterns.

    • Those patterns make the structure of language and human thought, omnipresent but hidden, visible.

  • A touch sensor is less invasive than a camera.

    • Not far-field, only things directly touching it.

    • With a camera, you don’t know if it’s looking.

    • A touch sensor is only getting information if something is touching it.

  • Someone described the community response to the Resonant Computing Manifesto as an exhale of relief.

  • You feel resonance in your body before you’re even aware of it in your mind.

  • Resonant things are more likely to have been grown than have been built.

  • Resonance requires multi-ply thinking.

    • Social networks were prosocial only in the first ply.

    • But in the other plies they weren't.

  • Hollow things are done mainly for appearances.

    • Whatever the cheapest thing you can get away with.

  • Inductively knowable UX is resonant.

    • Inductively knowable UX is where at each layer you can pull back and understand what it does.

    • There’s no magic, you can peel all the way back to the core of the system.

    • The UX complexity reveals itself to you incrementally as you dig deeper.

  • Great new essay from Ben Follington: Dialogical Design: Humans are world builders

    • Design is not a one-way process in a system.

  • Technology should work for all of us.

    • Not just the people who create it.

  • Packy on Daylight Computer: Hardware is a Fruit.

    • Mentions Resonant Computing and Common Tools!

  • Resonance is easy in situated contexts.

    • It’s only at scale that resonance is not the default.

  • Prosocial Design is a great resource for Resonant Computing.

    • They A/B test various UX interventions to help spread best practices.

  • Coco Krumme has a book: Optimal Illusions: The False Promise of Optimization.

    • Added to my reading list!

  • InstaCart prices goods dynamically to extract more from users.

    • They run experiments to see how “elastic” a given user’s demand is for staples.

    • If you don’t notice that the prices are higher, they keep them higher for you.

    • The definition of hollowness created by optimization.

  • When you over-optimize things they aren't fun.

    • Hollow things aren’t actually fun.

    • This applies to sports, for example.

    • Basketball is considering moving the three-point line out because that's where everyone just shoots from.

    • The game has gotten so optimized that it’s not fun anymore.

  • Modern society shows us what would happen if no one realized there could be a downside to optimization.

  • Resonance is creation and consumption folded in together.

  • Zealots can do useful work in an ecosystem even if they don’t care about the ecosystem.

    • They do their work to support their cause.

    • The fact that it happens to be a part of the ecosystem is just a happenstance.

    • The ecosystem benefits from the focused effort of those users.

  • In a world of infinite software, “creator” feels like the right word for people who create code.

    • Developer implies a level of technical expertise and motivation that’s not required anymore.

    • Creator just implies someone who invests enough time to build something.

    • The bar of motivation to become a creator is an order of magnitude lower than the bar for a developer.

    • Code has never been as easy to create as it is now.

  • The web’s innovation was 404s.

    • SGML had been around for a while.

    • Systems worked very hard to not have dead links.

    • The web said, “you know what, a dead link isn’t that big of a deal.”

    • That allowed for a loose, good enough, self-healing system.

  • Two main superpowers of the web: the link and the iframe.

    • Link: teleport anywhere.

    • iframe: compose any other page.

    • What if links could be plain language wishes?

    • What if you could compose anything else, safely?

  • Malleable software has been possible for a while.

    • It’s just been way, way too expensive to write.

    • You could automate some custom use-case flows in Google Apps with Google Apps Script.

    • But you had to write the software yourself.

    • Or, if the software did exist, you had to find it and trust it with your sensitive data.

    • Even if it did exist, you’d have to think to install it.

    • LLMs can understand your intent and can do the work of integrating it into your life for you.

      • If we can figure out the right exoskeleton for LLMs.

  • Apps are islands.

    • If you're building software, you're stuck within your island.

    • You have to create your whole society inside it.

    • If you're using someone else's software, it can't access other things of yours.

  • Patrick Collison notes a vibe shift.

    • Seems to me that we're past peak big app.

  • The Big App era is coming to a close.

  • I learned of something grotesque in this ValTown blog post about a new feature.

    • They casually mention a “leadgen” tool called RB2B.

    • It looks at the anonymous visitors to your site and attempts to figure out their email address.

    • It does this by combining cookies and identifying actions on other sites in the network.

    • Gross!

  • A system that allowed “wishing” used to be blocked on a coordinated ontology.

    • Different components would have to be precise enough about what they intended.

    • In a clockwork system, this required extremely precise enmeshing of gears.

    • That required a fractally precise coordinated ontology.

    • This had the shape of exponential-cost-for-logarithmic-benefit.

    • But LLMs allow sloppy interconnects: as long as a reasonable human could get it, it’s good enough.

    • This makes systems that are tied together with wishes orders of magnitude more plausible than before.

    • You can get emergent folksonomies that interoperate (a toy sketch of a sloppy interconnect follows this list).
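
A toy sketch of a sloppy interconnect. Here difflib’s fuzzy matching stands in for the LLM’s judgment call (“would a reasonable human map these fields onto each other?”); the record and field names are invented.

```python
import difflib

def sloppy_translate(record: dict, wanted_fields: list[str]) -> dict:
    """Map a producer's record onto a consumer's fields without any
    coordinated ontology: fuzzy-match names instead of demanding
    that both sides enmesh their gears precisely."""
    out = {}
    for field in wanted_fields:
        match = difflib.get_close_matches(field, record.keys(), n=1, cutoff=0.3)
        if match:
            out[field] = record[match[0]]
    return out

producer_record = {"fullName": "Ada Lovelace", "mail": "ada@example.com"}
print(sloppy_translate(producer_record, ["name", "email"]))
# -> {'name': 'Ada Lovelace', 'email': 'ada@example.com'}
```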

  • Don’t train an agent to use Excel.

    • Train an agent to use statistics in whatever way is natural to it.

    • The affordances that humans and LLMs need might be disjoint.

  • If you don’t want to know if your idea works, then you’re just selling a narrative.

    • Style, no substance.

    • Hollow.

  • Insulation from the ground truth lets you take big swings.

    • Most of those swings will just be delusions.

    • But some of them, in retrospect, will turn out to have been innovations.

  • Every innovation starts off as a delusion.

    • An innovation requires doing something that goes against the status quo.

      • A delusion, as far as everyone else is concerned.

    • But some delusions, if they turn out to be useful and actually viable, are revealed to be innovations.

      • Does the startup idea cohere into an Apple… or a Theranos?

    • Hallucinations, delusions, are where the capacity to imagine something different comes from.

    • Without hallucination, nothing outside of the status quo would ever happen.

    • Everything would just pull towards the heat death of uniformity.

  • Larger organizations are much more insulated from ground truth.

    • They have more volume per unit surface area than smaller organizations.

    • Many more interactions are inside the bubble than outside of it.

    • It’s where the organization meets the external world that ground truth is found.

    • A 50,000 person organization with billions of dollars of revenue a year can be insulated from the ground truth for a very long time.

    • A 5 person seed stage startup can’t be insulated from the ground truth for very long at all.

  • You don’t have to be cynical to not point out that the emperor is naked.

    • It’s rude to point it out.

    • Another relevant question: how invested are you in the emperor being ground-truthed?

    • If he’s just some other random person, then pointing it out is all downside.

    • If you feel tied into his fortunes (you do well if he does well), then making sure he’s ground-truthed becomes more important.

  • Goals are required for an organization to cohere.

    • If you hired 50 brilliant, autonomous people and gave no goal, the organization would just diffuse.

      • A lot of random stuff happening, nothing cohering.

    • A consistent goal is required for the various components to start to accrete into something larger.

    • Once accretion happens it tends to self-accelerate.

      • The emergent goal becomes more obviously the thing to pull towards.

      • The gravitational pull gets stronger and stronger.

    • Goals can also emerge bottom-up, as some random bit of noise turns out to work, and other agents see it work and start orienting toward it.

      • The more that others orient toward it and build on it, the more likely other agents also see it and orient toward it.

  • Autonomy requires ownership.

    • Something that is autonomous has to be liable for downside.

    • The famous IBM policy: “A computer can never be held accountable, therefore a computer must never make a management decision.”

  • Clausewitz: nothing has “ever become great without audacity.”

    • Greatness requires audacity.

    • Something that pushes beyond the status quo.

    • Found via this excellent article on castles in the Middle Ages.

  • There’s a discontinuous difference between mutual knowledge and common knowledge.

    • Mutual knowledge is knowledge that everyone individually has.

    • Common knowledge is knowledge that everyone individually has… and everyone knows that everyone knows (formal definitions follow this list).

    • In the mutual state, you don’t know if an action you take will be supported by others.

      • But in the common state, you know it will be.

    • Before it’s common knowledge, spontaneous coordination is hard.

      • After it’s common knowledge, spontaneous coordination becomes the default.

    • The emperor having no clothes is an example.

    • Bubbles are another example.

      • Even if everyone privately thinks we’re in a bubble, if it’s not yet common knowledge, then if you short the bubble you’ll get wiped out.
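
The standard epistemic-logic definitions (textbook formalism, not from the post) make the discontinuity visible: mutual knowledge is a single conjunction, while common knowledge is an infinite tower of "everyone knows that everyone knows…".

```latex
% K_i p: agent i knows p.  E p: everyone knows p (mutual knowledge).
E p = \bigwedge_i K_i p
% Common knowledge: the infinite tower. No finite level of
% "everyone knows that everyone knows..." is enough to reach it.
C p = E p \wedge E^2 p \wedge E^3 p \wedge \cdots
```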

  • Downside-capping rules tend to accumulate.

    • It's hard to experiment and see if it's no longer load bearing.

    • So it can persist for a long time even if it’s no longer relevant.

    • Chaos monkey testing finds the old vestigial rules.

  • Imagine an outsider visits Eden.

    • Everyone is naked and they don’t know they’re naked.

    • The outsider knows they are naked, and knows what will happen when the outside world inevitably encroaches.

    • The outsider wants to prepare them for it, but if they look like the serpent, the naive residents will have them killed.

    • But perhaps being an apple farmer can help.

  • A quote from Silicon Valley: "I want to make the world a better place. Better than anybody else!"

  • Cory Doctorow has an interesting thinking-person’s guide against AI.

    • A few nuggets:

      • Centaur vs. Reverse Centaur: Machine-assisted humans (good) vs. humans as meat peripherals for uncaring machines (the actual AI business model)

      • The Growth Stock Trap: Monopolies must inflate endless bubbles (video, crypto, metaverse, AI) to avoid catastrophic stock revaluation—winning the bet is secondary to keeping the story alive.

    • Daniel Miessler has a critique.

      • He compares Doctorow’s piece to telling people in the 90s not to bother learning computers.

  • Ezra Klein in NYTimes: Pay Attention to How You Pay Attention.

  • A friend’s unpacking of the AI backlash:

    • "So much of the rebellion is against the loss of dignity. 

    • Not just paycheck.

    • it’s Musk, Altman, etc. saying ‘you and the work you do are photo-copyable’

    • ‘We’re going to photocopy your work and you’ll be poor and we’ll be rich and we’ll spend our wealth electing people who will rage culture wars against you and when you’ve finally had enough and rebel, we’ll just go to our NZ bunkers and you can live in your rebellion."

    • Bleak, I can see why people would push back on that!

  • Reflecting on the core insight of the original “Worse is Better” essay.

    • When you offer the product you aren’t saying “this is perfect” but “this might be useful.”

    • Users can self-select into using it.

    • If it’s not quite good enough for them, the user can make it better.

  • Consortiums are easier to grow than to distill retroactively.

    • It’s easier to grow an open single thing that works into a consortium than to retroactively make a consortium out of competing options.

    • The game theory is very different.

  • It’s super unnerving getting an unexpected Granola transcript from a 1:1.

    • “Wait, that was recorded?”

    • Unrecorded 1:1s are magic, because there’s only the two of you.

      • You can talk freely because there’s no record, and in the absolute worst case of them gossiping in a bad faith way, it’s your word against the other person’s.

    • That allows capping the downside if there’s some misunderstanding or tattling.

    • But discovering that it was recorded collapses the magic.

    • It’s like in the movie Onward where if you look down you’ll fall into the chasm.

  • Skeptical and curious is a powerful combination.

    • You find disconfirming evidence, but are willing to be convinced.

    • Skeptical and incurious is a toxic combination.

  • A trick to get yourself to do something you want to want but don’t want.

    • For example: getting out of a warm shower.

    • Do a thing that doesn’t have a downside now but will soon (like making the water colder).

      • Current you takes a cheap action that has little current downside, but has higher downside in the future.

      • It compels your future self to do something different than it would have by default.

    • It's easier to start the clock because it doesn't hurt now.

  • Horses and donkeys require very different training regimes.

    • Horses are skittish.

      • They’re hyper-sensitive to everything around them.

      • They’re fearful, aware of the vibe and actions of everyone around them.

      • They want to succeed.

      • You can use that neuroticism to give them a training regime to improve on.

    • Donkeys DGAF.

      • A donkey isn’t afraid of anything.

      • If it doesn’t want to do something, it simply won’t, and there’s nothing you can do to change its mind.

      • So to train it you have to find the minor instances that it happens to do what you want, praise it, and then be done for the day and try to build on that tomorrow.

      • If you try to get it to do something it doesn’t want to do it will look at you: "well if you aren't happy with me not doing it, then I guess you'll have to kill me."

  • Fast twitch and slow twitch in the right proportion are better than either alone.

  • When you think about it it’s kind of crazy that humans can read.

    • That we can recognize and internalize hard-edged squiggles.

      • They’re completely unlike the realistic shades and gradients we’d see on the Savannah.

    • That our eyes can saccade so precisely on each word.

    • But in another way, we read so much that of course it must fit something we were naturally suited to do.

    • But if it didn’t work we wouldn’t keep doing it.

    • We had the latent ability to read before we discovered it through a process of diffusion.

    • What other latent abilities do we have, just waiting to be discovered?

  • An old saying: "While goodness without knowledge is weak and feeble, knowledge without goodness is dangerous."

  • Cynicism has to be thawed out for beautiful things to grow.

    • Taking that first step to thaw it out is important.

    • It requires making yourself vulnerable.

    • Without vulnerability, nothing can grow.
