Bits and Bobs 4/6/26

Alex Komoroske

Apr 6, 2026
I just published my weekly reflections: https://docs.google.com/document/d/1xRiCqpy3LMAgEsHdX-IA23j6nUISdT5nAJmtKbk9wNA/edit?tab=t.0#heading=h.7il90sj3r769

Icebergs and lilypads. Climbing the foothill. Personality hash. Below-the-API objectification. Human in the loop vs on the loop. Working for your agents. You are not a function. Tedium for free. Intelligence tokens. The changing power dynamics of forks. Language as coordination problem. Controlled Loss of Control.

-----

  • As a software provider, are you selling the software, or the thing the software accomplishes?

    • If the former, then your business is now worth much less given LLMs.

    • If the latter, then there’s no problem, and it’s possibly even good.

  • Improving the software production cycle with LLMs is like trying to add a new room to a house that is on fire.

    • The fundamentals of how software gets created are being reorganized as we watch.

  • When trying to grab people’s attention with your writing, you have to stand out from the average.

    • For example, imagine getting a grant application reviewed.

    • Try to think of all of the other things in your category the reviewer will see.

    • Imagine the average of all of those.

    • That’s the landscape you need to stand out from.

      • The priors.

    • Whatever is surprising based on that baseline is your payload.

    • Your thing will feel automatically special to you, but that doesn't make it special to others.

  • When a human understands specific arcane jargon, you can be pretty sure they know what they’re talking about.

    • But with an agent those two characteristics are disjoint.

    • It might recognize the jargon but not understand it.

  • Slop you created will feel more convincing and better written to you.

    • It’s already aligned with your conclusion (of course it is, you caused it to be created), so you give it the benefit of the doubt.

      • You look at it from your own perspective, aligned with your beliefs, so the deficiencies are subtle and hard to discern.

    • But another reader sees it from their own perspective, which likely views your work from an angle.

      • From an angle, the deficiencies are much more obvious.

  • Once your data is out of your sight, it's out of your control.

  • Icebergs have most of their mass below the waterline.

    • “Below the API” is equivalent to “below the waterline.”

    • Lilypads have almost all of their mass above the waterline.

  • Being able to do more as an individual is great... but also working in teams is part of the human experience.

    • It’s one of the ways we learn.

      • The dynamic push and pull.

    • As we get more individually capable will we be increasingly bowling alone?

    • To coordinate with other humans requires you to convince them.

      • Agents will do what you tell them, happily.

      • That feels better and easier than having to deal with finicky humans.

      • But also the long-term benefit of learning is much less, it’s way easier to get stuck in our ways.

  • “Judgment calls” require different answers from different actors.

    • If every actor agrees on a given “judgment call” then it’s not a judgment call, it’s straightforward and obvious.

    • LLMs are great at tasks that nearly everyone (with enough time and motivation) would agree on.

      • Humans often get bored, but LLMs have infinite patience.

    • The more you can distill a problem in a form that LLMs can knock out of the park, the better.

  • Cognitive labor is a means not an end.

    • You just want to not have to think about it.

    • As long as you trust the system will do a good enough job, you don’t have to worry about it.

    • "Cognitive labor" means "I don't care how it gets done as long as it gets done.

  • There’s a significant difference between a human “in” the loop and “on” the loop.

    • In the loop means that a human is required for the loop to move forward.

    • On the loop means the human is able to correct the loop but it can move forward autonomously.

    • A massive difference in efficiency.

    • If you are in the loop you are the bottleneck.

    • You have to let go of the loop, so it can go arbitrarily fast.

    • When you let go of the loop your system can fly.

      • That requires trust, knowing that it can't hurt you.

    • If you want to maximize the AI system's impact for you, you can't be the limiting factor (a minimal sketch of the two loop styles follows below).
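
    • A minimal sketch in Python, with all names hypothetical: “in the loop” blocks on every approval, while “on the loop” lets work proceed and merely checks for corrections without blocking.

```python
# Minimal sketch (all names hypothetical) of the two oversight styles.
import queue

def execute(task):
    print(f"executing: {task}")

def run_in_the_loop(tasks, approve):
    # Human IN the loop: every step blocks until the human answers,
    # so the human is the bottleneck.
    for task in tasks:
        if approve(task):
            execute(task)

def run_on_the_loop(tasks, corrections):
    # Human ON the loop: work proceeds at machine speed; the human can
    # inject a correction at any time, but nothing ever waits on them.
    for task in tasks:
        try:
            if corrections.get_nowait() == task:
                continue  # human vetoed this task; skip it and keep going
        except queue.Empty:
            pass  # no pending correction
        execute(task)

corrections = queue.Queue()
corrections.put("task-2")  # the human steers without halting anything
run_on_the_loop(["task-1", "task-2", "task-3"], corrections)
```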

  • One of the most frustrating things is when something you want to be below the API pops itself above the API.

    • For example, when the internet goes out.

    • When it was below the API you didn’t have to think about it, but when it’s above the API it forces you to think of it.

      • How dare it!

    • In Victorian England, this is how an upper-class person might feel about a lower-class person.

      •  "How dare the butler presume I care about them as a human?"

      • When you put humans below the API, you objectify them.

        • You make them means, not ends.

      • “How dare my InstaCart driver not pick the eggs I wanted?”

      • Humans are inherently ends.

  • We have limited capacity for things “above the API.”

    • Things that are above the API we have to think about.

    • We have limited mental capacity.

    • That means that there’s a natural (low!) ceiling for things above the API.

    • When you move the agent swarm below the API, you can have many more items without hitting that ceiling.

      • The limit now becomes how many tokens you want to invest.

  • Open loops take up mental space.

    • This is one of the insights at the core of Getting Things Done.

    • When you have an open loop, part of your mental energy is spent keeping track of that item.

      • Similar to busywaiting.

    • A system that makes all of the loops closed frees up mental energy.

    • You know that if you put something into that system, it will definitely pop up in the future when you need it.

    • That allows you to stop worrying.

  • At the beginning of the internet, it was unclear whether you needed both chat and email.

    • Both email and chat can be delivered instantly.

    • But the social implications of them are radically different.

    • The chat modality is default synchronous.

    • The email modality is default asynchronous.

    • In the end, we obviously needed both.

    • The same kind of thing will likely happen for sync vs async LLM interaction.

  • When something is realtime, you use it only when it’s the most important thing.

    • We can only focus on one thing at a time, so our focus is inherently rivalrous.

    • Urgency and importance are distinct, but synchronous channels create urgency, which often accidentally overshadows importance.

  • You can give agents a “personality hash” so they know how to work with you.

    • LLMs are excellent at understanding the meaning of arcane jargon.

    • Two things LLMs know well are Enneagram types and Myers-Briggs personality types.

    • “Alex is ENFJ / 3w4” contains a huge amount of meaning in a tiny package.

    • Humans would have to constantly unpack it to make sense of it, but LLMs can do that unpacking immediately and intuitively.

    • When the agent knows your personality, it knows how to present information for maximum receptivity (a minimal prompt sketch follows below).
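
    • A minimal sketch, assuming a generic chat-completion message format; the hash value is the example from above and everything else is hypothetical.

```python
# Hypothetical sketch: pack a "personality hash" into the system prompt and
# let the model unpack it into presentation preferences.
PERSONALITY_HASH = "ENFJ / 3w4"  # example from the bullet above

system_prompt = (
    f"The user's personality hash is {PERSONALITY_HASH}. "
    "Unpack what it implies about how they like information presented "
    "(pace, framing, level of detail) and tailor your answers accordingly."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Summarize this week's experiments."},
]
# `messages` can be handed to any chat-completion-style API.
```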

  • When you're flying, having to touch the ground is excruciating.

    • When you’re interacting with agent swarms, you’re flying.

      • A very small amount of interaction creates huge amounts of value.

    • When you have to talk to a human, you’re grounded.

      • The leverage is small, you have to convince them and bring them along.

  • The chat UX presumes that the agent can't work autonomously on its own.

    • It has to wait for the human to come back and do their conversation turn before it does its next one.

  • When you have too many agents, they don’t work for you, you work for them.

    • They’re constantly chirping for your attention, and you know that each incremental second of attention you give them will unlock more value, so you feel frenzied to satisfy them.

  • Asking for permission changes the coordination speed by an order of magnitude.

    • A permission check is a sync point.

      • You need to wait for the counterparty to unblock you.

    • Sync points expand the coordination cost super-linearly (a toy queueing sketch follows below).

    • That’s one of the reasons dangerously-skip-permissions is so freeing with agent swarms.

    • It’s also freeing in human organizations, with “Ask forgiveness, not permission.”

    • You trade off execution velocity for safety.

    • Sometimes that tradeoff is worth it.
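
    • A toy illustration of the super-linear claim, modeling the human as a single server answering permission requests with the standard M/M/1 queue formula; all rates are assumed.

```python
# Toy M/M/1 queueing arithmetic: as permission requests approach the human's
# answering capacity, the average wait blows up super-linearly (rates assumed).
service_rate = 6.0  # permission answers the human can give per hour
for arrival_rate in (1.0, 3.0, 5.0, 5.9):  # requests per hour from the swarm
    utilization = arrival_rate / service_rate
    # Mean time a request spends waiting in an M/M/1 queue:
    wait_hours = utilization / (service_rate - arrival_rate)
    print(f"{arrival_rate:>4} req/hr -> avg wait {wait_hours:5.2f} hr")
```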

  • Vitalik Buterin Warns AI Tools Could Become Major Privacy Threat.

  • Bruce Schneier: Cybersecurity in the age of instant software.

    • “As AI advances, the rise of instant, customized, and often ephemeral software solutions will alter the dynamics of vulnerability hunting and patching, and thus the battle between attackers and defenders.”

  • This week in the Wild West Roundup:

  • Excellent piece from Brendan McCord: You Are Not a Function: Why the Race to Stay Useful is a Trap.

    • John Stuart Mill: "a human being is more like a tree than a steam engine."

    • "A tree does not exist in order to produce lumber.

    • You can make lumber from it, and good lumber is nothing to sneer at.

    • But if you look at a tree and see only lumber, you have missed what is standing in front of you.

    • Something is growing there under its own power, toward its own form, and the growing is not a means to some further end.

    • Humboldt’s claim about human beings is the same shape.

    • A person is a self-developing being whose worth is not exhausted by function."

  • Beware climbing the foothill.

    • You’re sighted off of the real peak.

    • But you don’t realize that you’re simply climbing a hill in front of the main one.

    • You’re climbing a smaller hill that you could get stuck on.

      • Once you’ve climbed it, the only way to make progress is to go downhill… something very hard for an organization to do.

    • You’re moving in the right direction, but you will get stuck.

  • With agent swarms it’s possible to be out over your skis and not realize it.

    • Normally when interacting with other humans, someone will point out that you might be wrong.

    • Agent swarms are less likely to do it.

    • You might be in danger and not realize it.

    • We’re so used to “I might be in danger” correlating with “someone will tell me I’m in danger” that we don’t realize they can be disjoint.

    • For example: the reflex to breathe is triggered not by lack of oxygen but by accumulation of carbon dioxide.

      • If you hyperventilate before going underwater to maximize how long you can hold your breath, you expel carbon dioxide more effectively than you pull in oxygen.

      • The result is it’s possible for your body to not realize you need a breath until after it’s too late.

  • A lot of coordination costs inside of organizations are not about "is this a feature we want to support long term" but "is this worth even spending the time to think about?"

    • LLMs can help with the latter, not the former.

    • If you can free up useless energy by automating it, you can spend that energy on higher-leverage things.

    • "Who is willing to invest the time" wins in most coordination scenarios.

    • But now time is cheaper!

  • Maintaining cleanliness is easier than creating it.

    • True for any emergent property in a system, like performance.

  • Tedium is now free.

    • If you can break a complicated problem down into merely-tedious components, LLMs can do it.

  • Ilya Sutskever’s assertion from many years ago: “compression is intelligence.”

    • To successfully compress something, you have to have a predictive model of it (a toy illustration follows below).
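
    • A toy illustration: three models of the same made-up string, where the better predictor yields a shorter ideal (Shannon) code length.

```python
# Toy illustration: better prediction -> fewer bits. An ideal Shannon code
# spends -log2(p) bits on a symbol the model predicts with probability p.
import math
from collections import Counter

text = "aaabaaabaaabaaab"  # highly predictable, period-4

def ideal_bits(prob_of):
    return sum(-math.log2(prob_of(i)) for i in range(len(text)))

def uniform(i):
    return 0.5  # knows only that the alphabet is {a, b}

freq = Counter(text)
def frequency(i):
    return freq[text[i]] / len(text)  # knows the letter frequencies

def periodic(i):
    if i < 4:
        return 0.5  # no context yet
    # Knows the structure: predicts "same as 4 symbols back" at 99% confidence.
    return 0.99 if text[i] == text[i - 4] else 0.01

for name, model in [("uniform", uniform), ("frequency", frequency),
                    ("periodic", periodic)]:
    print(f"{name:9s} model: {ideal_bits(model):5.1f} bits")
# uniform ~16.0, frequency ~13.0, periodic ~4.2: the better the predictive
# model, the smaller the compressed size.
```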

  • One way to get increasing leverage per token over time: have the LLMs extract useful tools.

    • "Look at our Lessons Learned doc and our commit history then create the tools that would have made it easier, faster and cheaper to solve these problems if we knew then what we know now."

    • At every step, look for a place to remove yourself from the loop and put in a process that you trust more than you.

    • Whitehead: "notation creates intelligence." 

      • For example, the notation and concept of zero is a massive unlock for mathematics.

    • Notations can help you think better, they aren't just about storing state and recalling it.

      • Agents can help you produce new notations for your use.

  • Humans feel like they're talking to a consistent person even when a contractor has only skimmed their casefile for 30 seconds.

    • LLMs can skim 100,000x more than a human in that time frame.

      • The limit is the context window, but it allows LLMs to read effectively “instantly”.

    • So of course LLMs will be great at the illusion of knowing you.

  • Tasks often require “intelligence tokens.”

    • A unit of attention from a sufficiently-intelligent actor.

    • Before, most cognitive labor tasks required a human.

    • Human intelligence tokens are quite expensive.

      • Not only do you have to find and train the human, but you need to pay them continuously, keep them happy and engaged, etc.

    • LLMs are good enough at many tasks, and their intelligence tokens are orders of magnitude cheaper than humans.

    • Normal mechanistic code has even cheaper intelligence tokens… but the tools by default aren’t useful in a given domain.

  • A lot of emergent-value things have required a large, heavily commercialized actor at the center.

    • That center point is the one whose turf it is; who can make sure it’s safe and stable.

    • It has to be somewhat large, because emergence is a Grubby Truffle; the power of it shows up only at larger scales.

    • The downside is that everyone participating is doing it in the shadow of a powerful actor.

      • That power makes them more likely to become greedy or evil.

    • What if it were possible to have small-scale emergence without any single powerful actor?

  • Will the value in the ecosystem accrue to skills or harnesses?

    • Or possibly something else?

    • Agentic harnesses like Claude Code have a few components.

      • 1) the core agentic loop,

      • 2) a ton of code to produce a janky TUI.

    • As the code leak showed, there’s no magic in Claude Code.

    • The TUI is only really important if the user interacts with the agents via synchronous chat modalities.

  • If you have a giant island, it's hard to copy the cool small things upstarts are doing.

    • The bigger the island, the more likely that adding a feature interacts with another feature, so the harder to incorporate.

    • Emacs has inside of itself not just one but two Vim implementations, apparently.

  • How big your reusable chunks are defines how broadly you can be used.

    • Large reusable chunks are less likely to work in any given environment.

    • All it takes is one feature that is incompatible with the environment.

    • The likelihood of that grows multiplicatively (toy arithmetic below).
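
    • Toy arithmetic, with an assumed per-feature compatibility probability:

```python
# If each feature in a chunk is independently compatible with the target
# environment with probability p, the whole chunk fits with probability p**n.
p = 0.95  # assumed per-feature compatibility
for n in (1, 5, 20, 50):
    print(f"{n:3d} features -> {p ** n:.0%} chance the chunk drops in cleanly")
# 1 -> 95%, 5 -> 77%, 20 -> 36%, 50 -> 8%
```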

  • If everything is woven together too tightly, there's no room to adapt.

  • End-to-end encryption in messaging is now table stakes.

    • How did it happen?

    • E2EE has an abstract benefit, and has no concrete downside for users.

    • That created the gradient where it could grow without bound.

    • All it needed was the right seed crystal to kick it off.

    • If it had concrete downside, then that would have dominated the abstract benefit and it likely wouldn’t have taken off.

  • Can you muscle through the process or do you need a compounding process?

    • It all comes down to “how long is the tail?”

    • If the tail is short and stubby, a linear, heroic, Gilded Turd approach might be viable.

      • If there’s no tail, just muscle through as quickly as possible.

    • If the tail is long, then no amount of muscling through will get you there.

      • You have no choice but to find and recruit a compounding process.

  • Two things that are hard about software: creating it and getting people to use it.

    • LLMs make creating it orders of magnitude easier.

    • But they also make the latter easier.

    • Humans need the tool to not only be useful but also enjoyable.

      • If it’s not enjoyable, at any given point they might lose patience and give up.

    • Agents have infinite patience, so they will crawl through broken glass if necessary.

  • It’s important to watch UXR studies in real time.

    • The UI you created is often obvious to you.

      • It must be, you created it!

    • Watching real users fumble with it is an eye opening, grounding experience.

      • You want to shout, “The button is right there!!!”

    • When you watch it live, you sit with the frustration, it’s impossible not to feel it.

      • It’s excruciating.

    • When you watch it later and can fast forward, you don’t have the same visceral reaction.

  • The best UXR comes up with a mental model and then designs studies to falsify it.

    • If you experiment without a hypothesis you might overreact to noise.

  • Users adopt products mainly for what they can do today.

    • Not what it might be able to do in the future.

  • The power dynamics of forking software have changed.

    • It's now possible to have a recurring agentic process adapt and pull in upstream changes continuously.

      • The toil is significantly lower than before.

    • Making software forkable might make it less forked, in general, now that forking is easy.

    • Forkability puts pressure on the software to conform to user demand.

      • If the user doesn’t like it, they can fork, at significantly lower cost than before.

  • It used to be that you had to take whatever the software provider gave you.

    • If it had a nice UI and also took your data so you had to rent it back, well, what are you going to do?

      • Software was expensive to create, so you just had to go along with it.

    • But now imagine a piece of software holds your data hostage.

      • It’s now just as easy to clone the software’s behavior in your own implementation, and own the data.

  • The famed Disney film creation process is about iteratively finding the core of the film.

    • In each round of review, you see what’s working, what is resonant, and lean into it.

    • You repeatedly see the emotional throughline of the story, and then make it more that way.

    • Similar to the oscillating retconning strategy of improving a working product.

    • More of a blossoming process than a building process.

  • Businesses like Tegus use the demand of customers to build out their backcatalog.

    • A customer comes and asks for interviews on a given topic.

    • Tegus sources them, charging the customer the cost to produce it plus margin, gives the results to the customer… and then puts the report in their catalog.

    • Their leverage comes from how many new requests can be serviced by their back catalog.

      • That’s effectively free value that was bankrolled by their customers’ demands.

      • It works better to the extent there’s overlap between different customers’ demands, and also to the extent the analysis is durable and timeless.

  • A process can achieve its goal even if none of the actors executing the process understand the goal.

  • We talk about collective intelligences all the time, naturally.

    • We talk about what “the market” wants.

    • We talk about what “your body” wants.

    • These are responding to the whole emergent system, not individual voices in it.

  • Another thing abundant cognitive labor can be applied to: navigating government bureaucracy.

  • An open system that has meteoric growth is the best.

    • As it gets more momentum, more and more others choose to participate.

      • Coordination costs are high, but it’s a no-brainer for everyone to join in.

    • An open system that's stagnating is the worst.

      • All the downside, none of the upside.

  • At scale, weak but consistent signal is more convincing than strong but inconsistent signal.

    • It’s hard to fake having every component just happen to consistently point in a given direction.

    • Imagine that someone has to interfere with the signal to fake it.

    • That cost goes up with the number of places they have to interfere.

    • If everything points in the same direction, that implies there’s some fundamental process beneath the surface driving the alignment (toy arithmetic below).

    • All it takes is one or two examples to invalidate such a hypothesis.

    • This process can help you discover massive but hidden forces.
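
    • Toy arithmetic, assuming each weak signal would point a given way only half the time by chance:

```python
# If n independent weak signals each point a given way by chance with
# probability 0.5, all-n agreement by coincidence has probability 0.5**n.
for n in (3, 10, 30):
    print(f"{n:2d} aligned weak signals -> {0.5 ** n:.1e} chance of coincidence")
# 3 -> 1.2e-01, 10 -> 9.8e-04, 30 -> 9.3e-10
```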

  • An effective writing technique that LLMs have learned to imitate: “Connect nine of the ten dots.”

    • The last dot is obvious and trivial, but the reader connects it.

    • Because the reader connects it, they feel ownership over the insight, instead of it being pushed on them.

    • Chase Hughes, a former behavior analyst for the Navy, describes this technique as “elicitation.”

      • By leaving incomplete information, you invite the other person to fill in the gaps.

      • Useful for building rapport, but also for getting people to flesh out information you only partially have.

  • Default-divergent systems tear themselves apart.

    • Default-convergent systems pull themselves to their resonant throughline, automatically.

  • Naming a project helps it become default-converging.

    • Before there’s a name, there’s just an amorphous mass of possibility.

      • Different team members might have different views on it.

      • Default diverging.

    • Once there’s a name, there’s a capital-T Thing.

      • Everyone can point to it and orient off of it.

        • Everyone might have different definitions of what it is, but everyone can agree it exists.

      • Now it becomes much more likely to be default-converging.

  • In creative endeavors, there are sometimes huge numbers of collaborators who need to create a coherent vision.

    • The best practice is to have a “bible.”

    • Only the creative lead is allowed to add things to the bible.

    • Something in the bible is not to be questioned, it’s like the word of god.

    • This pattern is useful for working with agents, too.

  • Coordination cost needs to be paid if there isn't a Schelling point.

    • Coordination costs are huge.

    • Schelling points take the coordination cost from large to ~zero.

    • All it takes is a point everyone can agree is good enough.

      • What counts as good enough changes as more and more others choose to use it.

    • A Schelling point can emerge when there’s a critical mass of entities who can agree on one point, and as it gets momentum, it pulls in others, too.

  • Language is largely a coordination problem.

    • People speak in order to be understood.

    • Words are an emergent folksonomy.

    • We use words we’ve heard from others (as long as they make sense to us) since, all else equal, that’s more likely to work in the future too.

  • Someone pointed out to me that cucumbers are technically fruit.

    • This is one of those gotcha observations that is technically true but fundamentally uninteresting.

    • The culinary concept of fruit vs vegetable is about their sweetness.

    • There is no botanical concept of a vegetable.

    • There is a botanical concept of a fruit, but it’s about the part of the plant.

      • Specifically, the mature ovary.

    • It’s not that a cucumber is a fruit, but that it’s the fruit part of the plant.

    • So in the botanical sense it’s like asking “is the edible part of the plant the fruit or a different part?”, which is obviously uninteresting in any culinary sense.

  • I learned from Ze Frank’s Gecko video that water droplets shoot off a hydrophobic surface.

    • When smaller droplets touch, they are pulled into a larger droplet, with much less surface area for the same volume.

    • Surface tension stores energy, so the merger frees some of it up.

    • The freed energy from the contraction has to go somewhere, and it can’t push into the fixed surface, so it blasts the droplet away from the surface (toy arithmetic below).
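
    • Toy arithmetic for the mechanism, with an assumed droplet size and the textbook surface tension of water:

```python
# Merging two equal droplets conserves volume but cuts total surface area,
# so surface energy (gamma * area) is freed to launch the merged droplet.
import math

gamma = 0.072  # N/m, surface tension of water
r = 20e-6      # m, radius of each small droplet (assumed)

area_before = 2 * (4 * math.pi * r**2)
R = (2 * r**3) ** (1 / 3)  # merged radius, from volume conservation
area_after = 4 * math.pi * R**2

freed_energy = gamma * (area_before - area_after)
print(f"surface energy freed: {freed_energy:.1e} J")  # ~1.5e-10 J
```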

  • Every cell on earth, no matter how different the organism is, uses the same DNA “codec.”

    • That makes every cell “compatible.”

  • We recognize ourselves in the mirror as being us.

    • The mirror image is the version of us we’re most familiar with seeing.

    • When we see ourselves how others see us, it feels like seeing a stranger.

    • A disconcerting feeling: totally familiar and yet totally off somehow.

  • I am hyper-aware of time.

    • I have an intuitive sense of “are things converging fast enough,” where “fast enough” is downstream of how much time is allotted.

    • If things aren’t converging fast enough I feel physically strained.

    • Once I’ve set a long-term goal, my steering loop is to continuously minimize that non-convergence strain.

    • One way that manifests is that when I sit down in a meeting I try to orient myself so I can glance at the clock inconspicuously.

    • That allows me to keep an eye on the convergence.

  • Controlled Loss of Control: CLOC.

    • Mario Andretti, F1 legend: "When you think you have everything under control you're too slow."

    • Sometimes you need to lose control, but intentionally.

  • Kids can only simulate one worldview.

    • Adults can hold multiple simultaneously.

    • This is one of the big maturity leaps needed to be an adult.

    • Similar to the leverage you get from leading one agent vs leading a swarm.

  • Gradient descent gets super linearly more effective in multiple dimensions.

    • You only need one dimension with an active gradient to descend in order to escape a local minimum.

    • The chance that at least one dimension has a downward gradient goes up multiplicatively with more dimensions (toy arithmetic below).

    • This is one of the reasons both gradient descent for LLMs and evolution are unreasonably effective.

    • We’re used to a puny three dimensions.

    • An excellent video about how proteins can be discovered by evolution so effectively.
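
    • Toy arithmetic, with an assumed per-dimension escape probability:

```python
# If each dimension independently offers a descending direction with
# probability p, the chance at least one does is 1 - (1 - p)**d, which
# races toward certainty as the dimension count grows (p assumed).
p = 0.05
for d in (3, 100, 10_000, 1_000_000):
    print(f"{d:>9,} dims -> {1 - (1 - p) ** d:.6f} chance of an escape route")
```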

  • One of the things that being a parent does is force you to commit your time to something that is not about you.

    • And to do it gladly.
