Bits and Bobs 2/23/26

12 views
Skip to first unread message

Alex Komoroske

unread,
Feb 23, 2026, 2:55:27 PM (3 days ago) Feb 23
to
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.9ssfdwihd6x6

Coding as a means. TEMU of apps. The Gilded Turd vs the Grubby Truffle. Prompt-injecting the kernel. Cognitive debt. LLMs as the universal universal computer. Convergent Emergent processes. Meat appendages. Addictive red-queen races. Enough breadth to see, enough depth to feel. Bless the Broken Road.

----

  • OpenClaw has captured significantly more imagination than usage.

    • And it’s captured a significant amount of usage!

    • But still, the audience of people who are intrigued but intimidated is 10x the size of people who are actually using it today.

  • Claude Code's PMF is orders of magnitude stronger than ChatGPT's.

    • Despite ChatGPT’s being much larger on an absolute basis.

    • Every bit of Claude Code usage is paid.

      • Perhaps with heavy subsidies for top users in Max plans.

    • Most of ChatGPT’s usage is given away for free.

  • When I start tasks now, I often create a new local repo, git init, and run Claude.

    • I find I do this for any task that even possibly might accumulate some code or data.

    • A Claude web app session is about the chat primarily.

    • A Claude Code session is about the durable files and code that accrete.

    • Which is more important, the chat or the data / code it creates?

    • I only need to pay Anthropic when I need to make changes to the local files.

    • That data and code in that project work even if I stopped paying Anthropic entirely.

    • They are mine.

    • Contrast that with Claude web chats, which are owned by Anthropic.

  • The codebase matters less than the process to create it.

    • Just like previously, the compiled binary mattered less than the source code to create it.

    • Now, LLMs can translate specs and ideas into working code.

    • Some of the most valuable information is what the developer told the LLM in the process of creation in Claude Code.

    • Yet Claude Code deletes those transcripts after 30 days today.

    • Those chat logs should be treated with more respect than the resulting code!

  • We’re going to see a ton of security veneer in the next few months.

    • Tons of companies trying to take OpenClaw style experiences and make them “safe.”

  • For most people coding is a means.

    • For some it is an end.

      • Joyful creation.

    • Now with Vibecoding that joyful creation audience has grown 100x .

      • Because the amount of broken glass has gone down by an order of magnitude.

    • But even so, most people will never get joy from coding.

      • Software for most people will always be a means to an end.

  • Means should be as invisible and low friction as possible.

    • The vast majority of people don’t want to think about the features of their software... they just want them to work.

    • Software is a means to an end.

  • Each successful layer of abstraction gives you an order of magnitude more leverage.

    • “Succesful” layer of abstraction means a layer that you could peek under… but never need to.

      • It successfully abstracts over the complicated internals so you don’t need to worry about them.

    • Coding agents getting good enough that you don’t ever have to look at the code is one of the big unlocks for this new order of magnitude of productivity.

  • People don’t want a TEMU of apps.

    • Seemingly infinite apps… all of which suck.

    • People don’t want more apps.

    • They want software to do what they need help with without requiring them to think about it.

    • It should be so well aligned with what they’re trying to do that it just fades away.

  • This week’s Wild West roundup:

  • This week’s “through the looking glass” roundup:

  • LLMs allow you to do more. 

    • Are you doing to do MOAR, and hollow yourself out?

    • Or are you going to go deeper, and create more resonance?

    • Similar to "If thinking is 10x cheaper will you think 10x faster or 10x deeper?"

  • If you aren't using claude code you're being left behind.

  • The Atlantic: The post-chatbot era has begun.

    • Even non-tech outlets get that we’ve already transcended mere chatbots.

  • With agentic engineering, features that were once nice-to-haves are now just haves.

    • It’s so easy to add features, there’s no reason not to.

  • Jenny Wen makes the case that our current best practice design process is obsoleted by LLMs.

    • The process that was the best practice fundamentally assumes that software is extremely expensive to create.

  • Two very different kinds of beauty: the Gilded Turd vs the Grubby Truffle.

    • The Gilded Turd is superficially beautiful, but fundamentally disgusting.

    • The Grubby Truffle is superficially disgusting but fundamentally beautiful.

    • Even an infinite amount of polishing can’t make a Gilded Turd fundamentally beautiful.

    • Closed systems tend to be Gilded Turds, and open systems tend to be Grubby Truffles.

    • In the modern world, superficial appearances are all that people have time for.

    • We’re surrounded by Gilded Turds.

    • Hollow. Shiny. Empty. Gross.

  • The word app implies silo.

    • That makes them clean and easy to reason about.

    • But it also makes them fundamentally non-composable.

    • Disconnected islands of functionality.

    • Each island is its own thing.

    • Nothing greater can emerge out of the collective.

  • Remember: LLMs are able to execute text, which makes all text basically code.

    • Even if you trust the creators of the skills you’re using, any untrusted text from any source you’re working on can screw you.

  • App stores get nearly infinite power because there is no way around them.

    • But an app-store like thing for an open ecosystem (like npm) can’t get the same power.

    • It’s always technically optional.

      • It’s more just a convenient schelling point for trust to accumulate.

    • App stores on mobile OSes are load-bearing, not conveniences.

  • The ecosystem currently assumes an npm-style approach will work for getting good-enough security for skills.

    • But skills can evolve and apps can’t.

    • Skills are open-ended in that they are fuzzy and non-deterministic.

    • Skills are also vulnerable to prompt injection, fundamentally.

    • Even if you trust the skill creator, the data you import can attack you.

    • Npm-style approaches work best for finite software, where there is non-trivial overlap for long-term trust to accrue.

    • It also requires knowledgeable users to look at trust signals.

  • There’s been a vibe shift about AI’s ability to create.

  • NYTimes: Dinner Is Being Recorded, Whether You Know It or Not.

    • Your privacy is being invaded by strangers and their decisions in the public spaces you’re in.

  • Cory Doctorow, Tim Wu, Ezra Klein: The Internet Feels Miserable ‘By Design.’

  • Rob Dodson: How I Built My Mobile Second Brain.

  • Apple could never build OpenClaw.

    • It requires emergence.

    • It requires messiness to the point of recklessness.

  • Imagine if your OS’s kernel could be prompt injected.

    • That should terrify you!

    • The kernel is the laws of physics.

    • It must operate correctly.

    • This is something I think Karpathy gets fundamentally wrong in his sketch of an AI-era OS.

    • Anything with this architecture cannot be made into a safe mass-market product.

    • It can at best be a gilded turd.

  • Data has power.

    • It's not just bits.

    • The bits mean something.

  • Abundant cognitive labor is now possible.

    • So make sure it's working for you, not against you!

  • An important concept is cognitive debt.

    • Cognitive debt is your inability to understand the system and how it works.

    • Cognitive debt doesn’t matter if the abstraction is non-leaky and you don’t need to know how it works.

    • The more you automate creating systems, the less you understand how they work.

      • Each incremental bit of work it does for you, the more remote it becomes.

    • When you take on too much cognitive debt, at some point you hit a wall where you’ve delegated so much you can’t even reason about it.

  • If you delegate too much, at some point the human becomes a copy-and-paste peon.

  • You learn when your brain is engaged.

    • Tutorials work better if they don’t introduce cognitive debt.

    • The tutorials in Halo hold your hand through every single step… the user doesn’t need to turn on their brain.

    • Compare to Minecraft, that drops you right in.

  • The only entity who can decide what features work with your data is some random company.

    • Wait what?

    • This only made sense when software was precious and artisanal.

  • A fascinating paper by some friends: Reasoning Models Generate Societies of Thought.

    • Some of the techniques that human systems have evolved because they create resiliently great ideas (e.g. having diverse teams) show up automatically in LLM architectures, too.

  • LLMs present a coherent, “it’s a person you can talk to” mental model that is easy to grasp and also fundamentally wrong.

    • LLMs work much more like a blackboard system.

      • Bottom up, emergent, unknowable.

    • The comforting mental model gives us a miscalibrated intuition for what they can do and what their failure modes are.

    • When you’re talking to LLMs you’re talking to an emergent system.

      • More like pond scum than a human.

    • Then again, you could say the same about human minds and the easy, comforting, and incorrect mental model implied by consciousness. 

  • Tech people will play with a new product based on its promise.

    • But for the mass market it needs to work and work reliably.

  • If a system is open but so different from everything else, it is effectively closed.

    • It must be possible to adopt a new system across a gradient.

  • Vertical SaaS is a spreadsheet in a trenchcoat.

  • LLMs are the universal universal computer.

    • Code is data in a very particular shape.

    • Now it doesn't have to be in that particular of a shape.

    • LLMs can make sense of just about any data.

    • Even data with an intent to execute.

  • Software eats the world because it can be replicated ad nauseum.

    • Bits are non-rivalrous.

  • The magic to create this software is no longer held by a cabal of magicians.

  • Top-down approaches have logarithmic value for exponential cost.

    • Bottom up / emergent approaches have exponential value for logarithmic cost.

    • The former create value quickly but then hit a fundamental ceiling.

    • The latter take time to get going but then are unstoppable.

  • The most powerful processes in the world are Convergent Emergent processes.

    • Emergence (self-driving) plus convergent (coherent).

    • These processes get stronger the more they scale.

    • Folksonomy is a good example.

    • So is Wikipedia.

    • They are rare but common if you know where to look and how to harness them.

    • They cannot be created in a lab, they have to be grown.

  • Wisdom of the crowds only works in default-convergent contexts.

    • If it’s a default-divergent context, the random noise doesn’t cohere.

  • In a cacophonous environment, the selection pressure is for superficial optics.

    • No one has time to do the deeper check on fundamentals.

  • Token furnaces are where you burn tokens to produce value.

    • Sometimes the value is so large that even if you have to burn insane numbers of tokens it’s still worth it.

  • A 2026 burn about bad writing: “This would have been better if an LLM wrote it.”

    • It’s an insult… but it’s also often true.

    • LLMs are distinctly better than the average adult at writing.

  • Why do we have open source software, but few other industries do?

    • Mostly because software is data, data is bits, and bits are non-rivalrous.

    • You can have the bits, and I can too.

    • Atoms are inherently rivalrous, but bits are inherently non-rivalrous.

    • Open source is easy in a world of bits, and hard in a world of atoms.

  • Things that are easier to measure get more attention.

    • This is a fundamental, inescapable, core asymmetry.

    • It’s why optimization always wins.

    • This asymmetry leads to a fundamental overweight of short-term, direct effects.

  • As a system gets increasingly cacophonous you get more superficial takes.

    • That creates more cacophony.

    • A compounding loop.

  • Most Rust engineers don’t need to know how borrow checking works.

    • They just know it’s protecting them in a deep way.

  • If a system constantly cries wolf with security questions, users will just turn on YOLO mode.

    • Either explicitly, or implicitly by just hitting accept blindly on any dialog.

  • Ads should not happen at the level of the ISP.

    • Ads in an ecosystem are fine… healthy, even.

    • But not at the ISP level.

    • The ISP level should be about serving the bits without interference.

    • The pipes should be dumb, not “smart.”

  • When you enter polish mode, you hunker down.

    • You don’t add functionality to the product, you just add robustness and polish.

    • When it feels like you’re 80% of the way done, you’re actually 20% of the way done.

    • If your goal is to have a perfectly polished thing, you’ll spend most of the effort on polish.

    • You’ll lock in whatever fundamentals you had quickly gotten in place.

    • If the product is complex, it will take time to polish all of it.

    • That means that you lock yourself in place, and it could be up to a year to get it out to market.

    • If the market is moving quickly, by the time you launch, you’ll have a year-old product.

    • In a fast-moving environment, big-feature sets with high polish aren’t viable.

      • You’ll be late by the time you ship.

      • Either have very small feature sets, or low polish.

  • Where data accumulates is the center of mass in a system.

    • Both in terms of where the strategic power is and for bootstrapping a system.

  • Nearly every useful feature of Twitter started first in userspace before being formalized in the platform.

    • Hashtags.

    • @ mentions.

    • Retweets.

  • In the era of agentic AI, we feel the onus of orchestration.

    • We’re always the hold up now.

    • The AI swarm is always ready for our next judgment call.

  • The worst kind of red-queen race is an addictive one.

    • You get trapped in it to start because it’s superficially enjoyable and addictive.

    • But now you’re in a red queen race that you can’t opt out of, lest you fall behind.

    • "I like this superficially but also if I stop I will die."

    • A maximum loop that is impossible to get out of.

    • This is the default state of modern society.

  • One red queen race possibility for the age of agentic AI: a race to get individual superpowers.

    • Everyone gets an edge over their peers if they apply AI more effectively.

    • But then their peers compete too and they have to push even harder to regain their edge.

    • This could end up in a hobbesian hellscape.

  • A piece of software that it’s hard to imagine a company making: Candy Crush, but it only works when you’re offline.

    • The end-user might want it: “only let me play this addictive game when I’m on a flight,” or “only allow me to play this if I turn off the other useful parts of my phone, which gives me a nudge not to use it.”

    • If you build the software yourself, you can align it with your interests easily.

    • But if a company built the software, adding a feature of “it only works offline” makes no sense; they might as well get the incremental use from online, too.

    • Now with infinite software we can make our own software more easily.

  • Is the system optimizing for your goals or a corporation's goals?

    • Corporations only care about the parts of you that align with their business interests.

  • The walled garden isn't evil, it's that it must by construction optimize for its interests above yours.

  • The same origin paradigm is fundamentally about a provider enticing you with nice software so they can hold your data hostage.

    • Before it was hard to do things with your own data at scale, so it wasn’t a big deal that someone else held it hostage.

  • The misalignment of incentives also comes down to the physics problem of the same origin problem.

    • Because you must give your data to someone else to get value out of it.

    • Now it works for them more than it works for you.

  • The optimization ratchet is often turned against you.

    • Companies are very good at optimizing.

    • Their interests will dominate your interests, naturally.

  • Where does the data accrete?

    • That's the most important strategic dimension of any system.

    • Data is state is momentum.

  • Use Protocols, Not Services

    • Protocols can't be taken away.

    • Services can be.

    • A service operates on someone else’s turf.

    • They call the shots, and can take it away from you.

  • StrongDM and OpenClaw  are downstream of where LLMs hit a new scaling threshold of agentic ability.

    • They were inevitable; the time had come.

    • They were at the right place at the right time to surf the wave no one had seen yet.

  • Someone pointed out that “dragon rider” sounds like “dragon chaser.”

    • The latter is apparently slang or someone addicted to a particular drug.

    • The connection feels useful; the dragon rider is extremely capable… and also can’t stop themselves.

  • Some business models use float-based financing.

    • This is one of Warren Buffet’s favorite tricks.

    • Two businesses that look superficially similar but differ in this key dimension will have radically different long-term outlooks.

    • The insight is that growth can be self-funding if your cash flow timing works in your favor.

      • For example, you collect payment before having to pay for the goods.

    • If your business has this shape, the larger you grow, the more leverage you get.

    • The value is duration multiplied by volume.

      • It works for insurance (float of years) and it works for payments (float of hours).

  • In case there were any doubts about the political influence of X’s feed algorithm, this paper in Nature should put them to rest.

    • Remember, the entity that controls what you see controls what you think.

  • This week I learned about Community Memory.

    • It was the first local bulletin board system, in Berkeley, in 1973.

  • How do you get AI empowering real people?

    • Instead of only empowering the smarty pants tech oligarchs who already have so much power?

  • The .ai domain will be this era's .com

    • Just kind of "duh,” the unremarkable default.

  • Modern society is all about scale.

    • And thus transactionalism.

    • Finite has dominated infinite.

  • Axios: Integrity's moment of peril.

    • An article about prediction markets as the apotheosis of modern transactionalism and lack of shame.

    • Another example of our over-optimized society.

    • You know it’s bad if even Axios is calling it out!

  • Modern society optimizes the humanity out of interactions.

    • I ordered a couple of TVs from Costco.com.

    • Our primary office is unit 222, but we wanted them dropped off in unit 224.

    • The delivery contractor told us that he was personally liable if he delivered to the wrong address.

    • Due to that he felt like he was put in an awkward position.

      • What he wanted to do for us as a human was dangerous for him as an employee.

    • Some MBA somewhere thought that making the employee have personal liability would align incentives better.

    • But it does so by taking already marginalized people and pushing them even harder into the meat grinder.

    • The person who made the policy doesn’t have to deal with the awkward, inhuman interactions in person.

  • Kurtzgesagt points out that self-regulating around modern food is just too hard.

    • You have the whole weight of capitalism’s optimization machinery to compete against.

    • It’s not possible.

  • What is the human-centric enablement of AI, vs an AI-centric enablement of humanity?

  • Evolution is an emergent algorithm for innovation.

    • It runs as fast as the substrate it's operating in.

    • It needs variation and some selection pressure.

    • The variation doesn’t need to be “creative".

    • It can be random noise, or systematically introduced variation.

  • First, get the loop to close, then get it to be tight.

    • Getting the loop to close is going from default-divergent to default-convergent.

      • The shift is an infinite difference that in the moment feels mundane.

    • Once it’s default-convergent, tightening is a simple matter of hill-climbing.

  • It’s great when you find a clunky workflow that’s worth doing.

    • It’s worth doing when it’s clunky, so that means it will definitely be worth doing when it’s less clunky.

    • Now, you just need to make it less clunky.

    • This is a core asymmetry.

    • Once you clear the bar of the loop closing and it being worthwhile, you’re in default-convergent.

    • From there, you can ski down hill.

    • The hard part is finding the thing worth doing even when it’s clunky.

  • Immature systems only become mature through load-bearing use.

    • It doesn't have to be perfect but it does have to work.

  • It’s easier to get convergence with a team of line cooks than chefs.

    • Line cooks are hyper competent but don’t have their own vision.

    • Chefs have their own vision and need to do their own thing.

    • Pre-PMF needs convergence, otherwise the entire enterprise diffuses to nothing.

  • Convergence-oriented people have to be in an environment they’re aligned with.

    • If they aren’t aligned with the environment, they’ll either

      • 1) burn themselves out in frustration, or 

      • 2) randomize and tear apart the system they’re in.

    • Which one it is comes down to how powerful the person is in that environment.

  • When someone doesn’t believe in the goal of the collective they try to stick out in a new direction.

    • They bet their direction will be more valuable.

    • But a) it’s more fun to do your thing than the existing thing and

    • b) misaligned energy erodes the strength of the collective's pull.

    • The smarter someone is, the more charismatic, the more they decohere it.

    • Which is like death to the existing collective.

    • Sometimes it really is radically better and everyone is better off.

    • But if it’s not it just grinds down the team.

  • "Yes, and" is not "yes to everything.”

    • It's "yes" to the core thing you think could be great.

      • Sometimes that's the whole thing.

      • Sometimes that's an itty bitty part, so small you can barely see it.

    • But “yes, and” must have curation and discernment.

    • It works better when you have well-calibrated taste.

    • How well it works is tied to:

      • 1) how long your time horizon is,

      • 2) how cheap seeds are to plant for you, and

      • 3) how calibrated your taste is.

    • "Yes, and" on the interesting subset is novelty maximizing.

  • Big teams for successful products don't go slow because they get lazy, it's because they're at a lower pace layer!

    • Things depend on them.

    • They have leverage.

    • If you go fast, you break real stuff.

    • When you get more leverage, you go slower.

    • That’s the fundamental tradeoff of leverage on your product, you must drift to a lower pace layer.

  • A powerful combination: enough breadth to see, enough depth to feel.

  • Intellectually charismatic people can sometimes dazzle their audience.

    • Their audience is convinced, but not because they understand, but because they are overwhelmed into intellectual acquiescence.

    • Understanding is fundamental; acquiescence is superficial.

    • That means that they have to be repeatedly reconvinced in a way they wouldn’t have to if they understood.

  • The technical term for when a tool merges with our mental model is mechanical sympathy.

    • Humans are really really good at tool use.

    • When a tool is predictable, it starts feeling like an extension of our body.

    • It melds into our mental model, seamlessly.

    • Only tools that are predictable can develop this.

  • Investors would rather fund a taco stand than a Mexican restaurant.

    • The taco stand can demonstrate traction more cheaply and then make follow-on investment much less risky.

    • Investors will always pull the entrepreneur back to the taco stand framing.

    • A taco stand can sometimes be a zombie, impossible to grow beyond some small ceiling.

    • But they’d rather box in a potentially great idea into a taco stand cul de sac and miss its major payoff, than to go all in on an expensive Mexican restaurant that fails.

  • People who know how to do abundance can maximize upside.

    • People who know how to manage scarcity can cap downside.

    • They’re different skillsets and orientations.

  • You could live in a library, or study in a dive bar.

    • But you’d have to fight the structure to do so.

  • The Saruman mindset sees itself as anti-authoritarian.

    • But actually it’s against others being authoritarian.

    • The Saruman worldview is powered by the complete and total absence of self doubt.

    • So when they themselves are the authoritarian, everyone should just recognize how great that is.

  • Sarumans have a slight edge over Radagasts.

    • We all love narratives with clear stories.

    • The Radagast worldview is indirect; harder to distill into a story.

    • The Saruman worldview is easier to distill into a story.

    • So everyone is always looking for "who was the person who made this whole thing happen this way?"

  • N personality types get exhausted by tedium and details.

    • The routine details don't have any joy in them.

    • Some details are routine.

    • Some are distinctive.

    • N personality types may care about the distinctive ones but cannot bring themselves to care about the routine ones.

  • Public recognition is an order of magnitude more motivating than private recognition.

    • Knowing that everyone knows that you're high-status is really important to us social monkeys.

  • Zero based thinking helps you not get stuck in your confirmation bias.

    • Otherwise, that bias is an asymmetry that keeps doing what you were doing, all else equal.

    • Zero-based thinking helps you navigate your underlying assumptions much more effectively.

  • When you use a thing that resonates with you, you don't just like it, you feel compelled to share it, because it feels like the world will benefit.

  • Someone who can move in additional dimensions will be inscrutable to the people confined to fewer dimensions.

    • The lower-dimensional entity will see the higher-dimensionality entity seem to appear and disappear at will, almost teleporting.

    • Each ply of thinking you can effectively do is another dimension.

  • The meta is never urgent, but it's where the leverage comes from.

    • Not just incrementally better, orders of magnitude better.

    • The meta is a form of being able to pop up an additional dimension.

  • Thinking multi-ply takes time.

    • If you do it constantly you’ll always be on the back foot, late to the game.

    • You’ll be beat by

      • 1) the swarm of one ply thinkers, one of which lucks into the right move, or

      • 2)  the seasoned high judgment operator who has a honed one ply intuition to make the right move quickly.

    • Better is to be ready for when the right opportunity presents itself and then pounce faster than anyone else can realize the opportunity.

    • Most of the time you will look like a surfer dude lounging around, but in that moment you will strike like a viper.

    • Bruce Lee: “Water can flow or it can crash. Be water, my friend.”

  • When the house is on fire don't plan rose gardens.

  • The tension between past you and future you is directly demonstrated in the insight that "It's never urgent to plant a tree,"

    • The world is always short-term thinking all the time.

    • Everyone is trying to get an edge tactically on everyone.

      • Run their OODA loop just a bit faster.

    • A red queen race situation that we all lose.

  • Find the seeds of greatness and then grow them.

    • Greatness cannot be imposed.

    • It can only be grown.

    • Greatness doesn’t happen later.

    • If you aren’t great by now you won’t be great.

    • So that implies, find the seeds that are great, and focus on those.

    • If you look carefully there are seeds of greatness everywhere.

  • There’s a country song: Bless The Broken Road.

    • All of the previous hardships you’ve faced put you on the road you’re on.

    • All of the lucky breaks in this specific road can’t be separated from the earlier hardships you’ve experienced.

    • Would you rather not have had a given hardship if it meant not having this daughter of yours?

    • The specific things you cherish in your life are inseparable from the hardships on the path you’re on.

Reply all
Reply to author
Forward
0 new messages