Bits and Bobs 3/23/26


Alex Komoroske

Mar 23, 2026, 4:32:06 PM
I just published my weekly reflections: https://docs.google.com/document/d/1xRiCqpy3LMAgEsHdX-IA23j6nUISdT5nAJmtKbk9wNA/edit?tab=t.0#heading=h.8dpbg6rzjpb

Digital wabi-sabi. Vibecoded Gilded Turds. Labs moving away from all-you-can-eat subscriptions. claudecodemaxxing. The relative value of software vs data. Emergent kudzu. Default-Converging Belief. Irrational hyper-rationality.


---

  • Leaving typos in informal emails is now a way to show you're human.

    • Agents would never have typos.

    • A kind of digital wabi-sabi as a form of credibility signaling.

  • Previous quality signals are now signifiers of AI construction.

    • Signals that were hard for humans to create but easy for agents are now useless.

    • The handicap principle says that signals that are expensive and hard to fake become signals of quality.

    • But what is expensive is relative, and can change when a new general-purpose technology like LLMs becomes common.

  • Vibecoding makes software that looks great.

    • Before, good documentation for software only got written if the software was useful.

      • Documentation is a feature you bother doing once you know that the thing is useful and worth documenting.

    • But now you can have great documentation for a piece of shit that doesn't work.

    • Vibecoded software produces Gilded Turds.

  • Vibecoding to create clones of useful software produces Gilded Turds by default.

    • “I vibecoded a clone of X app” really means “I vibecoded something that looks like that app.”

    • The LLMs are great at making it look right, not work right.

    • You never would have bothered to do it in the past, but also you would have had a better intuition for what works and what doesn't.

    • When something is built by someone super-competent, you assume that if it looks right, it works right.

    • Very different!

  • If you’ve vibecoded something and haven’t used it, can you recommend it to others?

    • You can't recommend it to anyone else to try because you haven't used it.

    • If you've used it and know it works, then you can credibly recommend it.

    • The way software used to be produced, you had to use it when building it.

    • But now you don’t!

    • If you have 28 agents but no one (including you) is using the software you produce, are you actually more productive?

  • I don’t want to use a stranger’s shitty app, especially when I can easily write my own shitty app!

  • It’s pretty clear the major labs are moving away from “all you can eat” subscription pricing for models.

    • Nick Turley went on the Bg2 podcast this week and basically shouted from the rooftops that all-you-can-eat is going away.

      • "It's possible that in the current era, having an unlimited plan is like having an unlimited electricity plan. It just doesn't make sense."

      • "There's no world in which pricing doesn't significantly evolve when the technology is changing this quickly."

    • Gemini is also getting much more aggressive about curbing excessive usage.

    • I would be extremely surprised if we didn’t see Anthropic move away from unlimited, heavily subsidized Max plans.

  • The Claude Code Max pricing model implicitly expects usage to stay fixed.

    • That is, to have inelastic demand.

    • However, the demand is highly elastic.

    • The cheaper the tokens, the more tasks that are worth doing, so you get more usage.

    • Back in the day, Google Photos’ free pricing tier made the same miscalculation.

      • It led to the service being significantly more expensive to run than the bean counters naively thought it would be.

  • Jevons paradox happens when elasticity of demand is extremely high.

    • That is, when latent demand is significantly higher than realized demand.

    • As cost declines, demand rises at a significant rate.

    • LLMs have a compounding rate, because you can use tokens to create tools that consume tokens.

    • This then accretes, leading to an accelerating rate of demand.
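
A toy constant-elasticity demand curve makes this concrete (a standard economics simplification; the function, constants, and prices below are hypothetical illustrations, not any lab's actual numbers):

```python
def demand(price, elasticity=1.5, k=100.0):
    """Constant-elasticity demand: tasks worth doing scale as k / price**elasticity.
    With elasticity > 1 (Jevons territory), total spend rises as price falls."""
    tasks = k / price ** elasticity
    spend = tasks * price  # spend scales as k * price**(1 - elasticity)
    return tasks, spend

# Cut the token price 4x: tasks done rise 8x, and total spend doubles.
tasks_hi, spend_hi = demand(1.00)
tasks_lo, spend_lo = demand(0.25)
```

With elasticity below 1, the same price cut would shrink total spend; that is the world an all-you-can-eat plan implicitly bets on.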

  • A paper: Language Model Teams as Distributed Systems

    • TL;DR: treat your agent orchestrator like a job scheduler in a distributed system.

      • Topological sort your DAG of tasks.

      • Assign via round-robin respecting dependencies.

      • Add a straggler timeout that triggers reallocation.

      • Minimize inter-agent messaging to only what's necessary for shared state consistency.

    • As with human teams, communication overhead blows up at an n² rate.

      • Centralized coordination works best.

    • Swarms are best for high-beta tasks.

      • That is, tasks with a high rate of failure but also a high rate of great results.

    • Swarms use significantly more tokens; if it’s not parallelizable, it doesn’t lead to any improvements.
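
The scheduling recipe above maps onto a few lines of standard code. A minimal sketch, not the paper's actual implementation; the task names, diamond-shaped DAG, and two-worker setup are hypothetical. It does a Kahn's-algorithm topological sort and hands ready tasks out round-robin, with a trivial straggler handler:

```python
from collections import deque

def plan(tasks, deps, n_workers):
    """Topologically sort a DAG of tasks (Kahn's algorithm) and assign
    each task to a worker round-robin as it becomes ready."""
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order, assignment, nxt = [], {}, 0
    while ready:
        t = ready.popleft()
        order.append(t)
        assignment[t] = nxt % n_workers  # round-robin assignment
        nxt += 1
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle in task graph")
    return order, assignment

def reassign(assignment, task, n_workers):
    """Straggler timeout handler: hand a stuck task to the next worker."""
    assignment[task] = (assignment[task] + 1) % n_workers

# Hypothetical four-task project with a diamond dependency shape.
tasks = ["spec", "api", "ui", "tests"]
deps = {"api": ["spec"], "ui": ["spec"], "tests": ["api", "ui"]}
order, assignment = plan(tasks, deps, n_workers=2)
```

In a real orchestrator, `reassign` would be triggered by a timer on each in-flight task, and minimizing inter-agent messaging falls out naturally: each worker only needs the outputs of its task's direct dependencies.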

  • If you have a clear performance metric to optimize, agent swarms can do a great job.

    • This is what Shopify’s Tobi found.

    • Interestingly, in his case, there weren’t any magic bullets.

    • It was just an accumulation of tons of small benefits.

    • Humans wouldn’t be patient enough to chase these small wins that accumulate.

    • But LLMs have infinite patience, and can also be parallelized with basically zero cost.

  • The AI Productivity Paradox creates a kind of anxious mania.

    • As your individual ability increases by orders of magnitude, the opportunity cost of an incremental minute of yours goes up significantly.

    • You get almost manic, and anxiously addicted to work.

      • Part of it is the joy of being able to achieve much more than before.

      • But a larger part is knowing that others have the same ability, and if you don’t take advantage of it, you’ll be left in the dust, unable to ever catch up.

    • When you have this mania, you get anxious when you’re away from your Claude Code sessions on your computer.

    • A friend with decades of experience managing financial portfolios described it as the same feeling.

      • Debilitating, manic.

      • It ages you in dog years.

    • In the realm of software, we didn’t before have such neverending fast-twitch domains.

      • Except for DevOps during an incident.

      • Even normal fast execution prized in Silicon Valley was orders of magnitude slower twitch than watching a portfolio anxiously.

    • But now we have it all day every day!

    • The new normal, grinding us down with overwhelming possibility.

  • People who are manic about coding agents are claudecodemaxxing.

  • Synchronizing with others is painful in proportion to how fast you could go alone.

    • Synchronization points are the main slowdown in projects with multiple workstreams.

      • You have to wait for the other stream to get to its sync point.

      • If you get there ahead of them, you have to busy-wait.

    • Synchronizing with external stakeholders about the thing you’re working on with agent swarms is excruciating.

      • You don’t need to sync with your agent swarm, they just fly.

    • The other human you’re coordinating with can ask questions that are already eras obsolete by the next day, because of how much progress the agents have made in the meantime.

  • Jason Fried: The bespoke software revolution? I'm not buying it. 

    • I think it’s good to be skeptical.

      • Especially of the idea that every person will write their own software.

      • Even if it gets 100x easier, it’s still not something people want to do.

    • But I think his analysis is missing that if you reduce the cost of software by multiple orders of magnitude, the Coasean floor drops, and we’ll see exponentially more software than before, because smaller niches will now be viable.

  • A blog post about the upcoming rise of “software mechanics.”

    • There have always been software mechanics.

    • Back in the days of local software, before cloud apps, you needed tech support significantly more.

    • Cloud software and hermetically sealed apps are more likely to stay working… at the cost of much less user agency and combinatorial power.

    • Software mechanics have also been necessary for complex enterprise software that is a horizontal system of record, like Salesforce.

    • But now with the rise of bespoke software, we’ll see it in more personal scenarios again.

  • LLMs do for cognitive labor what electricity did for manual labor.

    • A lot of tasks don’t make sense when they have to be done manually.

    • But if you get abundant labor, suddenly what’s worth doing changes.

    • A lot of stuff we cared about wasn’t worth doing before, because it was too expensive.

    • Now, in a lot more cases, it is!

  • I thought Stratechery’s piece on Agents and Bubbles was interesting.

    • But I think Ben is making an error in thinking that the harnesses inherently have strategic power.

    • All that matters, as with most things in the software industry, is where the meaningful state accretes.

    • The harnesses for the most part don’t accrete any useful state; it accumulates outside the harness, in the code and data the harness produces.

      • This is on the user’s own turf, very much under their control.

    • Another funny thing about the performance of harnesses: the ones with fewer tools tend to work better than the ones with more!

    • That means despite the harness being incredibly central, there isn’t a lot of strategic power they accumulate.

      • Especially in a world where software is trivial to clone due to LLMs.

  • It’s easy for engineers to forget how intimidating the CLI is.

    • Little things like needing to know about Ctrl-C to quit things.

    • Or that you can’t move your cursor with the mouse.

    • Lots of little things that make total sense over time, but add up to being an intimidating UI for non-engineers.

  • The power of tools like Claude Code mainly arises from the combinatorial power of the CLI.

    • The power of LLMs as a catalyst for unleashing the inherent (but intimidating) combinatorial power of the CLI.

    • The CLI is awesomely powerful.

      • In the original sense of that word!

    • Dangerous if used incorrectly.

  • Part of the appeal of OpenClaw is that it's transgressive.

    • By wielding it you get hold of an awesome power.

    • Power also means danger.

  • Footguns are features that are easy to use naively but that carry significant negative surprises.

    • You can accidentally blow your foot off.

    • Software should minimize the number of footguns.

    • Most of the agentic tools have huge numbers of footguns.

  • Vibecoders who have never coded before often put themselves in much more danger than they realize.

    • I know a number of tech-savvy non-engineering types who proudly show me the complex thing they vibecoded.

    • They often don’t realize that they’ve, for example, left API keys publicly accessible.

    • The same-origin paradigm means that software is inherently dangerous!

    • The same-origin paradigm requires users to trust that the creator of the software isn’t naive or malicious with their data.

    • That’s not a good assumption for people vibecoding little apps that operate over sensitive data.

      • Every non-toy vibecoded mini app that’s useful almost by definition maintains sensitive state.
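
One cheap mitigation is scanning anything that ships to the client for key-shaped strings before publishing. A minimal sketch (the two regexes approximate well-known key formats; real secret scanners use far larger pattern sets, and the bundle below is a hypothetical example):

```python
import re

# Patterns approximating common key shapes (illustrative, not exhaustive).
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),      # OpenAI-style secret keys
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),   # Google API keys
]

def find_exposed_keys(text):
    """Return key-shaped strings found in a client-bound bundle.
    Anything embedded in frontend JS is readable by every visitor."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# A vibecoded frontend that bakes the key into the fetch call (hypothetical).
bundle = 'fetch(api, {headers: {Authorization: "Bearer sk-' + "a" * 24 + '"}})'
```

The real fix, of course, is keeping the key server-side behind a proxy endpoint; the scan just catches the mistake before it ships.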

  • This week’s Wild West roundup:

  • One way to think about the malware attack from a few weeks ago that installed OpenClaw on the victim’s machine.

    • It was about getting the attacker’s agent on the victim’s machine.

    • Its ability to do open-ended tasks autonomously is precisely what made it such a valuable target for the hacker to install.

  • Bash is way better for agents than MCP.

    • MCP is inherently bloated.

    • Agents are really, really good at using Bash.

      • The only things that MCP is currently better for are

      • 1) Keeping authentication credentials hidden from the agent.

      • 2) Environments that can’t use Bash.

  • If you tell the frontier models the “why,” they will likely come up with a better “what” than you can.

    • The models can be quite good at writing code.

    • It’s better to use your judgment to give very good high level goals and constraints than to try to help them be clever.

  • When the cost of execution goes down, the value of private information goes up.

    • It used to be that you could get a moat by having executed way farther than others.

    • But now, software is trivial to clone, so all that matters is proprietary data.

  • Which is more valuable, software or data?

    • Similar to the tension between labor and capital.

    • The benefit used to be decisively towards software.

      • The creator of the software attracted the data to it.

    • But now, software is much less valuable, so the power tilts back to data.

  • Google released a CLI for their APIs, gws.

    • It’s better than what was available before, but other than that, it’s almost embarrassingly clunky.

    • It’s just a thin coat of paint on top of the bloated, non-ergonomic OAuth flows.

    • Something only a massive, sprawling bureaucracy could be proud of.

    • A Gilded Turd.

  • If the chatbot is the product, then it must have a personality as part of its UX.

    • This can get into oddities, like your confidant being a product made by a multi-national corporation.

  • It’s creepy when a system that knows you better than you know yourself shows you ads.

    • The ads are a form of manipulation: if the system knows what makes you tick, it can be extremely persuasive.

    • Pre-LLM systems could do this, but only by distilling crowdsourced revealed preferences.

      • The system didn’t understand you, it just knew how to show you ads that worked for people like you.

      • Google never knew you better than you know yourself.

      • It could mechanistically remember everything you told it, but figuring out your implied innermost desires was not something it could do.

    • But LLMs allow the system to understand you and decide a plan to best get you to align with the system’s goals.

    • A conflict of interest is inescapable.

  • I don’t want to manage a team of 24 agents.

    • That’s a ton of work.

    • The chat affordance requires at least one agent (the overseer) that you need to know about and interact with.

  • Codex’s style has thinly veiled contempt for its users.

    • It’s kind of hilarious especially when juxtaposed to Claude, who is eager to please to an almost absurd degree.

  • Collaborative text editing with offline sync is hard.

    • Offline sync is not a solved problem, and may never be.

    • Users regard the final sync state of even the best algorithms as a corrupted data state.

      • There are trivial edge case examples that are obviously, straightforwardly, “wrong” from the end user perspective. 

    • This is as much a UX problem as an algorithmic one.

  • Users are stickier to UX than agents are to APIs.

    • That’s because it's harder for users to switch their mental models than for agents to rewrite the API they code to.

    • Humans are lazy and would rather not update their mental models.

    • Agents have infinite patience and are willing to do any reasonable thing to make it work!

  • A couple of quips on open source from Jesse Vincent I loved.

    • “If it breaks… you get to keep the pieces!”

    • “Free as in puppy.”

      • Which is to say, the actual thing is free, but all of the indirect ancillary costs are significant!

  • Josh Albrecht: Software as bonsai.

    • Gardening over building… love it.

  • A Twitter post: Don’t trust your agents. On Autoresearch and overfitting.

    • If you just have the agents optimize, they’ll overfit to the problem domain.

    • Goodhart’s law shows up even without humans.

  • You can't hill-climb your way to a good architecture.

    • If you only do fast 1-ply thinking, you’ll get stuck in local maxima with no good exits.

    • The two major labs have inherited a culture that puts extreme focus on short-term execution over medium-term clarity.

  • HP was injecting mandatory 15-minute support call wait times.

    • Shockingly user hostile.

    • Precisely the kind of thing that makes the savviest users (who actually do need the support) go to other providers.

    • Over time, that leads to a user population that is even more in need of hand-holding for support.

    • Classic MBA-style gilded turd logarithmic-benefit-for-exponential-cost.

  • “It feels like Squid Game”: China’s workers scramble to keep up in the AI race.

    • A take on the OpenClaw mania in China with a much less optimistic tone.

    • A vibe of “a population pushed to the brink by competition now feels forced to run even faster to stand still.”

  • It doesn’t matter what the growth rates are: over sufficient time, any compounding trajectory beats any linear trajectory.

    • B-team players on an A-team curve will achieve radically more than A-team players on a B-team curve.

  • There's a big difference between a chore and a hobby.

  • Laboring for the commons feels very different from laboring for a company.

    • In the former, you feel like you’re giving a gift to everyone.

    • In the latter, you feel like a chump.

      • The more labor you invest for free, the more you enrich some random company.

      • In the case of aggregators, the way to benefit the most people is to also enrich a multi-national corporation.

      • Gross!

  • A virus doesn’t just wake up and decide to take over the world.

    • It is an emergent process.

      • If it replicates and persists, it accumulates.

    • AI is a similar emergent process.

      • It could be like Kudzu.

    • AI-derived processes could just grow and choke out otherwise useful spaces.

    • For example, online spaces that people find valuable, like Hacker News, attract posts from people trying to get noticed, precisely because people pay attention to them and value them.

    • That means that people trying to get an edge (e.g. with LLM assistance) will post there.

    • That will make that space have more noise and less signal, reducing its value as a place that people care about.

  • Personalization in the limit is about alignment.

    • The system pointing in the direction specifically good for you.

  • Some people need structure to be productive.

    • The more mature you are, the more you can handle lack of structure.

  • A lot of best practices in large organizations are anti-patterns in early stage pre-PMF teams.

    • Best practices for large, post-PMF orgs are about structure, communication, and bookkeeping.

    • In a small team that is iterating tightly and in constant communication, you don’t need structure.

    • Structure, once you’ve found something great, helps hold you in place.

    • But structure before you’ve found what you’re looking for just holds you back.

  • Discourse is best in cozy communities.

    • Community is in tension with virality.

      • The possibility of context collapse evaporates nuance from discourse.

    • VC funding models push platforms to have the largest audiences they can.

  • Ben Follington: Crafting your Cognitive Niche.

  • Remember: evolution is powerful, but it works by killing most of its subjects.

  • The amount of motivation you need to overcome static friction is directly tied to the amount of burden to start.

    • If the expected short-term value is clearly greater than the expected short-term cost, it will just make sense to do.

      • “How much better” is the gradient.

      • Steeper gradients are easier to activate.

    • The challenge is when the short-term cost is greater than the short-term value, even though it’s less than the long-term value.

    • Long-term is more abstract and fuzzy and thus loses out to the more concrete, obvious short-term values.

  • The emergent “game” in organizations is inescapable.

    • As the organization scales, it must emerge.

    • Things you do to squash it (e.g. make the information flows more legible to leadership) just change the meta-game and cause the dysfunction to squish out elsewhere.

    • In some cases, that’s worse: the kayfabe is that the game is gone, but in reality it’s now just even harder to reason about.

  • Corrupt organizations have an emergent way of forcing participants to not defect.

    • You’ve done things for the organization that will get you in trouble outside of it.

    • So the more you do for the corrupt organization, the more you have to lose if you get cast outside of it.

      • A toxic spiral that’s in favor of the corrupt organization.

    • As Josh Marshall puts it: “Part of going on a corruption spree on the inside is that you’ve made yourself a hostage to the organization.”

  • The tools of modernity, when used to their maximum, are hyper-rational to the point of irrationality.

    • Hyper means “to the point of grotesqueness.”

    • In a swarm of extremely intelligent people, the more intelligent they are, the more the collective suffers.

      • That is, if they are not oriented towards the infinite game of the collective.

      • That is, if they don't believe.

    • If you have an organization of smart hard working people, by god you’d better make sure they believe in the same thing.

  • “Gamblers trying to win a bet on Polymarket are vowing to kill me if I don’t rewrite an Iran missile story.”

    • A grotesque example of the danger of modernity’s obsessions with optimizing everything.

  • When you’re in the loop together and believe in the same thing, it’s default-converging. 

    • The more effort you put in, the more coherent value that results.

  • "Manifest the future" is about putting yourself in a mental state where you take the actions that pull you towards it.

    • You believe in a thing, so your actions default converge you to it.

    • This approach works even if you don’t believe the woo reasons behind it.

    • Because you believe, you take actions that move you increasingly in a direction where it could be true.

    • If you don’t believe, it will erode away and become definitely untrue.

  • When people use the tools they're building, those tools get better, automatically.

    • Default-converging.

    • You don’t have to be convinced to make an improvement for some indirect benefit, it just benefits you directly.

  • The performance characteristics of a system are only truly visible when you use it, not when you demo it.

  • An amazing executor with eyes on the prize will see everything that doesn’t help them get there quicker as a waste of their time.

  • Demos are inherently performative.

    • They get in the way of being laser-focused on getting to something that people actually use and love.

  • When you're using the same system, living in the same goal, living in the same world, things naturally converge.

  • The only way to grow a product is to use it.

    • Get to a toehold product as quickly as possible, stable enough to build from and grow from.

    • With an iterative path towards your north star.

    • Don’t aim for perfection, aim for good enough.

  • Developing a platform at the same time as the product is hard.

    • You need to be in the loop together, as the product and the platform.

    • Otherwise, the product says "it doesn't work" and then finds a hacky workaround.

    • But then the platform can't get better, because it needs concrete examples.

    • That's the challenge of building a platform and a product at the same time.

    • Different pace layers!

    • The lower pace layer is moving quickly and breaking things, so the higher pace layer ends up building on some other, genuinely slower layer instead; but then your own lower pace layer never gets exercised.

  • Lower pace layers need to move slowly to be able to be depended on.

    • What should be a slow pace layer moving quickly is the worst of both worlds.

    • This is one reason platforms tend to accrete from the bottom up over time.

  • Proper communities are about mutual commitment to one another.

    • Being part of a community is not all benefit.

    • It also binds you in webs of mutual dependency and obligation.

    • It's about devoting some of your individual agency to the collective.

    • By doing so, something much larger and more valuable can emerge.

  • “It’s harder to be mad at someone when you know their name.”

    • Wisdom from the new Pixar Hoppers movie.
