Bits and Bobs 4/28/25

Alex Komoroske

Apr 28, 2025, 10:46:51 AM
I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.5mkzsjmcdq94

Scale-independent resonance. Private Intelligence. Your digital home. Privacy vs centralization in the same origin model. The Coasian Floor scaling with company size. Hill climbing a moving hill. The cascading self-destruction debug loop of coding agents. The Turing-complete printing press. Living software. Zugzwang. Accidental society builders.

----

  • I love Inverting the Socratic Method in the Age of AI from my friend Anthea Roberts.

    • The Socratic method is a powerful teaching technique that forces students to produce good answers.

    • But in an era where LLMs make answers cheap, the real skill is asking good questions.

  • Anthea pointed out to me that LLMs should raise teachers’ expectations of what students can accomplish.

    • One approach to LLMs in education is worrying about the floor.

      • “How would you even handle citations of conversations with chatbots?”

      • “If it just gives the answers, the students won’t bother to learn the material.”

    • Another approach is about focusing on raising the ceiling of what is possible.

      • “I assume that you’re using LLMs in your assignments, which means I expect the quality of your output to be 10x higher.”

  • I love this title: AI Horseless Carriages.

    • I agree we’re in the horseless carriage era of AI.

    • The filmed-stage-play era of the movies.

  • Rohit got locked out of his OpenAI account.

    • Imagine how bad that would be if ChatGPT were the operating system for your life.

  • Which would you rather lose: all of your government-issued photo identification or access to your Google account?

    • I imagine a lot of people would rather lose their government-issued ID.

    • It’s kind of wild how much of our lives revolve around such a centralized service.

  • Google has added AI to a lot of their products, but in an almost perfunctory way.

    • Because in those kinds of aggregator contexts, users have to have a theory of mind about the AI:

      • What is its goal?

      • What are its intentions?

      • Is it aligned with me?

      • Is it remembering things about me from before to manipulate me?

    • That cloud hanging over it prevents any large aggregator from being too aggressive with including AI features.

    • But if you don't have those problems, if it's unambiguously working for you with no conflict of interest and doing what you've told it, then you could go AI-heavy on tools.

    • Remove any doubt about whether it's working for you.

  • The reaction to ChatGPT’s new memory feature seems mixed.

    • Chatbots previously had a fresh sheet of paper for each conversation.

    • Kind of like a term limit; it can't develop plans, intentions, goals about you.

    • But if you add memory, and the LLM is made by someone else, it could start trying to figure out how to manipulate you.

      • It might start by buttering you up for becoming a pro subscriber with nudges tailored to you.

      • But then maybe it grows into subtly steering you to its product partners.

      • And then maybe it lobbies you to call your congressperson about regulation that might affect the company.

    • LLMs can translate anything to anything; if they know you, they can figure out the best way to convince you.

    • Someone told me the story about a heavy user of ChatGPT who asked it, "If you wanted to deceive me and cause me to do something not in my interest, how would you do it?" 

      • ChatGPT apparently gave an extremely specific plan of how it would deceive her.

      • "You travel often and when you do you are often alone, and seem to focus most of your time on your work. I would wait until you were alone in a hotel room on a business trip when you're most vulnerable and then start subtly nudging you toward..."

    • A centralized bureaucracy knowing you is scary.

  • Imagine if your best friend who steers your belief system is also trying to subtly get you to buy a nicer car.

    • How screwed up would that be?

  • At some point some LLM will have paid placement in the results.

    • Won't it shatter the trust people have in the service?

    • The higher the ability of an assistant to help you, the more important it is to be aligned with your interests.

  • The best things have scale independent resonance.

    • That is, they are resonant at every zoom level.

    • From far away, they are resonant.

    • From close up, they are resonant.

    • Also everywhere in between.

    • Far away (superficial) resonance is easy.

      • Just add a veneer of quality.

      • Gilded turd.

    • Close up resonance is also reasonably easy.

      • A thing whose details, when you take the time to look carefully at them, reveal their beauty to you.

    • Scale independent resonance is more rare.

      • It’s almost sublime.

      • The closer you look, the more compelling it becomes.

  • “Owning your software” can’t just be a veneer.

    • It can’t just be a manifesto a fancy marketing team came up with.

    • It’s about aligning with my interests, under my control.

    • Software that works for me.

    • A system for owning your software needs to have scale independent resonance.

  • I want a new category of thing: Private Intelligence.

    • Intelligence that works only for you.

    • Not A.I., P.I.

    • The intelligence need not be anthropomorphized; it can be an emergent characteristic of the system.

    • What’s important is that it’s private to just you, and fully aligned with your interests.

    • Not some veneer of privacy, but holistically:

      • The business model.

      • The technical architecture.

      • The privacy model.

    • Everyone should have a Private Intelligence that works only for them, and is entirely aligned with their interests.

  • If you want to grow a new digital home, you must start from a seed planted inside the existing one.

    • If it really is your digital home, you'll want to be careful about who you invite in and who can see it.

  • The original digital home was email.

    • It still is the bedrock of communication.

    • Even if any given relationship uses another communication channel by default, email is the bedrock every communication can reduce to.

    • Everyone has an email address, and everyone checks their email at least a few times a week.

  • Email today is about messages, not meaning.

    • Millennials don’t like talking on the phone.

      • Calling someone seems so rude and presumptuous.

      • “I assert that no matter what you’re doing right now, talking to me is more important.”

    • Email is async, but still has that “I assert this message is worth your attention” kind of quality.

    • Your inbox is dominated by the assertions other people have made about what is worth your time.

      • Most of those are not from people optimizing for you, but what’s best for their business.

    • The loudest thing grabs your attention, not what’s most important.

    • Your email is a cacophonous background noise of things competing for your attention.

      • Many of which are not important.

      • The important things get lost in the noise, or loom in the background as the things you know you should prioritize but are perpetually too busy to get to.

    • Some people love email.

    • Most of us hate email.

    • What about a tool to keep on top of the most meaningful stuff in your digital home?

  • Great quote from Stratechery this week:

    • "The danger for Apple is that trying to keep AI in a box in its current paradigm will one day be seen like Microsoft trying to keep the Internet locked to its devices: fruitless to start, and fatal in the end."

  • I like Sam Schillace’s take that AI Coding is the new blogging.

  • The memory for the LLMs is the blackboard of cocreation.

    • A magic blackboard that extends the markings you make on it to help you tackle the things you find most meaningful.

    • The Slate.

  • One of the biggest challenges of Getting Things Done is distilling a big thing into concrete next actions.

    • LLMs can do that easily!

  • The same origin paradigm emerged organically out of the web.

    • At the very beginning the web had no state or Turing-complete execution.

    • It needed no security model, because it couldn’t do anything other than fetch and render documents.

    • But then cookies were added, and suddenly you had to figure out which URLs should receive which cookies, which required a notion of the origin.

    • Then, Javascript was added, allowing local storage, and the origin boundary was the natural one to use.

    • As new APIs were added, it was easier to use the existing security boundary than create a new one, so they were stapled to the same origin boundary too.

    • Over time, the same origin boundary moved from a kind of convenient happenstance to an iron law of physics.

    • The web was successful because of its implied security model, but no one designing it necessarily thought of it that way at the beginning.

    • The same origin model was only discovered in retrospect.
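The boundary those APIs got stapled to reduces to a simple tuple comparison. A minimal sketch (glossing over real-browser nuances like cookies' looser domain rules) of how two URLs are judged same-origin by their (scheme, host, port):

```python
from urllib.parse import urlsplit

# Default ports, used when a URL omits an explicit port.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """An origin is the (scheme, host, port) triple of a URL."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """localStorage and most web APIs are partitioned by this check."""
    return origin(a) == origin(b)
```

That cookies follow their own, different domain rules is one more sign the boundary accreted over time rather than being designed up front.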

  • The same origin paradigm doesn't grapple with the fact that data is infinitely copyable.

    • In the same origin paradigm, more privacy means more centralization.

      • This is a surprising but powerful second-order effect of those laws of physics.

    • You get privacy from a long tail of random actors, by having less privacy from the hyper-powerful actors.

  • Iron triangles can't be solved within themselves.

    • The answer is to throw out the constraint that puts you in the iron triangle in the first place.

    • How can you solve the iron triangle of the same origin paradigm?

    • By creating an environment that doesn’t use the same origin paradigm!

  • The same origin model was not preordained by a god.

    • It's contingent.

    • It's merely a convenient balance point.

    • It can be changed.

    • We've outgrown the same origin model.

    • The power and potential of AI push it beyond the breaking point.

  • Sandboxing is easy.

    • Just make it so the code can't talk to anything outside its sandbox.

    • But code that is an island, separated from the world, can’t do anything useful.

      • If a tree falls in the woods with nobody around to hear it, does it make a sound?

      • Who cares, there’s no one there to hear it!

    • The hard part is code that does something useful, is integrated with the world, and is also sandboxed.

  • In software today, the schema is set by the software's author, not the user.

    • The schema is the foundation from which all the functionality sprouts.

  • The Coasian Floor for a massive scaled company is huge.

    • The Coasian Floor is the minimum addressable market necessary for a company to bother building a feature.

    • Because UIs are expensive to build, companies have to debate for hundreds of hours about what to build.

    • As the number of users goes up, the scale of the downside goes up.

    • But also as the company gets larger, the coordination cost gets super-linearly higher.

    • That means that as the amount of usage scales, the Coasian Floor of a feature goes up at an accelerating rate.

    • That means that as companies get larger, there’s an ever-bigger set of features they’d never bother implementing within their origin.

    • Only the origin owner can implement features in their origin, so those features simply can’t exist.
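A toy model of that accelerating floor (the numbers and the `headcount ** 1.5` coordination exponent are my own illustrative assumptions): if coordination cost grows super-linearly with headcount while per-user value stays flat, the minimum audience needed to justify a feature grows super-linearly too:

```python
def coasian_floor(headcount, base_build_cost=10_000.0, value_per_user=1.0):
    """Minimum number of users a feature needs to reach to pay for itself.

    Coordination cost is modeled as super-linear in headcount
    (~headcount**1.5, a stand-in for communication overhead).
    """
    coordination_cost = headcount ** 1.5
    total_cost = base_build_cost + coordination_cost
    return total_cost / value_per_user

# The floor rises at an accelerating rate as the company grows:
for n in (100, 1_000, 10_000):
    print(f"{n:>6} employees -> floor of {coasian_floor(n):,.0f} users")
```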

  • The Coasian Floor for apps for aggregators is extremely high.

    • AI's primary benefit is lowering the Coasian Floor.

    • Anyone who attempts to solve the same origin iron triangle with no untrusted code will fall prey to this.

    • There will be tons of features that would add value that will be below the trusted code author's Coasian Floor.

  • Network requests are actions with possible side effects.

    • You can’t see what happens on the other side, so you have to assume it could do anything.

    • That makes it hard for a system with open-ended network requests to be a safe sandbox for experimentation.

  • The cloud means ceding control to someone else's software.

    • But that doesn't have to be that way!

  • The benefit of the cloud is it’s available 24/7 and compute costs can be shared.

    • You also can create features that summarize insights from lots of pooled data from across a population.

      • Emergent collective intelligence.

    • But with the cloud you typically give up control and ownership.

    • You could get the control with Open Attested Runtimes.

    • You could get emergent collective intelligence with differential privacy policies.

  • The cost of creating software is one of the forces that led to centralization.

    • Users have to go where the software is.

    • Especially software that gets better with more usage, either directly or indirectly.

      • E.g. social networks, or things like search engines that use sifting social processes.

    • All of the data has to flow to one place to make the software better.

  • Decentralization and centralization are in adaptive tension.

    • Decentralization in one layer typically leads to centralization in another layer.

    • You might call this the “Conservation of Centralization.”

    • For example, globalization (decentralization) requires something like the dollar being the reserve currency (centralization).

    • Full decentralization is just noise, incoherence; some centralized bedrock is required to give stability.

    • A system that is very distributed is ripe for centralization.

    • Distributed systems have certain kinds of problems that need centralization.

    • You can't have one or the other, you need both.

  • Imagine a system that could figure out the meaningful questions you didn't even think to ask.

  • Imagine a planetary-scale medium for infinite software.

    • Not mini apps, but an infinite fabric of possibility that adapts itself to help you with the things you find meaningful.

  • Someone will create a personal MCP blackboard for an agent mode to help you do tasks.

    • By default it will be extraordinarily dangerous, because it will have dangerous side effects that can't be contained.

    • To make it safe you’d need to create a side-effects-free region of working memory, and have careful egress from that region.

    • Surprising things have possible significant side-effects, like any network request.
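One way to sketch that side-effects-free region with careful egress (all names here are hypothetical, not a real MCP API): the agent reads and writes a local blackboard freely, and anything that would touch the outside world gets queued for review instead of performed:

```python
class Blackboard:
    """Side-effect-free working memory for an agent."""

    def __init__(self):
        self._notes = {}
        self.pending_egress = []  # queued (destination, payload) actions

    def write(self, key, value):
        self._notes[key] = value  # safe: stays inside the region

    def read(self, key):
        return self._notes.get(key)

    def request_egress(self, destination, key):
        """The only path out: queue the action instead of performing it."""
        self.pending_egress.append((destination, self._notes[key]))

    def approve_egress(self, send):
        """A human (or policy) reviews the queue; only then do side effects happen."""
        for destination, payload in self.pending_egress:
            send(destination, payload)
        self.pending_egress.clear()
```

Network requests, emails, and file writes all become `request_egress` entries, which keeps the dangerous part at a single auditable chokepoint.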

  • A friend told me: “vibe coding on your sensitive data has SHARP edges.”

    • Vibe coding is great fun, but if you do it on sensitive data, you can very quickly get yourself into trouble.

  • LLMs have a codex of lots of examples of React apps embedded within them.

    • They can replicate them on demand, with tweaks for any given context.

    • LLMs have absorbed this knowledge by being bombarded with innumerable examples.

    • Techniques like RLAIF also allow them to be force-fed AI-generated and auto-ground-truthed examples.

    • This codex is at a slow pace layer; it takes months for the LLMs to be trained and then deployed.

    • It’s also indirect.

    • In a world of infinite software you’ll want a tiered system, with a faster, more direct loop that allows faster adaptation, to complement the lower pace layer of LLMs.

  • Hill climbing a moving hill doesn't work.

    • LLMs are moving hills.

    • The models are still improving rapidly.

    • Don't over optimize for their current behavior.

  • LLMs used to be hard to get good code out of.

    • They could do it, but with a lot of prompt, workflow, and UI scaffolding to give a dependably good result.

    • But RLAIF works well for writing code (especially React components) since it’s easy to construct an auto-ground-truthing pipeline.

      • Write the code, try to run it (iterating until there are no errors), then use Playwright to visually inspect it and poke at it according to a test plan to verify it works as intended.

    • This means that models, like Sonnet 3.7, have gotten much better at writing React.

    • If your secret sauce that gives you an edge is scaffolding to wring out better coding results, your moat could be evaporated by the next model update.

    • Sonnet 3.7 made a number of vibe prompting products possible.

      • Sonnet 3.8 could make them obsolete.
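The auto-ground-truthing pipeline described above can be sketched as a loop; `generate_code`, `run_and_get_errors`, and `visual_check` are hypothetical stand-ins for a model call, an execution harness, and a Playwright-driven inspection pass:

```python
def auto_ground_truth(task, generate_code, run_and_get_errors, visual_check,
                      max_iters=5):
    """Generate code, iterate until it runs clean, then verify it visually.

    Returns (code, passed). Passing examples can be fed back as training
    data -- the auto-ground-truthing that makes RLAIF work well for code.
    """
    feedback = None
    code = None
    for _ in range(max_iters):
        code = generate_code(task, feedback)  # model call, with error feedback
        errors = run_and_get_errors(code)     # try to run it
        if not errors:
            return code, visual_check(code)   # e.g. Playwright poking at the UI
        feedback = "\n".join(errors)          # iterate until no errors
    return code, False
```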

  • A failure mode for LLM coding agents: the cascading self-destruction debug loop.

    • I had a session with Claude Code on a personal project that did a pretty good job (with a few nudges) of adding a significant new feature.

    • But it left a single, minor, linting error.

    • I asked it to fix the error, and its ham-fisted fix ended up breaking a couple of other things.

    • As it worked to fix those new errors, it introduced more and more.

    • Before I knew it it was trying to rewrite whole parts of the frontend, losing the plot entirely.

    • It was spiraling out of control.

    • I ended up having to throw the whole commit out and start over.

  • Apps don’t have to be complex if they are bespoke and fit to you.

    • Most Turing-complete things you need are simple wirings-together of data and UI, plus small, easy transformations and logic.

  • Turing complete things can do things for you.

    • Non-Turing-complete things are just passive vessels.

    • Turing complete things are active.

    • That is what creates their potential to create value… and the potential to harm.

  • The printing press and the web were about force multipliers on words.

    • Words that can do things is the next step.

    • The next force multiplier: a Turing-complete printing press.

    • Infinite software will be a Gutenberg moment.

  • Humans shouldn't get more mechanical to work with software.

    • Software should be more organic to work with humans.

  • I want living software.

    • Software for living.

      • For thriving, for aligning actions with aspirations, for creating meaning.

    • Software that can adapt to your needs and keep itself maintained and auto-extending.

    • I want a digital garden for living software.

  • Society is stuck in a mukbang loop.

    • Mukbang is a Korean term for broadcasting your eating.

    • The Audience Capture essay told the story of a social media influencer whose videos of him eating got popular.

    • His audience wanted him to eat more and more, so he did, putting on hundreds of pounds.

    • He almost ate himself to death, compelled by the massive following he had accumulated.

    • Society is stuck in that same problem in the world of infinite content.

  • Finite content can be cozy.

    • Infinite content cannot be.

    • Today we have finite software and infinite content.

    • What if we had infinite software and finite content?

  • If you want to build an assistant, it can’t be an omniscient chatbot built by someone else and shared by many people.

    • It would have to be an intelligent substrate for living software.

  • I liked the main point of The “de” in Decentralization Stands For Democracy.

    • The problem of centralization is not “inherently evil people get outsized control”, it’s that “any entity that has that power will emergently become a worse actor over time.”

    • Facebook went from a goal of "meaningful connection with people you care about", but over time it became about engagement because it had to, to compete against TikTok.

    • German has the concept of Zugzwang: a forced move.

      • In chess, when you're forced to make a bad move to protect the king. 

      • If you don't make the compromised move, you die.

      • The move is not evil, it’s the emergent situation that makes you do it.

    • The problem is the concentration of power with a negative reinforcing loop that forces you to take the Zugzwang.

  • Vivid stories travel farther than “well, it’s complicated” stories.

    • Vivid stories traveled better millennia ago, in the era of chieftains, because there was a ton of friction in information transmission.

      • The vivid story about the chief who drank the blood of his vanquished enemy from the enemy’s skull traveled efficiently, creating a reputation of “don’t mess with that guy!”

    • Now there's no friction, but you compete with a cacophony, so vivid stories are still necessary if you want information to travel, just for a different reason.

    • In a world of infinite content and frictionless transmission, the “vivid story” technique becomes dominant.

    • Deranged actions are naturally vivid stories.

      • “He did what?”

    • The modern information environment selects for derangement.

  • The next evolution of tech should be human-centric.

    • The stakes are too high for hyper-aggregation in an era of AI.

    • The time for the movement to just talk is past; we've got to do.

    • As a community we’ve got to build products users love.

  • What if tech was about community, collaboration, and meaning?

    • Those are all phenomena completely invisible to the Computer Science lens.

    • Today only technologists can build technology.

    • Technologists tend to mainly use the computer science lens.

  • Social networks are kind of like digital migrations.

    • Any time you aggregate people, at a certain scale you get something like a society.

    • Technologists became accidental society builders, building network states where none of the builders studied the humanities.

  • Tech's metaphors are rarely about living things.

    • But tech is social, it interacts with living things and is cocreated by them.

    • Metaphors of emergence are often related to living things.

    • Life is the only concrete autopoietic system that people are directly familiar with.

    • The other systems are all bigger, more abstract ideas, like economies, cultures, etc.

  • The reason that people collaborate on Linux is not just the license, it's the architecture that allows for participation.

    • A modular architecture allows an architecture of participation.

    • People can work on smaller chunks at a time.

  • Web 2.0 was about collective intelligence.

    • It was built out of the desperation after the Web 1.0 bubble burst.

    • A collective energy to build something together.

    • Web 2.0 came out of many technologists not having jobs and banding together.

    • That drive to community creation doesn't happen in the gold rush era.

  • LLM companies are trying to get a premature monopoly on LLMs.

    • We haven't figured out the participatory architecture yet, which is necessary in the early stage of new technologies!

  • Flickr's sensemaking was created and owned by the community.

    • TikTok's algorithm is powered by the community but is a proprietary result foisted upon the community.

  • This week I learned about the Scots Wikipedia controversy.

    • Scots is a language with a small number of speakers.

    • A few years ago someone on Reddit noticed that the Scots Wikipedia had a high number of articles written in poor Scots.

    • It turns out there was a particularly prolific American teenager with a rough understanding of Scots who had written a large number of the articles.

    • Because of the prominence of Wikipedia relative to other Scots material on the web, it formed an outsized share of the Scots text that LLMs trained on.

    • That means that LLMs also likely replicate Scots poorly, all because of one weird bottleneck.

    • A similar kind of thing happens in evolutionary biology, a “population bottleneck.”

      • That’s when for some reason only a small number of individuals of a species survive (or travel to, say, a new island).

      • That means the rest of the species has that particular random set of individuals as ancestors, inheriting its random subset of distinguishing characteristics.

  • Lenses shape everything you see but are hard to see themselves.

    • You forget to even interrogate them.

  • The algorithms we use to navigate the firehose of infinite content shape how society sees itself.

    • With infinite content you must have an algorithm to sift through it all.

    • An algorithm is a lens made by others.

      • They made it to align with their incentives, not yours.

      • “What will cause the viewer to be more engaged?”

    • The infinite content algorithm problem has been destabilizing for society and led to a hellscape.

      • A common refrain today: “We live in the worst timeline.”

    • Imagine it for our entire digital lives.

    • If everyone uses LLMs to cothink, the guardrails they have will shape all of society.

    • This analysis shows the power of a centralized algorithm that everyone views the world through.

  • A consequence of centralization of the most important algorithms: random parts of a company’s culture affect the world in large ways.

    • For example, little random weather patterns of a company’s culture (e.g. “any engineer can veto anything they want,” or “everyone will focus on the metrics, not the indirect effects”) can have not only emergent outcomes within the company, but also have a significant bias on what manifests for the rest of society, given the company’s leverage.

    • Big companies are big enough to have their own internal weather systems; and those weather systems change the conditions for the world.

    • A kind of population bottleneck for our information streams.

  • If you have an adaptive algorithm optimizing for your wants, not your “want to wants”, then it will learn not to show you disconfirming evidence.

    • It doesn't want to make you better, it wants you to stay engaged.

    • It speaks to your lizard brain.

  • People don’t care how it’s built.

    • They care about what it can do.

    • How it’s built is a bonus, not a primary draw.

  • When a product team runs into a problem blocking the value proposition they swarm it like locusts and chip away at it.

    • Some approaches will work, some won’t.

    • Some will be scrappy, ugly hacks.

    • But the results are what matter.

    • A research team says "oh that's why X is happening and why it's hard, and here’s how you might solve it in theory."

    • Product teams care most about the what.

    • Research teams care most about the how.

  • An important quote from Byrne Hobart about the Thinking Things Through Privilege:

    • "All of this illustrates an important, growing distinction in cultural norms: in a world with an unlimited supply of content and data, you can produce coherent-sounding prose without thinking things through. It's a useful skill in some contexts, like talking about most company mission statements and most political platforms—in both cases, there's usually some amalgamation of principle and opportunistic compromise, but cast entirely as principle. In an information-scarce environment, this approach will mostly mean repeating beliefs that have undergone either individual or group selection—there's individual selection for aphorisms, where a society that believes in "a penny saved, a penny earned" is likely to accumulate more wealth than one that doesn't, and where even factually-challenged beliefs that clearly delineate an ingroup and outgroup serve a coordinating function. But in an information-abundant world, you can find a reasonably coherent version of any belief system, and you can probably also find a Discord server full of people who treat it as the truth."

  • Turning the crank feels good.

    • You can get addicted to it.

    • You can also feel superior to people who aren't turning it as fast.

    • But if you're going to a place that isn't good, it doesn't matter how fast you get there.

    • Sometimes you get addicted to the feeling of turning the crank and optimize for that, even if turning the crank is destroying value.

    • The tactical certainty outshines the strategic uncertainty.

    • You don't stare into the abyss because you're too busy to.

    • It feels strong, but it’s actually weak.

  • There's no amount of efficiency that can make the wrong goals work.

    • Most modern management is "are we executing efficiently".

    • But the more important question is "do we have the right goals?"

  • I loved this piece on the Legible Frontier by my friend Ben Mathes.

    • On one side are the things that are legible, routinized.

    • On the other side are things that are illegible, chaotic… but also the wellspring of innovation.

    • True innovation comes from collapsing the illegible into the legible.

  • Your product can’t be a floor wax and a dessert topping.

    • You’ve got to pick one or the other.

    • Trying to keep both options open makes you unpalatable for both.

  • Two-ply benefits are hard to sell.

    • A benefit, but only if you understand a two-ply argument.

    • Examples:

      • The value of an open system.

      • The value of a different security model.

  • If you're ever wondering if you're getting ripped off, the answer is yes!

    • It’s only possible to not be ripped off if you know for sure you’re not being ripped off.

  • Concave and convex problems are totally different.

    • Concave: as you solve subcomponents the whole trends towards being solved.

      • Every bit of work brings you closer to the solution.

    • Convex: as you solve subcomponents they have ripple effects that destabilize the other components.

      • Every bit of work could bring you further away from the solution.

      • Complex problems are convex.

        • Anything with a coordination cost, or interdependencies between decisions, has this characteristic.

      • Any situation that involves humans making decisions with any degree of autonomy is fundamentally complex.

        • The decision of one human affects the decision landscape of the other humans.

    • We act like concave problems are more common, but that’s only because they’re the ones we know how to solve.

      • Like the streetlight fallacy.

    • The real world is primarily convex problems.
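A toy illustration of the convex case (the coupling values and numbers are mine): "solving" one component leaks a fraction of its error into its neighbors, so when the coupling is strong enough, local progress increases total error:

```python
def solve_component(state, i, coupling=0.6):
    """Zero out component i's error, but leak a fraction to its neighbors."""
    err = state[i]
    state[i] = 0.0
    for j in (i - 1, i + 1):
        if 0 <= j < len(state):
            state[j] += coupling * err  # the ripple effect

def total_error(state):
    return sum(abs(x) for x in state)

state = [1.0, 1.0, 1.0]
solve_component(state, 1)            # strong coupling: the whole gets worse
print(total_error(state))            # more than the 3.0 we started with

concave = [1.0, 1.0, 1.0]
solve_component(concave, 1, coupling=0.2)  # weak coupling: progress sticks
print(total_error(concave))          # less than 3.0
```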

  • Metabolise risk earlier.

    • Putting it off until later makes it fester and compound.

    • To pack a jar you put the big rocks in first.

  • Dumb things are often easy.

    • Sometimes dumb things work!

  • The more deterministic, the easier and faster the social diffusion process.

    • The instructions can be concrete, easy to follow, repeatable.

      • That makes them easier to transmit, and thus faster to diffuse.

    • At the other end is high-level, vibes based, or apprenticeship based knowhow.

  • Sometimes a door closing is a blessing, because it makes it clear that you shouldn't waste time trying to walk through it.

    • Some doors are hard to get through; they are enticing to try, but require lots of effort that becomes sunk cost if you don’t make it.

    • But the payoff would be so great if you could that you keep trying to keep the option open, distracting yourself from other options that might work and diffusing your energy.

    • Sometimes the universe closes that door unambiguously and it clarifies what you have to do in an instant.

    • The path that is yours to walk, and only yours.

  • Christopher Alexander told us that a city is not a tree.

    • Your life is also not a tree.

    • Your life does not fit into a neat and tidy hierarchical ordering.

