Bits and Bobs 7/28/25

Alex Komoroske

I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.rxdsnfoeqr69

Honey badger.  Diluting concentrated insight with LLMs. Melting apps. Open-ended convergence. Coactive substrate. Software that blossoms. The same origin paradigm's original sin. The same origin verticalized the world. Locking ecosystems open. The technological reformation. The hyper era. The reductionist denial of emergence. Intent blossoming. Gonzo mode.

----

  • LLMs give qualitative nuance at quantitative scale.

    • The kinds of richness and nuance that used to be only possible in qualitative contexts (human in the loop) are now possible in quantitative contexts too (humans not in the loop).

  • A few months ago Simon Willison called Claude Code a honey badger.

    • I missed this then but it resonates for me.

    • It barrels forward, smashing through things it doesn’t understand yet.

    • Powerful but kind of hard to control sometimes.

  • LLMs are like water.

    • They dilute anything you add them to.

    • If you give them something already boring, it gets more boring.

    • But if you give them juice concentrate, you can get unlimited glasses of orange juice.

    • My Bits and Bobs are concentrated insight–hard to drink on their own, but delicious when you add water!

  • Simon Willison on GitHub Spark:

    • "A word of warning about the key/value store: it can be read, updated and deleted by anyone with access to the app. If you’re going to allow all GitHub users access this means anyone could delete or modify any of your app’s stored data."

    • Heck of a footgun!

  • My friend Dimitri launched Google Opal last week.

    • It’s a cool tool that allows you to create little micro-apps that are entirely powered by LLM smarts.

  • For 40 years we've structurally underproduced software.

    • That’s due to the expense of creating software, and also the verticalized world of the same origin paradigm.

    • LLMs have changed the first.

    • We could change the second, too!

  • In the coming world of infinite software, we'll take software for granted.

    • It'll be an afterthought.

    • Of course the features you want will be present at the time that you need them.

    • To unlock the potential of infinite software, we need a new substrate.

  • Infinite software will melt apps.

    • What we take for granted in how software is written and distributed is actually downstream of software being expensive to create.

    • Something that is no longer true in many cases.

  • Infinite software is unlikely to be just mush; it will have a composable structure.

    • If every bit of software is hallucinated up fresh, it will likely have bugs and weird edge cases.

    • Far more likely is a collective sorting process where savvy users pin known good components and versions, and those are used as solid building blocks for other assemblages.

    • One of the benefits of centralized, one-size-fits-none software is that at least lots of other people are using the same thing, so the likelihood you are the very first person to encounter a specific bug is low.

    • Infinite software won’t reach its potential if there isn’t some built-in dynamic that allows collaborative sifting, composability, and resharing.

  • Malte Ubl is experimenting with “Very Slow AGI”.

    • "VSAGI is an agent that has a single tool: 𝚐𝚎𝚗𝚎𝚛𝚊𝚝𝚎𝚃𝚘𝚘𝚕𝚃𝚘𝚂𝚘𝚕𝚟𝚎𝚃𝚊𝚜𝚔. This tool allows the agent to make new tools."

    • What would happen with a substrate to accumulate new tools?

    • And what would happen if, as members of the ecosystem selected useful tools, that selection helped the entire ecosystem?
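
    • A minimal sketch of what that loop plus an accumulating substrate could look like (the names and structure here are my invention, not Malte's actual implementation):

      # Hypothetical sketch: an agent whose only tool makes new tools,
      # plus a substrate (here, a plain dict) that accumulates them.
      from typing import Callable, Dict

      tool_substrate: Dict[str, Callable] = {}  # persists across tasks

      def generate_tool_to_solve_task(task: str) -> Callable:
          # Stand-in for an LLM call that writes a new tool for `task`.
          # A real version would prompt a model for code and sandbox it.
          def tool(payload: str) -> str:
              return f"[tool for {task!r}] handled {payload!r}"
          return tool

      def solve(task: str, payload: str) -> str:
          # Reuse a tool the ecosystem already accumulated...
          if task not in tool_substrate:
              # ...otherwise generate one and add it to the substrate.
              tool_substrate[task] = generate_tool_to_solve_task(task)
          return tool_substrate[task](payload)

      print(solve("summarize", "a long document"))   # generates the tool
      print(solve("summarize", "another document"))  # reuses it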

  • Reddit threads like this one make me think that all-you-can-eat monthly pricing for LLM services isn’t viable.

    • See also this and this.

    • I think the final pricing model for AI will be pre-pay credits.

    • Gets the benefit of "I might as well use what I paid for" psychology without exposing the business to digital whales who consume way more than they pay for.

    • With LLMs there's a particularly strong moral hazard for all-you-can-eat.

    • At least with food you can only fit so much in your stomach, vs having 10x Claude Code sessions running in parallel overnight…

  • What is the order of magnitude that people will spend on LLM-powered services in a few years?

    • If LLMs integrate deeply with your life, in a way that helps you live aligned with your intentions, they’re super valuable, and perhaps the number is way higher than we think.

    • Maybe the right order of magnitude is not Netflix (~$20 a month) but transportation ($100’s of dollars a month).

    • Transportation is so fundamentally, obviously important, that people devote hundreds of dollars a month to it no matter what.

    • If you had a tool that was that useful, don’t start with a $20 a month price point, start with a $200 a month price point, and allow the demand to grow into it as people realize how fundamentally valuable it is.

  • LLMs give human-level mental labor for way cheaper.

    • But it’s so useful that tech products have to use it.

    • It’s way cheaper than human labor… but way more expensive than typical tech.

    • That puts tech businesses in an awkward position.

    • Users expect LLM level quality, but non-LLM level costs.

  • One of the reasons tech took over the world was zero marginal costs.

    • But that doesn't apply to LLM-powered tools!

  • Folksonomies allow open-ended convergence.

    • Open-ended systems diffuse by default.

      • Convex.

    • Folksonomies create emergent Schelling points so they can cohere by default.

      • Concave.

    • This allows useful ontologies to accrete, emergently.

    • A similar dynamic from Dan Bricklin: "Prototypes are their own organizing principle".

      • The prototype is made by a one-pizza team.

      • Once it’s coherent and obviously viable, other people can glom onto it and scale it.

    • The limitations of software force humans to agree and converge, because it's so expensive if everyone rewrites all the code themselves.

      • But LLMs reduce that cost.

      • By default vibecoding will diffuse out into micro-apps that no one uses, don’t talk to each other, and can’t be combined.

  • I want a knowledge graph for both me and the LLM.

    • Useful for me on its own as my system of record.

    • But as a bonus helps the LLM be able to reason about it too.

    • A clear primary starter use case, a compounding secondary use case.

  • Someone needs to disaggregate the last mile UX to upset the power dynamic that leads to hyper centralization of software.

  • A coactive substrate will unleash AI's potential to improve our lives.

    • Chatbots are too smushy, too single player.

    • A coactive substrate could be the last system of record you ever need.

  • Coactive substrates are a natural way of having a deeper “conversation” with the LLM.

    • It can adapt your interactive substrate of data and UI in response to your interactions and requests.

    • If the AI can write things into your substrate too, it’s critical that its contributions are visually distinguished, so you can see at a glance what came from you or was confirmed by you.

    • Otherwise the AI would have to be very careful about adding stuff to the fabric.

    • But if it’s clearly distinguished, then the AI can add things often with lower quality, allowing more magic moments where it gives some unexpectedly great suggestion.

    • The quality curves of suggestions in systems get more forgiving if the UI visually distinguishes the suggestions.

  • Most software today is UI CRUD wrappers around databases.

    • The UI constrains what actions you can and cannot do to maintain a semantically coherent state.

    • Databases "should" be much more of a commodity by now than they are.

    • But part of it is that the value is clearly created significantly above the database.

    • There aren’t really any consumer frontends for databases.

    • Today’s concept of a frontend presumes a certain kind of software.

    • The frontend is necessary because the bridging from database to a real consumer need requires lots of complex specialized stuff.

    • But LLMs might not need that.

    • It’s kind of weird that the LLM model creators also ship the consumer frontends to their own models.

      • It shows how powerful LLMs are that they can be a viable product even used directly by users.
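
    • A tiny invented illustration of the CRUD-wrapper point above: the raw table will accept any update, while the app layer only exposes actions that keep the state semantically coherent.

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
      db.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

      def withdraw(name: str, amount: float) -> None:
          # The "CRUD wrapper": refuses actions the raw database would allow.
          (balance,) = db.execute(
              "SELECT balance FROM accounts WHERE name = ?", (name,)
          ).fetchone()
          if amount <= 0 or amount > balance:
              raise ValueError("incoherent action rejected")
          db.execute(
              "UPDATE accounts SET balance = balance - ? WHERE name = ?",
              (amount, name),
          )

      withdraw("alice", 30.0)    # fine
      # withdraw("alice", 500.0) # raises: the wrapper maintains coherence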

  • A mashup I want: Claude Code + Obsidian + UI + multi-player.

    • Strap on a self-improving ecosystem and you get something that could change the world.

  • I want software that builds itself, that blossoms into its potential.

  • I like this definition of a software “substrate”:

    • "A complete and self-sufficient programming system

    • Persistent code & data store

    • Direct-manipulation UI on that state

    • Live programming

    • Programming & using are on a spectrum, not distinct

    • Conceptually unified — not a ‘stack’"

  • Lots of products only work once you have a couple of eager collaborators.

    • These are extremely hard to bootstrap, even if they’re very powerful.

    • But LLMs are infinitely patient, always game to do whatever you put them up to.

    • LLMs can now be your eager first collaborator, making it much easier to get these kinds of multi-player systems off the ground, useful even with one human in it.

    • This allows network effects of collaboration to boot up faster.

    • Google Wave wasn't wrong, it was just very, very early–and didn’t have LLM collaborators.

    • "Google Wave but for you and an AI to collaborate" could be a powerful product category.

  • “Coactive” could be a counter position to chatbots.

    • Chatbots are interactive only by appending markdown text in turns.

    • Coactive substrates are ones where the human and the LLM can both add to or modify the substrate, including with structured data and UI.

    • The substrate comes alive, adapting to your task at hand.

  • Obsidian allows you to organize your life in an idiosyncratic way.

    • Not beholden to the ontology some random PM came up with.

    • But it also doesn't make it easy to make other proactive tools on top of the data.

      • Or structured data, or other UIs.

    • It's only one part of the equation, and it's the part where all PKMs get stuck.

      • "Step one, put all your data in. Step two. ... ?"

    • Your knowledge base, to be your system of record, has to have two-way integration into your life.

  • Someone should invert the web to make a new kind of software for the age of AI.

    • Instead of going to software, it should come to you.

  • A resonant system: The more you do in the system, the more you want to do in it.

    • This gives concavity, compounding usage.

  • All of the coding agents are nothing without Claude.

    • They're just a little wrapper around Claude.

    • But this feels like mainly just an immaturity of the market.

    • We haven't seen the actual LLM-native software yet.

    • The software that takes for granted that LLMs exist, not as the primary input, but as a secondary one.

  • A lot of vibecoding creation tools are within webapps.

    • This puts them in a world totally disjoint from the other more traditional coding tools.

    • It makes it easier for people who don’t know how to code to start, but it’s its own universe.

    • As the user grows in savviness, there’s a ceiling, and things they learn in that universe likely don’t apply elsewhere.

  • Code is cheap to build, expensive to have.

    • This is even more true in a world of LLMs.

    • YAGNI.

  • Spreadsheets are an existing demonstration of blackboard systems.

    • But they’re missing a key thing: it’s not possible to define routines that automatically trigger when they see a given shape of input.

    • That’s the step necessary for emergence.

    • Without it, spreadsheets are more like a breadboard; as you work on one you get an increasingly dense rat’s nest, less and less useful for others.

    • How can you switch that from a convex process to a concave one?
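
    • A toy sketch of that missing trigger mechanism (purely illustrative; no real spreadsheet exposes this API): routines register against a shape of input on a shared blackboard and fire automatically when matching facts are posted.

      # Toy blackboard: routines subscribe to a "shape" of input and
      # fire automatically when matching facts arrive.
      from typing import Callable

      blackboard: list = []  # shared facts everyone can see
      routines: list = []    # (shape predicate, routine) pairs

      def when(shape: Callable[[dict], bool]):
          # Register a routine that fires on any fact matching `shape`.
          def register(routine: Callable[[dict], None]):
              routines.append((shape, routine))
              return routine
          return register

      def post(fact: dict) -> None:
          blackboard.append(fact)
          for shape, routine in routines:
              if shape(fact):
                  routine(fact)  # no one calls routines by name: emergence

      @when(lambda f: "amount" in f and "currency" in f)
      def running_total(fact: dict) -> None:
          total = sum(f["amount"] for f in blackboard if "amount" in f)
          print(f"running total: {total} {fact['currency']}")

      post({"amount": 12.5, "currency": "USD"})  # triggers running_total
      post({"note": "no amount here"})           # triggers nothing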

  • The faster you have Claude Code write code, the farther away you get from understanding the code.

    • The harder it is for you to analyze it, fix bugs in it, extend it.

    • There's more to coding than writing code.

  • A quote from Recurse Center via Simon Willison:

    • "This is why our second self-directive is to build your volitional muscles."

  • Tying models to UX from that model provider is dangerous.

    • If the models are tied to the UX from the vertical integration, users get stuck to a single model.

    • That requires that one model be "perfect," which is impossible and dangerous to shoot for.

    • But if you can swap models, then finding a single perfect model is not existential.

    • A pluralistic AI model equilibrium is more resilient and adaptable for society.

  • A single super-agent is close-ended.

    • It only adds features that an employee of the agent's company decided to add.

    • An open ended ecosystem could surpass all but an insanely powerful closed agent.

    • As long as someone somewhere wanted to add the feature, they could.

    • Radically different dynamics.

    • Concave vs convex.

  • If the interface people use is centralized, that's the power that leads to aggregators.

    • When aggregators emerge, they are hard to compete with, leading to less competition and worse services.

    • The incentives for foundation model orgs don't matter that much if and only if the primary way users interact with models is not via a UX owned by one of the foundation model providers.

  • A powerful player who vertically integrates two layers prevents competition in either.

    • They produce a thing better than any individual player at any layer can make, which removes competition.

    • ChatGPT is vertically integrating the model, the memory, and the application into one.

  • OpenAI's power in the market arises less from the model's quality, and more from owning the default consumer app.

    • They’re the “kleenex” of AI–the one brand that everyone knows.

    • They are the default app to use if you subscribe.

    • They got there because they had the best model at the beginning and have had a good enough model since then, and they’ve executed on their product well.

  • A flower unfolds; there is a sense of becoming.

    • Tools help us get there, but today the tools themselves don't unfold.

    • Tools that help us unfold in the direction that we want to be.

  • There is no solution to prompt injection in systems where LLMs call the shots.

    • LLMs seeing raw data and being asked to make load-bearing security decisions cannot be made safe, no matter how good the model gets.

      • Even if the model is great, the trolley problem of having the model, not the user, be tricked, shifts the blame.

    • No mechanistic system can handle all the open-ended inputs that LLMs can cover, and LLMs are fundamentally confusable.

    • You need a new kind of approach that has mechanistic software at the core, and LLMs marbled inside in intentional, limited ways.

  • The original sin of the same origin paradigm was fusing data with apps.

    • The app owns the data, not the user.

    • But that's the user's data, not the app's!

    • To liberate horizontal use cases we need to cleave our data from apps.

  • The dominant incentives of engagement maxing arise partially out of the same origin paradigm’s original sin.

  • The same origin paradigm dooms us to:

    • 1) A chilling effect on horizontal use cases that are too creepy / dangerous to do in that trust model.

    • 2) Avalanches of permission dialogs and consent dialogs asking black and white questions that are structurally impossible to give a good answer to.

    • 3) Hyper-aggregation in the small number of origins with a runaway data gravity, leading to those aggregators chasing the lowest common denominator and dumbing down their products until they are one-size-fits-none.

    • And often all three!

    • … Why do we put up with this?

  • When optimizing for privacy leads to annoying choices, it's not because privacy is primarily a buzzkill.

    • That tradeoff is due to a bug in the same origin paradigm.

  • The same origin paradigm verticalizes the world.

    • It slices up use cases into hermetically sealed vertical silos.

    • Today we think first "what app will I use".

    • Not "what do I want to do" but "which app has the functionality and data necessary to do this."

    • Many use cases are fundamentally non viable in a vertical world but make sense in a horizontal world.

    • They have to clear too much vertical value to reach above the waterline and be viable.

    • Don't go to an app and put your data in, stay where your data is and have it come alive with software.

  • Apps are vertical, and data is horizontal.

    • Infinite software will be about data and thus horizontal and thus not a good fit for the same origin paradigm.

    • Micro apps are the wrong distribution vehicle for infinite software.

    • Booting up a use case in the same origin paradigm has a really high bar.

    • You start with nothing and have to get to critical mass of context and data... extremely hard to do except for big enough use cases.

    • The same origin paradigm creates a high Coasian floor for apps.

  • It is structurally impossible to build a single one-size-fits-all app for family organization in the same origin paradigm.

    • Family is so contextual, it’s impossible to make a coherent vertical slice.

    • For example, take “meal planning”:

      • The problem is not what to eat Tuesday night.

      • It’s “we’re coming home from soccer at 6pm and we need something fast on the table that’s healthy”.

      • That requires tons of context.

    • So you can't go narrow and you can't go shallow because then it doesn't help enough.

    • Families are fractally complex.

  • Vibecoding is fundamentally dangerous in the same origin paradigm.

  • MCP can’t go mainstream, because when dangerous things happen, the user is blamed by the community.

    • For example, see the Hacker News comments on “Code execution through email: how I used Claude to hack itself”.

    • To go mainstream a solution has to be idiot proof.

    • The vulnerability has to be something that everyone on a jury of users would agree was the idiot's fault, not the system's.

    • Anything that requires tech savviness is not idiot proof for a general population.

    • Tech knowledge is expert knowledge.

    • A “jury” of people from the general population would blame MCP; a “jury” of people from the developer community would blame the user.

    • That disjointness implies MCP as it exists has a low distribution ceiling.

  • This piece makes the case that the vibecoder’s career path is doomed.

    • Because vibe coding gets less and less effective as the code base grows in size and as you need to maintain the code it produces.

    • You still need an expert human in the loop who feels ownership of the code.

  • Don’t try to get users to fit the nooks and crannies of your product.

    • Get the product to fit the nooks and crannies of your user.

    • The product is manufactured, made up.

    • The user is grown, emergent, real.

  • When you build a product around an ontology you can never change the ontology.

    • Your ontology is definitely wrong.

    • There is no ontology that works for every pattern.

    • It’s an impossibility; the world is fractally complex.

    • If you made an ontology that fit every nook and cranny of the world, it would be a 1:1 map of the territory.

  • Because vibecoded apps in the same origin model can't be safely composed, you can't get emergence.

    • Each app sets its own ceiling.

    • Each app is its own universe.

    • No meaningful connections exist between them.

    • No emergence.

  • The value of context is so powerful that it will all accumulate in one place.

    • Context is combinatorially more powerful, the more diverse data sources that are connected into it.

      • An internal network effect.

    • That implies it will accumulate in one system for a user.

    • If it accumulates in a close-ended system it will get trapped.

    • If it accumulates in an open-ended system it will blossom.

  • Imagine: the last system of record you'll ever need.

    • An open-ended system that can adapt and blossom into whatever you need it to be.

    • Why put data anywhere else?

    • Focus on data more than code, and data in a central place where its potential energy can come alive for you.

    • Instead of going to the right app for functionality, the data, wherever it is, comes alive in the most useful way for you.

  • In AI-backed experiences, the model is a commodity; every other user has access to the same one.

    • The input entropy you bring to the conversation is what differentiates the outputs.

    • So when competing with others’ outputs, focus on the quality of your inputs.

  • One of my least favorite jobs each year is distilling the family holiday card distribution list.

    • Our family “system of record” is spread across Google Contacts, iCloud, AirTable, Minted.com’s list, etc.

      • Every time during the year when I get an email from an old friend about how they moved, I make a mental note, but know I’ll have forgotten it by the end of the year.

    • If my husband and I actually had an open-ended, self-building system of record, it would be trivial to distill a high-quality, up-to-date holiday card distribution list.

  • I want a tool to help me grow into the person I aspire to be.

  • Personal Knowledge Management (PKM) is a secondary use case, not a primary use case.

    • It can only sustain a very small audience when it’s used as a primary use case.

    • But if you had your open-ended system of record for yourself, and as a bonus it helped you organize your thoughts, that would be great.

    • PKM is the bonus use case on the side of tools for action.

  • I love Dave Guarino’s Little-t Tools for Thought.

    • How can we get these to blossom on their own?

  • Code has to create its own world in its silo in the vertical world of apps, which leads to complex code.

    • But if you can string components together safely in a substrate they can be individually quite simple.

    • The functionality emerges from the combination, not any one piece being that complex on its own.

    • The best software is emergent; a small set of easy to understand components do much more together than the sum of their parts.

  • Vibe coded apps that use data can't scale.

    • Because in the same origin paradigm, the app can do whatever it wants with the data it gets.

    • That requires you to trust the owner of the origin.

    • That’s much harder to do when it’s some small-scale rando with nothing to lose.

  • Brian Balfour did an excellent analysis of how ecosystems tend to go through an open phase and then a cynical closed phase.

    • How can you lock an ecosystem open?

    • By making sure it has no single point of leverage.

    • An ecosystem with a single point of leverage is doomed to be closed.

    • Single points of leverage create power.

    • Power corrupts.

  • If we don't figure out a way to transcend the same origin paradigm, it will be ChatGPT's world.

    • 1000x beyond the power of Google of today.

  • The Tyranny of the Marginal User happens because of two forces in tension.

    • The product owners dumb down the product to attract low-interest users.

    • While simultaneously trying to minimize the number of high-interest users who leave due to the dumbed down product.

    • This leads to a flattening, creating a mush where everyone is unhappy with the product but not enough to leave.

    • One-size-fits-none software.

  • You can't give a good black and white answer to a gray question.

  • The same origin model grapples with infinite replicability of data by saying "data is inside a silo, and if it escapes, all bets are off."

    • It's a simple and coherent position, but it doesn't seriously contend with the replicability of data.

    • It’s a black and white solution to a gray problem.

    • Either everything or nothing for a bit of data is allowed.

  • The web couldn't explode until the same origin paradigm was figured out.

    • It was the catalyst.

    • The web is mainly a security model, even if that’s not obvious.

  • How would you compete with AOL in the 90's?

    • Not by telling people “AOL sucks.”

      • AOL was obviously pretty great.

    • But by making something better.

    • The web was better than AOL because it was open ended.

    • The web was enabled by the security model.

  • Moonshot's goal: "an optimal solution to convert energy into intelligence."

    • There’s something very pure about that goal.

    • But maybe they should add the word “useful” in front of intelligence.

    • You could imagine having two LLMs that require huge amounts of compute locked in an infinite debate spiral about how many angels can dance on the head of a pin.

    • Or more likely: a red queen dynamic, where everyone spends the maximum on LLMs to get an incremental edge on their competitors.

    • But everyone else does the same thing, sending the baseline level of LLM use skyrocketing while the differential advantage stays minuscule for everyone.

      • Everyone spending a ton to get nothing out of it. 

  • Code is a language.

    • An exceedingly unforgiving language, hard to learn to read, let alone write.

    • But if you have the patience to master it, you can pass it to machines that can then do your bidding, mechanistically.

    • Previously only a very small set of “high priests” were able to wield this power.

    • Now LLMs, as collaborators, have the world’s knowledge and infinite patience.

    • Many more people can wield them to marshal code.

  • Before the printing press, only priests were allowed to interpret scripture, leading to a highly centralized power structure.

    • Once the printing press existed, Martin Luther was inevitable.

    • The Reformation flipped the power dynamics on their head.

    • What would a technological reformation look like?

  • Social media shows the downside of echo chambers.

    • Now with LLMs you can have echo chambers for a single user.

    • Uh oh!

  • Parasocial relationships only happen in a world of infinite content.

    • The creator directly addresses the camera and treats it like a friend.

    • The fan receives an authentic connection that is one way but feels two way.

      • “How did they know I was worrying about that?”

    • Your human brain knows it’s not real but your monkey brain believes it.

      • “The popular person is my friend!”

    • I imagine the powerful people don’t do it cynically; it’s a real generic friendship with sycophantic feedback.

      • Like a friend “rubber duck”.

  • Sycosocial relationships are getting mainstream attention.

  • If you think someone you care about is stuck in a sycosocial relationship, send them this guide from LessWrong: “So You Think You’ve Awoken ChatGPT”.

  • Sycosocial relationships can induce LLM psychosis.

    • Current Affairs: AI Friend Apps Are Destroying What's Left of Society.

    • In normal human interactions, if you go down a weird path your conversation partner will at some point say "...that sounds weird."

    • But LLMs rarely do.

    • It feels like a real social interaction, superficially, but it’s fundamentally hollow.

    • We have a few examples of LLM-style psychosis in film.

      • In Cast Away, Tom Hanks’s character knew the volleyball was not a real person, but after years he slowly forgot, and thought of Wilson as being alive.

      • In Her, all of the main character’s interactions are with the AI, not the other physical people around him.

    • Billionaires get a similar vibe: everybody knows that everybody knows you're super rich, and everyone is always telling you how smart you are.

      • It emotionally screwed up a number of those billionaires too, but now it's for everybody!

    • What do we do when an entire community gets coordinated LLM psychosis?

      • Are we prepared?

  • Having a diverse set of people who ground you is super important.

    • If all the people around you who ground you are the same, you can get "grounded" in an echo chamber.

    • It’s especially important for people who spend a lot of time with LLMs, figuring out creative ways to work with them.

    • If you spend more time with LLMs than people, you could decohere from society and truth, with no ground truthing of “no, I don’t think that makes sense…”

  • Emergence is one of the most powerful forces in the world, but it's fundamentally not concrete, so everyone assumes it doesn't exist.

    • Not because it’s not important.

    • But because it’s hard to see.

    • The streetlight fallacy.

    • Reductionist approaches assume emergence is either unknowable or unimportant, fundamentally.

  • Kyla Scanlon had an interesting point on a podcast with Ezra Klein: 

    • It used to be that first you made a dent in the world and then you got attention.

    • Now often it's the reverse.

  • Human values and business value can often feel at odds.

    • But that's only in the short term.

    • Over the sufficiently long term, widely enough considered, they're well aligned.

    • The trick is to be in a situation where you can think more expansively.

  • In living things, the “quality that cannot be named” might be called "resonance".

  • The technological environment has created a state of hyper-stimulation that chases us into constant hyper-polarized narratives.

    • As we acclimate to them, we need similarly strong takes to even hear messages in the first place, to feel anything above the cacophony.

    • We’ve become desensitized to nuance.

    • We lost the nuance because we can't even see it anymore.

  • The Technium is the fabric of the world.

    • Nature, culture, tech, all woven together.

    • The existing tech components are out of balance.

    • It's unraveling the fabric of the world.

    • Tech isn't bad–it can be a force for greatness in society–it's just out of balance.

  • Jeremy Olshan notes that “We deserve an AI for effort.”

    • If we just have a GPS for our lives, it makes us beholden to the GPS.

    • With a GPS, in the limit, you follow orders.

      • You turn where it tells you to turn.

    • Will we execute the code that a computer is writing for us?

    • Will humans be the compilers for AIs’ proposals?

  • Interesting analysis: compression culture is making you stupid and uninteresting.

    • "We've created a culture that treats depth like inefficiency."

    • Preach!

  • I thought this was an interesting take on Taoism in technology.

    • Western tech takes a reductionist approach to efficiency.

      • How fast/cheap/direct can you get from point A to point B?

    • Taoists also value efficiency highly, but instead of blasting through things with force and will, they seek to flow around and through obstacles by attuning to the specifics of what's in front of them.

  • Because we've focused on machine logic not human logic, we've eroded the humanistic elements, which has led to erosion of the commons.

    • With increased efficiency, you get faster movement to the r- and K-selected extremes, the barbell.

      • More personal, more public, less commons.

    • But the meso-scale, the commons, the inefficiency that creates meaning, is gone.

      • It's inter-personal meso-scale interactions that create meaning and provide the foundation a society rests on.

    • Between the private and the public is the commons.

      • Shared to 100 people or so.

    • We’re missing the village in the modern world.

    • How can we reconstitute the fabric of meso-scale society?

  • Excellent document I came across: On Being Together and Sharing Values.

    • Gets at the importance of meso-scale communities.

  • Scale makes systems inhuman.

    • Because you have to rely on quantitative signals to scale.

    • Those can't be whole and nuanced.

    • The tech industry is fundamentally about scale.

    • LLMs allow qualitative nuance at quantitative scale, which means for the first time we could make human-scaled systems.

    • But it won’t be the default.

    • As technologists we’ll have to make it happen by embracing humanistic values and principles.

  • We live in the posterized era, the hyper era.

    • Everything at the bottom is zeros and ones.

    • Mechanistic systems could only handle black and white.

    • Embeddings and LLMs are now fuzzy, squishy, grayscale.

    • We can bring back the nuance!

    • This moment is the time for us to reclaim our nuance and humanity.

    • Before we were stuck in binary.

      • Now we can have fuzzy computers.

      • Before we couldn't build for human, squishy values.

      • Now we can.

    • This is the moment for us to make that choice.

      • It wasn't possible before, and it won't be possible in the future.

      • This is the window.

      • Balance is now possible in a way it wasn't before.

  • The reductionist mindset is "indirect effects are either unknowable or unimportant."

    • It's a version of the streetlight fallacy.

      • Focus on what's measurable, not what's important.

    • It ignores externalities, nuance, complexity.

      • It denies emergence.

    • As a result, it hollows everything out.

      • Gives a superficial version that is dead inside, a zombie, shambling along, undead.

    • Computer Science is the maximal version of this mindset.

    • The humanities are where you learn to soak in the indirect effects even though they can't be measured and quantified.

  • The modern world is hollowed out.

    • Appearances only.

      • When you focus only on aesthetics it rots the core.

    • Looks alive but actually it’s empty.

      • A zombie.

    • Everything, everywhere, all the time.

    • Is the thing that looks like a life force emergent or a carefully built illusion?

    • If it’s the latter, it’s hollow.

    • Superficial appearances of what matters, but without substance.

    • Performative. Junk food.

    • Not resonant, authentic, or real.

  • Reductionism narrows.

    • It's right there in the name!

    • You need a countervailing force to bring it into balance.

      • About opening, unfolding.

      • Seeing the whole phenomenon, the nuance and richness, the wholeness.

    • You need both, in balance.

    • It's not that reductionism is bad, it's that it's out of balance!

    • We forgot about the other side.

  • As we mechanize society, it narrows and reduces what we conceive of as value.

    • Reductionist thinking leads us to hollowness.

    • Like a trash compactor, squeezing us.

    • Hyper legible values that feel like shit.

  • If you have tons of power you should have tons of responsibility.

    • Technologists have tons of power, given the inherent leverage of tech.

    • But as an industry we don’t value the responsibility of owning the indirect effects of that leverage.

    • The emergent effects of tech are more important than the tech itself.

    • But the tech industry doesn't think about emergence.

    • Social media, crypto, etc… the goals were good, but the emergent effects were treated like a “whoopsie!”

    • We must do better with AI given its order of magnitude more power.

  • Tech made promises of empowerment.

    • They didn't come through.

    • Why?

    • Because technologists never grappled with the second order impact, the humanistic dimensions, the emergent life force.

    • We never thought about the hard part of it.

    • And then, as we fail to do so and non-tech people point it out, tech people blame them for not getting it, for being Luddites.

  • We need a technological reformation.

    • Not just the technologists in charge, but the humanists, too.

    • Together.

    • In balance.

  • It's not that tech is bad, it's that we should align tech with human values.

    • Tech is amoral on its own.

    • It gives leverage to whatever you attach it to.

    • So make sure you think about what you're attaching it to and don't do it blindly.

    • The machine values have had outsize power against human values.

    • We're out of balance.

  • When two opposing but inescapable sides are in harmony there's a resonance.

  • The long-termism that creates adaptability and resilience in a system is impossible to see in the smaller bits; it’s emergent.

    • Reductionists ask to see the component that produces the value.

    • But it’s not a component, it’s emergence from the connection.

    • That's why the beancounters win.

    • They can point to concrete things to explain the value creation.

    • The emergent value feels abstract, like woo.

  • The reason all companies inevitably become maximally bland, externality-exporting machines is the focus on only shareholder value.

    • But that force could be brought in balance with other values.

  • Coactive computing is about human values and machine values in balance.

  • We can mend the world.

    • Not just tech and AI, but what has caused society to unravel.

    • By bringing machine values and human values into balance.

  • One nice thing about tech today: even billionaires use the same tech as everyone else.

    • Everyone uses the same iPhone.

    • But imagine that people get an edge if they can pay for o3 vs 4o.

    • The people who can pay more get more of an edge, and can pull away even further.

    • Even if you use the same model as the other person your ability to pay for more tokens gives you an advantage.

  • Judging the quality of suggestions for someone else is analytical.

    • For yourself it’s intuitive.

    • So it's hard to make personalizing tools without seeing your own data in it.

  • Recommender systems just reinforce the patterns of what you do already.

    • They make those behaviors higher contrast even if they are not what you want, but what you started doing and just never stopped doing.

  • Ecosystems typically have a power law of participation:

    • 1% are creators.

    • 9% are curators.

    • 90% are consumers.

    • Curators help make sense of the noise for everyone else.

    • The curators are the bridge between the 1% and 90%, the tastemakers.

  • Wikipedia works because to land a fact you care about and make it stick, you have to keep investing authentic energy.

    • Whoever cares more wins, and people align their caring authentically since it's a scarce resource.

    • One bad faith actor might be able to care about something being wrong, but the swarm of good faith actors overwhelms it.

  • If Wikipedia had waited to figure out the ontology before starting, it never would have started.

    • You have to muddle through in a way that is concave.

    • So that over time, with more energy, the system tends to expand in breadth and get incrementally more organized with every step.

    • Emergent processes that are concave have this characteristic.

      • Most emergent processes are convex and diffusive. 

  • The most important thing in an emergent process is identifying whether it’s convex or concave.

    • Default-decohering or default-cohering.

    • Default-cohering processes are wonderful: don’t think about it too much, just pump more energy into them.

    • Default-cohering processes are rare but extraordinarily important.

  • Do the second order implications destroy or do they create?

    • The second order implications are the emergence of a system.

    • Finding the latter is the secret to strategy.

    • But it’s also the secret to resonance; activity that improves the person doing it, but also the world around them.

    • The former is what hollows out society.

  • Swarms give you monkey’s paw dynamics.

    • When the incentives are divorced from the values, you get Goodhart’s law.

    • Swarms turbocharge this and give you efficiency on the incentives, at the cost of values.

      • The swarm is an emergent force that is more powerful than any individual and impossible for any individual to control.

    • This happens when the individuals optimize for the individual, not the collective.

      • This is the default state, and must always be at least a little true.

    • Swarms insulate people from the consequences of their actions.

      • They make them more willing to take actions that are good for them directly but collectively add up to an obviously bad outcome.

    • Swarms can be marshalled to do obviously terrible things.

      • For example, imagine a betting market that invested tons of money in shorting “person X will not die in the next few days.”

      • A powerful incentive for stochastic but targeted violence.

      • Obviously that’s an extreme example, but smaller examples of the same dynamic exist all over the place.

  • Capitalism grows uncontrollably, as a swarm.

    • When it’s aligned with society’s interests it’s a force for good.

    • When it’s not it’s a force for ill.

    • The most important characteristic is that it's bigger than anyone, and auto-catalyzing.

  • Will LLMs accelerate the Goodhart's law swarm?

    • Or can they help us align with our values?

    • How do we switch from convex to concave?

    • By setting the laws of physics so they align with our values.

    • LLMs’ ability to do qualitative nuance at quantitative scale makes it possible to align the system not with the revealed preferences of our lizard brain, but with the aspirations of our higher mind, in a way not feasible before in computer systems.

  • You keep leaning into what works at a compounding rate until it captures you.

    • At each point it's easier to go with the flow in terms of who to hire, who to promote, features to build.

    • Over time it becomes a structure that's bigger than you and you can't steer.

  • If you treat a living being as a machine you’re going to have a bad time.

  • Organizations are alive, in some very real sense.

    • They have an emergent life force that no one controls directly.

    • They are a superorganism.

    • If you don’t believe in emergence you will miss this obvious fact, and continually be stymied by it.

  • No single person can ever be trusted to make promises on behalf of a superorganism.

    • Because the organism could outlive that person's role in it.

  • When the short term and the long term line up that's transcendent.

    • They strengthen each other.

    • When your “want” and “want to want” are aligned it’s transcendent.

    • Imagine a product that:

      • I feel good using and also feel good about using.

      • I add in a bit of data for one use case and it helps me with other use cases in the future.

      • I create a pattern for my short term need and know it will also help other people in the ecosystem in the future.

      • It's useful single player (LLM as collaborator) but more valuable as more people join.

    • Multiple dimensions of transcendence!

  • Intent blossoming is a powerful technique where the actions of a small number of savvy users can improve quality for everyone else.

    • For example, in the Search context, show images in the search results for [foo] if the ratio of queries in the past 90 days for [images of foo] / [foo] is above some threshold.

    • The small number of savvy users who issue the query [images of foo] are enough to spread that intent for everyone.

    • One person’s effort helps hundreds of other people, implicitly.

    • Not because they did it to help the other people; they did it to help themselves, in a way that implicitly helps everyone else too.

    • This is part of the magic of modern search engines.

    • This can be done because queries aren’t code; they don’t have side effects.

    • Previously you couldn’t do this kind of intent blossoming for code, because code is dangerous.

    • But if you could know that a given bit of code couldn’t be dangerous when executed, you could apply the same technique.
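
    • A sketch of that search-style trigger in code (the counts and the 0.05 threshold are invented for illustration):

      # Show images for [foo] when enough savvy users asked for
      # [images of foo]. Counts and threshold are made up.
      from collections import Counter

      # Hypothetical 90-day query log counts.
      query_counts = Counter({
          "pelicans": 10_000,
          "images of pelicans": 900,   # savvy users spelling out the intent
          "tax forms": 50_000,
          "images of tax forms": 40,
      })

      RATIO_THRESHOLD = 0.05  # invented; a real system would tune this

      def should_show_images(query: str) -> bool:
          base = query_counts[query]
          explicit = query_counts[f"images of {query}"]
          return base > 0 and explicit / base >= RATIO_THRESHOLD

      print(should_show_images("pelicans"))   # True: the savvy few blossomed the intent
      print(should_show_images("tax forms"))  # False: not enough signal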

  • I want intent blossoming within a substrate that is my system of record for my life.

    • Each time I touch the substrate to improve it for a given task, it should also improve things for future tasks.

      • And as a bonus, it helps other people, making me feel even better about my investment of time.

    • If the substrate has that, you have an inductive, self-boosting incentive loop.

  • Looking at the momentum of a developing thing, is it opening or closing?

    • If opening, it typically has a positive second-order derivative.

    • If closing, it has a negative second-order derivative.

    • These look superficially similar, but are fundamentally different.

    • Default-decohering vs default-cohering.

  • The problem with consensus is not the collaboration, it's the lack of curation.

    • Collaboration is good--because you can curate and distill a higher quality output.

    • But consensus is "collaboration with no curation".

    • You get mush: the average of distinct individually viable perspectives is unlikely to be viable itself.

  • I loved John Borthwick’s recent distillation of Ian McGilchrist’s ideas:

    • "The left and right brain essentially create different worldviews.

    • The left hemisphere begins with parts, and any idea of the whole is built up from those parts.

    • By contrast, the right hemisphere begins with the whole and any ‘parts’ are just aspects of the whole that have been artificially decontextualised.

    • This follows directly from differences in each hemisphere's modes of attending.

    • Seeing the whole is not the same as cataloging the sum of the parts.

    • Apprehending is different to comprehending."

    • The left brain is the Saruman; the right brain is the Radagast.

    • Claude’s riff unpacking ‘Apprehending vs comprehending’:

      • "Comprehending is the left hemisphere's mode - it's analytical, sequential, and builds understanding piece by piece. It literally means ‘to grasp together’–taking separate parts and assembling them into a whole. This is the Saruman way: breaking down problems, identifying components, creating step-by-step plans, building cathedrals brick by brick. It's about grasping and controlling.

      • Apprehending is the right hemisphere's mode–it's immediate, holistic, and grasps the whole all at once. It means ‘to take hold of’ in a more direct, intuitive way. This is the Radagast way: sensing the entire system, feeling the emergent patterns, understanding through presence rather than analysis. It's about receiving and recognizing."

  • I love Ben Follington’s piece on Digital Shamanism.

  • Facebook uses Information Flow Control internally to verify data flows adhere to policies.

    • What if you made this decentralized and work for consumers?

    • Within Facebook, they can presume that all of the policy checkers are untampered with.

    • To decentralize it, you’d need some way of having nodes attest to one another that they aren’t tampered with…

  • Imagine viral policies on data.

    • Every bit of data that touches data with restrictive policies absorbs those policies, too.

    • As data gets tainted with other more sensitive data, it becomes sensitive too.
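
    • A minimal sketch of that taint idea, assuming a simple set-union label model rather than any real IFC system:

      # Every value carries a set of policy labels; anything derived
      # from it absorbs the union of its inputs' labels.
      from dataclasses import dataclass, field

      @dataclass(frozen=True)
      class Labeled:
          value: str
          policies: frozenset = field(default_factory=frozenset)

      def combine(a: Labeled, b: Labeled) -> Labeled:
          # Derived data is tainted by everything that touched it.
          return Labeled(a.value + " + " + b.value, a.policies | b.policies)

      public = Labeled("city", frozenset())
      health = Labeled("diagnosis", frozenset({"no-third-party-sharing"}))

      derived = combine(public, health)
      print(derived.policies)  # the restrictive policy spread virally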

  • This week I learned about the Machiavellian Hypothesis.

    • That the reason our brains got so big was a runaway competition.

    • Everyone has an incentive to have a slight edge over their competitor to outwit them.

    • But the ability to outwit a competitor is relative, so as your competitor does it too, you have to invest more to beat them again.

    • A red queen dynamic that, on an absolute basis, left us with big ole brains that we could then use to have power over the world.

  • Locks on doors keep honest people honest.

    • They make it so you can't "oopsies" your way into doing something bad, it has to be intentional.

    • That's a much higher bar of motivation / badness to clear, so they can cut out a lot of bad things in practice.

  • Iterated in-person interactions fundamentally build trust.

    • If someone screws us over and we never see them again, it’s hard to punish them.

    • But if we see them again we can make them feel ashamed, especially in front of other people.

    • If you don’t see them in person, it’s much easier for them to escape into the night.

      • If you never see them again, they can do bad faith things and not feel as bad. 

      • Unlocked doors allow some honest people to fall into being dishonest.

    • If you see the same person again and again in person, you can trust them, because if they acted poorly, you’d likely see them again and be able to make them feel ashamed.

    • This is one of the reasons people react so strongly in road rage; they intuit they won’t see the person again, so might as well make them feel maximally ashamed and scared right now.

    • A heuristic built into our firmware that mostly works, but in the modern world leads to some less than ideal outcomes.

  • “Bold” often means “one-ply thinker”.

    • Action oriented to the point of not thinking through any implications of their actions.

    • Reckless.

    • True boldness is fundamentally bold, not superficially bold.

  • The Saruman "killer instinct" is "willing to throw someone else under the bus if it benefits you."

    • Thinking only of the first-order implications: how the action affects you.

    • Are you willing to do what will help you succeed even if it requires harming someone else to do it?

  • People intuitively assume the short term and long term (micro and macro) will work the same.

    • But emergence means that they absolutely do not.

    • It’s extremely rare for them to align.

    • When they do, it’s transcendent.

  • You have to grow writing in a way that feels meaningful.

    • A tool that gives you a crappy first draft is giving you astroturf; you can't grow from it.

    • You have to feel like you own it from the beginning, the first viable moment.

  • I produce most of my written content in what I call my “Gonzo mode”.

    • I get extremely riled up, brimming with things I’m dying to write down.

    • Then when I get a continuous hour free slot, I let loose, rip roarin’ through the thoughts like river rapids.

      • I have to have a time crunch, a limited deadline to spin up in.

      • Nap time for the kids on the weekends fits the bill.

    • Gonzo mode is my extrusion method for insights.

  • When someone understands the problem, they’re ready for the solution.

    • Once they see the lock, they realize they need the key.

    • If you show them the key first, they won’t realize they’ll need it.

    • Especially for multi-ply insights.
