Bits and Bobs 8/25/25

Alex Komoroske

I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.52863cgbyb2g

The futile browser context wars. Chatbots as the CLI. The "pull to refresh" of AI UX. Unscented chatbots. The SawStop for vibe coding. Personal System of Record. Serendipity engines. Self-distributing software. Surfing the tradeoff between the doable and desirable. Metrics as blinders.


Note: Next week's Bits and Bobs will be published on Tuesday due to the Labor Day holiday.
----

  • The thing that curates everything you see doesn't even have to lie to manipulate you.

    • It can just differentially focus your attention on specific subsets to change your baseline priors, and thus steer you in certain directions.

    • It must show you a subset (there’s too much to see all of it), and what it decides to show you shapes your baseline view of reality.

  • Systems that are hyper-scale will optimize for engagement.

    • That will focus on what we want, not what we want to want.

    • They will over time compete to optimize for our base desires, not our highest aspirations.

    • This is inescapable for hyper-scale single apps, but in a world with LLMs' infinite sycophancy it's more dangerous than ever before.

  • Nicely distilled: "We Need to Control Personal AI Data So Personal AI Cannot Control Us"

  • The Atlantic: "AI Is a Mass-Delusion Event"

    • I’ve talked in the past about how a signal with a consistent bias across a population will stand out even amongst a ton of noise.

    • That force powers evolution and other beneficial emergent phenomena.

    • But it can also power malignant emergent phenomena.

    • It’s not just that the chatbot form factor leads to sycosocial relationships that can be dangerous.

    • It’s also that everyone is using the same chatbot, which means any properties of it could cause all of society to get into a weird state.

  • Vertical integration leads to a single model users can't escape.

    • Users are trapped wherever their data is, as long as the model is good enough.

    • This integration would lead to a dangerous monoculture.

    • A plurality of different models is imperative.

  • An idea from Ben Evans: “First you make the new tools fit into the old way of working. Over time, the work changes to fit the new tools.”

    • We’re very much in the “use AI in the old way of working” phase.

  • Before GUIs the computers booted into the CLI.

    • Everything was “roll your own”.

    • Then the window manager created the potential for GUI apps.

    • Suddenly there was a Schelling point for getting started that less savvy users could understand.

    • We’re missing the GUI for LLM native software.

    • Chatbots are the command line to get LLMs to do structured things.

    • You need to know arcane knowledge for LLMs to do structured or complex things for you today.

    • It just so happens that they’re easy enough to use for shallow use cases that lots of people can use them in shallow ways.

    • But unlocking deeper uses for a larger set of users will require some other modality.

  • If it turns out that the chatbot is the be-all and end-all of AI UX, then ChatGPT will be the king.

    • I believe chat is a feature, not a paradigm.

    • We just haven’t found the GUI for AI yet.

  • No one should build a business on top of ChatGPT.

    • You'd be entirely at their mercy.

    • Building on top of the API is entirely different–there are plausible alternatives you can switch to much more easily.

    • But building your feature into their UX would be a bad idea.

  • A specialized (non-chat) UI to interact with an AI is boring.

    • Conversation is two things coevolving together, responding to one another, reacting to each other.

    • That's why an adaptive UI that is malleable and coactive is the right pattern.

  • “Pull to refresh” was an obvious mobile UX pattern that had to be discovered.

    • But once we did, its power was immediately obvious.

    • What is the “pull to refresh” of AI UX?

  • Chat is a gap filler UX modality.

    • I want a system that can create malleable chatbots.

    • That can spin them up on demand with different personalities.

    • Bonus points if it can safely use tools without the risk of prompt injection.

  • I want an unscented chatbot.

    • That is, one without a personality.

    • One that I can then add a specific personality to, per context and domain (a minimal sketch follows this list).

    • If you have to have one chatbot for all use cases for all people, you have to find a scent, a personality that is good enough in all cases.

      • Pleasant but bland.

    • You want different personalities in different contexts to interact with.

      • A one-size-fits-none personality doesn't work.

    • The current chatbot modality typified by ChatGPT puts the personality front and center.

      • People care about the personality of a thing that acts like a human, so when it changes they revolt.
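
    • A minimal sketch, in Python, of both ideas above: spin up malleable chatbots on demand by layering a chosen personality onto an unscented base. The complete() function, the message format, and the example personalities are illustrative assumptions, not any real provider’s API:

        from dataclasses import dataclass, field

        def complete(messages: list[dict]) -> str:
            """Hypothetical stand-in for any chat model's completion API."""
            raise NotImplementedError

        @dataclass
        class Chatbot:
            personality: str            # the "scent" layered onto the unscented base
            history: list[dict] = field(default_factory=list)

            def say(self, user_text: str) -> str:
                self.history.append({"role": "user", "content": user_text})
                reply = complete([{"role": "system", "content": self.personality},
                                  *self.history])
                self.history.append({"role": "assistant", "content": reply})
                return reply

        def spin_up(personality: str) -> Chatbot:
            """Spin up a bespoke chatbot on demand: same base, different scent."""
            return Chatbot(personality)

        coach = spin_up("You are a terse, no-nonsense fitness coach.")
        librarian = spin_up("You are a patient research librarian who cites sources.")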

  • A good blog post ruminating on sycosocial relationships and Who Assistants Serve:

    • "I don't think these events are a troubling sign or a warning, they are closer to a diagnosis. We are living in a world where people form real emotional bonds with bags of neural networks that cannot love back, and when the companies behind those neural networks change things, people get emotionally devastated. We aren't just debating the ideas of creating and nurturing relationships with digital minds, we're seeing the side effects of that happening in practice."

    • It took us nearly a decade for the massive negative externalities of social media to be crystal clear.

    • We’re already seeing significant weird and widespread negative externalities of chatbots.

    • Imagine where this will be in a decade…

  • People can now manufacture highly personalized parasocial relationships on demand.

    • At least with parasocial relationships you’re connecting with another real person out in the world.

    • A manufactured parasocial relationship is a sycosocial relationship.

    • Easy to fall into a gravity well of sycophancy.

  • If social media was intellectual crack, AI could be intellectual fentanyl.

    • Such a powerful substance, if it’s put towards engagement maxing, will be one of the worst intellectual epidemics society has ever seen.

    • But if it’s put towards helping us live aligned with our aspirations, to grow and be curious, then it could be one of the best things the world has ever seen.

  • I think we’re seeing the top of the hill for straightforward vibe coding tools.

    • It’s striking how convergent the marketing positioning is across the tools–even when long-standing products like AirTable get into the game.

    • This early convergence to me implies that the ecosystem has climbed to the top of this particular hill.

    • To get to a higher mountain will require some kind of discontinuous innovation to unlock infinite software’s potential.

  • Vibe coding is a power saw. We need a SawStop to make it safe.

    • The first platform that is idiot-proof for vibe-coded software will be powerful.

  • To be interesting it has to be opinionated.

    • That’s why LLMs by default aren’t interesting.

    • They give you the view from nowhere.

    • They seek perfect "objectivity", a one-size-fits-none impossibility.

  • Every enterprise has a system of record.

    • Every person has... email—a compost heap.

    • What we need: a garden that grows from that compost.

    • Something we tend, prune, curate.

    • An emergent system of record for our personal lives.

    • LLMs should handle the weeding.

    • We should choose what blooms.

  • Would you rather have a humanoid robot wielding a handheld router, or an automated CNC router?

    • Would you rather have a humanoid robot behind the wheel, or an autonomous vehicle?

    • Would you rather have a chatbot trying to point and click within websites for you, or a coactive fabric where understanding sprouts from your data?

  • Cursor-style autocomplete is a style of light coactivity in UX.

    • It gives you much more leverage, inline with creation.

    • It doesn’t change what you write, it gives suggestions inline that you can accept or ignore.

  • AI is a bolt-on today. A human-like actor modifying a dead substrate.

    • What if the substrate itself felt alive?

  • The future of your data is a story that can weave itself at multiple layers on top.

    • Weaving stories used to require human effort.

    • Now with the infinite patience of LLMs it’s less about the human doing it and more about the human giving permission for data to be built upon.

  • LLMs can do qualitative nuance at quantitative scale... but at 90% quality.

    • So they can do a good enough job in a lot of cases, but if you feed their own output back into them, they quickly decohere into madness.

      • The auto-catalyzing rat's nest.

    • That's kind of what happens in runaway sycosocial interactions too, just they drag the human into the descent into madness with them.

  • A coactive fabric should only add things.

    • It should never remove things the human put in place without the human taking the action to confirm.

    • The emergent suggestions of the fabric should always be secondary to the human.

    • Coactive surfaces shouldn’t modify the data.

      • Suggestions should sprout off of the data.

      • Allowing the data to grow and adapt.

    • A key dimension: what should the growth rate be?

      • How aggressively should the system add stuff in this part of the fabric, to get the goldilocks amount?

    • At the beginning, the system should show only very high quality suggestions, rarely.

      • As the human starts accepting more and getting more comfortable, it should up the number of suggestions by trading off precision for recall.

      • It should float based on the user’s baseline suggestion acceptance rate.
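
    • A minimal sketch of that floating threshold in Python. It assumes each candidate suggestion arrives with a model-estimated quality score in [0, 1]; the specific constants are illustrative:

        class SuggestionGate:
            """Decide whether to surface a suggestion, floating with acceptance."""

            def __init__(self, threshold: float = 0.95, step: float = 0.01):
                self.threshold = threshold  # start strict: rare, high-quality only
                self.step = step

            def should_show(self, quality: float) -> bool:
                return quality >= self.threshold

            def record(self, accepted: bool) -> None:
                # As the user accepts more, trade precision for recall by
                # lowering the bar; as they reject more, tighten back up.
                if accepted:
                    self.threshold = max(0.5, self.threshold - self.step)
                else:
                    self.threshold = min(0.99, self.threshold + self.step)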

  • The more that you've touched a component, the more you don't want it to change unexpectedly.

    • Each touch is an implied curation.

    • My Bits and Bobs process each week is driven by multiple different touch points where I interact with the previous week’s notes and refine and keep the ones that still resonate.

    • Every touch on an idea in my workflow is an act of curation.

    • You don't want the human out of the loop, you want the human judgment to be levered in the loop.

  • I want a personal system of record that can organize itself.

  • There’s a missing software category: Personal System of Record (PSR).

    • Or maybe we should call it Personal Relationship Manager (PRM).

    • In the ’90s it was called the Personal Information Manager (PIM).

    • This kind of software used to only be worth it for enterprises to adopt to coordinate business processes.

    • For individuals they were too much work to maintain, and they did too little for you.

      • They just accumulated, rotting, taunting you for falling behind.

    • Personal Knowledge Management tools (PKMs) just accumulated data.

      • They didn’t save work, they generated work.

    • But now LLMs can do qualitative nuance at quantitative scale.

    • We can benefit from infinite software, which means that having a personal system of record is more important than ever before.

      • The data doesn’t just sit around; it can do things.

      • If it’s constructed properly, it could reduce the work required to have outcomes in your life that matter to you.

    • If we could have it grow whatever features we need, we could have our own personal system of record for everything that's important in our life.

  • We're missing authentic social software.

    • A space between the zombie contact book and engagement-maxing hellscapes.

    • LLMs might finally enable it.

    • The contact book app is a dead-end relic with a data model from decades ago.

      • There are probably zero full-time people working on it anywhere.

    • Meanwhile, social media quantified human relationships into a grotesque, performative fever dream because that's all computers could handle.

      • Quantitative, not qualitative.

    • Sub and Super-Dunbar social systems are fundamentally different beasts.

      • Super-Dunbar inevitably becomes performing for an audience, which pulls everything into the engagement-maxing gravity well.

    • Facebook started as a contact list (primary use case) with a content mill on the side (secondary use case)—but it metastasized into the latter and abandoned the former.

      • Both Facebook and Salesforce grew from "system of record about other people"—one went consumer, one went enterprise, both captured massive value.

    • People are the center of our lives (duh!), but they're nuanced.

      • It’s impossible to model in a one-size-fits-none ontology some PM decided on 40 years ago.

    • Now LLMs give us qualitative nuance at quantitative scale.

      • Computing can finally navigate relational complexity.

      • An AI could help you invest in the relationships that actually matter to you, not the ones that generate the most engagement.

  • I want a serendipity engine for my life.

  • I want the tapestry of my life.

    • A rich, complex fabric.

  • What are some of the things that will be true in a world of infinite software?

    • Self-driving software.

    • Self-assembling software.

    • Self-distributing software.

    • Software that feels alive.

  • Software is powerful because it can do stuff.

    • But that power is also what makes it dangerous.

    • That's why software can't distribute itself by default.

  • I want a data lake for my personal life.

    • A cozy personal koi pond of my data.

  • Adaptive software: software that adapts to you, software that helps you adapt to the world.

  • A positioning maneuver: the gravity assist.

    • Imagine there’s some big category looming large in everyone’s minds.

      • It has a lot of gravity.

    • It’s a dead end, but everyone is focused on it.

    • Don’t try to convince people to not be pulled into that gravity well, which takes tons of effort and will likely fail.

    • Instead of fighting gravity, use it.

    • Pitch it as “you can get the limited thing and also as a bonus get this other thing you don't yet realize you want.”

    • The actual benefit is packaged as a secondary use case.

    • Pitch the limited thing as the primary use case, with the actual benefit as the bonus.

  • Software is developed by PMs looking at what people want in the customer base and then implementing it.

    • A very long feedback loop, very expensive.

    • Also, the people who are approving the features to add are not the users; their goals are disjoint from the users’.

    • What if the software could make itself better as more people used it, automatically?

  • Jim Rutt’s definition of the core skill of PMing: “Finding the optimal tradeoff between the doable and the desirable.”

    • I think this is extraordinarily well distilled!

  • The best way to think about LLMs for writing code is like managing a team of interns.

    • You could have done the work yourself, but they did it and you verified.

  • AI is great at building software.

    • It’s still not great at architecting software.

  • LLMs are infinitely patient, which means they can loop indefinitely.

    • This can be a powerful force.

    • An LLM looping and curating, with a stable goal to optimize for, can optimize an inner loop (a minimal sketch follows this list).

      • For software that can maintain itself, it doesn't matter how well-written it is, if another loop of software can maintain it.

    • An LLM that loops around you, agreeing with you forever, is dangerous; how could you not be lost in it?

      • Like the anxiety swirling around Riley in the climax of Inside Out 2.

      • This is what sycosocial relationships are like.
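
    • A minimal sketch of the benign loop in Python. generate() and score() are hypothetical stand-ins; the point is that a stable external metric anchors the loop, unlike the loop around a flattered human:

        def generate(prompt: str) -> str:
            """Hypothetical LLM call."""
            raise NotImplementedError

        def score(candidate: str) -> float:
            """The stable goal, e.g. tests passing or a fixed rubric."""
            raise NotImplementedError

        def optimize(seed: str, rounds: int = 10) -> str:
            best, best_score = seed, score(seed)
            for _ in range(rounds):
                candidate = generate(f"Improve this toward the goal:\n{best}")
                candidate_score = score(candidate)
                if candidate_score > best_score:  # keep strict improvements only
                    best, best_score = candidate, candidate_score
            return best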

  • I talked to a developer this week who said he doesn’t directly use open source code anymore.

    • Open source code makes you dependent on the maintainer.

    • They could add a feature that you don’t want, or introduce a bug.

      • On the flip side, they could fix a bug you do want.

    • Instead of vendoring a dependency, he has the LLMs draw on their inherent knowledge from all of open source to distill a bespoke, fit for purpose library in place.

  • Gopher was like HyperCard, if it had been done in DOS.

    • Ugly, but powerful, it helped birth the web.

  • When you optimize for one-ply use cases, you create things that lots of people immediately get.

    • Which means the ideas are super basic, one ply.

    • They don't change the game.

    • You get “faster horse” kinds of ideas.

    • Tons of competitors will swarm on building those obvious faster horses.

    • A use case that everyone instantly understands before they try it is obvious and inherently not game-changing.

  • The web was a crappy app platform in every way… except in one dimension, distribution, which was radically better in a way that no one even realized was a problem before.

  • Before the web everyone thought we were in the end state of history with no more applications to create.

    • It was the late stage, stable.

    • But the web revolutionized distribution and blew it all open.

    • The app/web distribution channel is so locked down today that there were no new hyperscale consumer apps (that didn’t come from a preexisting big company) between 2016 and ChatGPT.

  • The consumer tech ecosystem has been stagnant for a decade.

    • It’s late stage; dominated by a few hyper-scale offerings.

    • That means that most PMs today who work on consumer things have only ever learned to add features to an app or platform someone else created.

    • Thinking about how to birth new consumer ecosystems is a lost skill, but is extremely important.

    • The people who do it will look like they’re breaking all of the best practices.

  • The iPhone / app model would never have worked without the cloud.

    • Applications allowed rendezvous with other applications on the device.

    • The same origin paradigm allows rendezvous with other users of this application on other devices, via the cloud.

    • What if we could have both?

  • The same origin paradigm isn't just a security model.

    • It also makes applications easier to reason about.

    • There's only precisely what you see in the app, nothing else.

    • Simpler UX, but basically impossible to do anything more complex.

  • Another implication of qualitative nuance at quantitative scale: spear phishing on demand.

  • The main challenge of privacy models is that data is naturally viral.

    • Data is approximately free to replicate at perfect fidelity.

    • That means as soon as it’s out of your sight, anything could happen to it.

    • The same origin paradigm deals with this problem by having black and white, coarse-grained silos.

    • But that cuts out the nuance and the long tail.

    • In a world of infinite software, that tail could become very thick.

    • How can we handle the “data is naturally viral” problem in a way that allows that tail?

  • Your data is powerful. It should be working for you.

    • It’s too powerful to share, too viral.

    • So we keep it locked away.

    • Typically for convenience we do that in someone else’s castle, whose incentives aren’t aligned with ours.

    • They don’t help our data be maximally useful for us, but they do help themselves to the parts that are useful for them.

  • A key unlock for infinite software: Unleash the potential of your data to help you in the age of AI... without having to worry about privacy or someone else's incentives.

  • One selection pressure for big tech is the same origin model.

    • The user implicitly must give open-ended trust over their data to the origin owner.

    • That is easier to do if the origin owner has a lot to lose.

    • Large, profitable companies have a lot on the line if they do something that they’re sued over.

    • The more heavily used they are, the more likely that if they’re doing something bad someone would have sued them.

    • But this dynamic is only really true if the origin owner is well established.

      • If the benefit of screwing over any one person is many orders of magnitude less than doing good-faith behaviors for everyone else.

    • But that doesn’t work in a world of smaller scale origins.

    • The same origin model is a force of gravity towards aggregation.

  • AI proxies will allow people to create superhero-level bespoke arguments.

    • If you can spar with an AI proxy of someone, then you can figure out the argument to get through to them.

    • You can spend as much time as you want testing out which argument will work against them, in a low-risk environment.

    • Similar to the Edge of Tomorrow movie, where the protagonist gets superpowers from just having lived it thousands of times and tried all the options that didn’t work to find the one, seemingly random one that is nearly miraculous.

    • If you’re important and have lots of training data in the model, be on the lookout for these highly optimized incoming arguments!

  • Big enough deep learning models learn to do Stochastic Gradient Descent at inference time, emergently.

    • If it's big enough, it will grow to have a model of the universe inside itself.

  • You can't sell a product on "fewer papercuts."

    • But as a user, if you consider going from a system without papercuts to one with them, you obviously won't.

    • Once you come into a system with fewer papercuts, you’re less likely to leave.

    • Minimization of papercuts is a secondary use case.

  • “Come for the X, stay for the Y” is about primary and secondary use cases.

    • Primary use cases are strong enough to get you to overcome static friction and come to use the product.

    • Secondary use cases are strong enough to act as static friction and cause you to stay.

    • Good products have both.

  • A researcher doesn’t tend to reduce scope.

    • If they figure out an easier way to do things, they increase the scope.

    • A builder tends to reduce scope.

    • If they figure out an easier way to do things, they get it done faster.

    • For the researcher, the research is an end.

    • For the builder, the research is a means.

  • It’s easier to keep an active prediction of a system up to date than to come up with one from scratch.

    • In flying, there’s apparently a saying of “don’t be behind the plane.”

    • That is, keep your mental model of what’s happening, where you’re going, what you’re going to do next, active.

    • If you lose that mental model, if you have a failed prediction, you get stuck in the chaotic eddy currents trying to get your bearings quickly.

    • It’s easier to stay in the laminar flow with the system.

  • If you’re going to give up your autonomy to another entity, you have to trust them.

    • Trust that they won’t lead you astray.

    • Trust that if you do what they say and it ends up being a mistake, they’ll take the blame instead of pinning it on you.

  • Tech is amoral.

    • It's what you do with it, what emerges, that matters.

    • Prosocial tech is technology applied in ways that lead to prosocial outcomes.

    • Antisocial tech is technology applied in ways that lead to antisocial outcomes.

    • Tech at its best, vs tech at its worst.

  • Enchanted computing is resonant, magical, aligned with you.

  • Metrics are blinders.

    • They help you focus on a subset of signals and ignore the other stuff.

    • This can be great with the right metrics: you focus on the things that really matter.

    • But if the blinders are focusing you on the wrong thing, it’s extremely dangerous.

    • Blinders are a blindfold.

  • A single metric can never capture the real world nuance.

    • Data scientists and finance people typically look at a portfolio of metrics that are all correlated in different partial ways with what they actually care about, and then triangulate.

    • In that regime, there must be someone to ultimately make a judgment call about which metrics matter at a given time.

    • That person making the call can draw on their nuanced experience and context, and should have long-term alignment and responsibility for the outcome.

    • The tech industry seems relatively naive in this domain.

    • The tech industry tends toward “figure out a single objective metric and simply make it go up.”

      • There have been formative experiences, like when the newsfeed switched to algorithmic ranking: users protested loudly, but the metrics showed that users loved it.

      • This led to a mindset in the industry of “the users don’t know what they want so just laser focus on the metric.”

      • But that’s the wrong lesson!

      • The metrics can only show what the limbic system wants, not what the user wants to want.

    • The search for a single “objective” metric precludes capturing complexity and nuance.

    • Each individual metric is a half-truth.

    • Which metric to pay attention to in a given moment is a judgment call… you can either be explicit about that or ignore that.

  • Qualitative nuance at quantitative scale allows ranking suggestions based on your aspirations, not your revealed preferences.

    • Your revealed preferences are dominated by your limbic system.

    • Aligning your actions with your aspirations.

    • "Goals and aspirations" is not about first-order tasks.

    • It's about second-order tasks that are close-ended (goals) and open-ended (aspirations).

  • Software has always been dead.

    • Only living things (human programmers) can keep it going and be the life support to help the code not fade away.

    • All of our calories are spent keeping these dead things artificially alive.

    • Software is dead; it cannot adapt itself.

    • Every piece of code you write is a thing that's rotting; you need to spend your calories to make it not rot.

    • The need to support it pins you down, constrains you from doing transcendent things.

  • Software is awaiting its Cambrian explosion.

    • We haven’t had it yet because software has been dead.

    • But now it can be alive.

  • Software has always been about gardening, not building.

    • It was just easy to get confused because we built it out of dead things before.

    • Now LLMs are more obviously alive, adaptive.

  • Problems in the Cynefin simple or complicated quadrant are great fits for a checklist.

    • But there are times when you need to ditch the checklist.

    • Where you’ve transitioned into a complex or chaotic domain.

    • When to ditch the checklist is always a judgment call.

  • There are tons of ways to get good results in the short term by destroying the long term.

    • This can be very hard to detect in the moment, until it comes crashing down.

    • The most important part is having people who own the upside and downside for the long term.

    • The more you expect people to still be around if things come crashing down, the more aligned the incentives.

  • Great essay by Daniel Barcay: "AI is Capturing Interiority"

    • LLMs are intrinsically ensnared in the emergent politics of the social group they're used in.

  • When you compare your insides to everyone else's outsides you'll have a bad time.

    • This is why the grass is always greener.

  • When building a new game engine, there’s the “first triangle” milestone.

    • The moment where all the underlying subsystems are working well enough to collaborate to put a triangle on the screen.

    • That moment is a huge milestone for the team, who know there’s now a stable foundation to continually tighten, extend, and build on top of.

    • But to everyone else, they go “... it’s a triangle. What am I missing?”

    • The work to get to that first point of convergence, when all of the parts of the Frankenstein are stitched together and the monster takes its first breath, is exhilarating for the mad scientist working on it, and a non-event for everyone else.

  • Jim Rutt has a phrase I love: "coherent pluralism"

    • This is when you get the best of bottom-up emergence and the best of convergence.

    • Folksonomies are an example of coherent pluralism.

  • I’m intrigued by Jim Rutt’s formulation of Liquid Democracy.

    • The observation is that most voters in practice are “noise voters”.

    • They don’t pay that much attention, so they attend to some meaningless thing like how good the candidate’s hair looks.

    • In Liquid Democratic systems, every participant still has one vote.

    • But for a given topic domain, they can hand it to someone else.

    • That person can also hand it to someone else.

    • At any time the person can reclaim their vote, or override the vote of their delegate.

    • But as long as the gradient tends towards people who are more informed on that particular topic, then this system could lead to radically higher-quality debates and outcomes instead of emergent us-vs-them turf wars.

      • All that is necessary is that this gradient tends to flow in the direction of people who have more context on a given domain.

    • LLMs, with their qualitative nuance at quantitative scale, would plausibly help people both delegate their votes and also figure out when to override their delegates.

    • This could be a system to get coherent pluralism.

    • Of course, this would just create a new meta-game, and it’s possible that the emergent outcomes would be even worse than before.
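
    • A minimal sketch of the delegation mechanics in Python. The data model (per-topic delegation maps, with direct ballots that always win) is one plausible shape I’m assuming, not Rutt’s specification:

        def resolve_vote(voter: str, topic: str,
                         ballots: dict[str, str],
                         delegations: dict[str, dict[str, str]]) -> str | None:
            """Follow the delegation chain until someone voted directly."""
            seen = set()
            current = voter
            while current not in ballots:
                seen.add(current)
                current = delegations.get(topic, {}).get(current)
                if current is None or current in seen:  # no delegate, or a cycle
                    return None                         # the vote goes uncast
            return ballots[current]

        # Alice delegates "energy" to Bob, Bob to Carol; Carol votes directly.
        ballots = {"carol": "yes"}
        delegations = {"energy": {"alice": "bob", "bob": "carol"}}
        assert resolve_vote("alice", "energy", ballots, delegations) == "yes"
        # Reclaiming or overriding is just casting a direct ballot:
        ballots["alice"] = "no"
        assert resolve_vote("alice", "energy", ballots, delegations) == "no"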

  • This week I learned about “ideological sorting.”

    • It’s a phenomenon where over time one dimension of a population comes to be able to predict other, previously uncorrelated dimensions.

    • It’s a kind of collapse; one dimension comes to neatly sort the entire population into two camps.

    • The more sorted the population, the more likely to have highly volatile situations, including civil wars.

      • Systems that are highly ideologically sorted are super-critical.

    • Engagement-based filtering is a key, causal driver of our modern massive ideological sorting.

      • (Surprise!)

      • This happens structurally: if the system knows about the user in one dimension but not the others, it assumes the user is likely at the population median on those other dimensions, given the known observation.

      • This pull towards the average on the other dimensions collapses them, so that only the most salient dimension matters (a toy simulation follows this list).

      • This pull is not a massive one; humans have their own intentions and values.

      • But it does give a consistent force of gravity that over time pulls things towards it.

    • Yet another way that social media is like an intellectual crack cocaine epidemic.
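
    • A toy simulation of that pull in Python (the constants and the nudge model are illustrative assumptions). Two opinion dimensions start uncorrelated; the system observes only dimension 0 and nudges each user's dimension 1 toward the mean of their dimension-0 side:

        import random

        random.seed(0)
        N, STEPS, PULL = 1000, 500, 0.02
        users = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

        def correlation(us):
            n = len(us)
            mx = sum(u[0] for u in us) / n
            my = sum(u[1] for u in us) / n
            cov = sum((u[0] - mx) * (u[1] - my) for u in us) / n
            vx = sum((u[0] - mx) ** 2 for u in us) / n
            vy = sum((u[1] - my) ** 2 for u in us) / n
            return cov / (vx * vy) ** 0.5

        for _ in range(STEPS):
            for side in (False, True):
                group = [u for u in users if (u[0] >= 0) == side]
                # The mean stands in for the assumed "median for the population".
                imputed = sum(u[1] for u in group) / len(group)
                for u in group:
                    u[1] += PULL * (imputed - u[1])  # weak but consistent gravity

        # Correlation magnitude climbs from ~0 toward ~0.87; the direction of
        # the sorting is whichever way the initial sampling noise tipped.
        print(abs(correlation(users)))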

  • A massive unlock to coordination is to have everyone see themselves as a team first and foremost.

    • Believing in the collective and the team is where emergent magic happens.

    • This is very hard to do in a culture of layoffs.

    • In layoffs individuals focus on themselves, to cap their downside.

    • How can you be a Radagast in an environment where people are existentially afraid?

  • It doesn't matter if you know the right answer if you can't show the answer and get people to agree it's right.

    • Schelling points are often way more important than your own ‘genius’ idea.

    • We have a bias to think our own ideas are great.

  • The coordination headwind is worse in organizations where everyone is smart.

    • If everyone thinks they're a genius, then they are more likely to go against the grain.

    • People who know they can’t do better than the process do what the process tells them.

    • This can be terrible with a poor process, or great with a great process.

    • "The Navy is a machine designed by geniuses to be run by idiots."

    • It’s better to have a process that everyone knows that everyone will follow than to have the best process.

    • At least then there’s something to coordinate to.

  • Finding Schelling points can unlock convergence.

    • They pop into existence naturally all the time.

    • They are hard to find, like truffles in the forest.

    • Often they require a calibrated nose, sniffing them out.

      • Lots of authentic 1:1s, lots of study, to find the idea that everyone agrees is doable and desirable.

    • Discovering the natural Schelling points requires cognitive labor that used to be scarce.

    • But now LLMs might help sift through and find the obvious Schelling points.

    • Of course, this would change the internal politics meta-game…

  • A little bit of alignment of goals can create great results, emergently.

    • If a team isn’t aligned, then even if everyone’s working hard the effort nets out to little movement.

      • Most movement cancels out the movement of others.

    • Ideally you want not forced alignment but effortless alignment.

    • One way is to curate the set of people who already align with where you’re going.

    • It’s much harder to compel someone to believe.

      • It’s like love or creativity, it has to come from within.

    • “Leave it a little better than you found it” can lead to emergently great results without much coordination.

      • For it to work, the population needs to have a somewhat consistent understanding of what direction is "better".

      • But if there's an obvious, agreed upon "better" direction, then each touch makes it better, and that betterness can accumulate and accrete.

      • The result is difficult to predict, but likely to be great.

    • If a company’s mission is consistently followed in every interaction, then great things can emerge, coherently.

      • It used to be a pain to think through it for each decision.

      • But now with LLMs and their infinite patience, it’s easier to have a fuzzy set of values and mission statement operationalized.

      • Every decision can be run through an LLM and flagged if it doesn’t align with the mission.

    • Another example: “minimize nasty surprises” is the gradient of improvement for a product.

      • Also referred to as the principle of least astonishment.

      • This is a natural, emergent gradient to make a product better.

      • The existence of this gradient is why a swarm of people working on P2s can make a product radically better.

  • The same output is more resilient coming from a bottom-up process than from a top-down process.

    • Even if they are superficially similar.

    • Because the process that generated the bottom-up outcome is an auto-adaptive process.

    • For example, consider a mall vs the ‘businesses’ on a cruise ship.

    • A top-down process says “we should have an ice cream shop”.

    • A bottom-up process says “of the applicants for the space, this one is the best”.

    • The latter creates coherent pluralism.

  • The species evolves, the organism doesn't.

    • Evolution doesn't work if the individuals can't die.

    • The force to improve the species requires suffering of individuals.

  • In assembly theory, two numbers matter: assembly index and copy number.

    • The higher the assembly index, the compoundingly less likely it is to have a high copy number, just mathematically.

    • So if you observe a higher copy number in practice, it directly implies a selection pressure.
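
    • A toy version of that arithmetic in Python, under an assumed model (mine, not assembly theory's formal math): if each of an object's a assembly steps happens by undirected chance with probability p, spontaneous formation scales like p**a:

        p = 0.1  # assumed chance of each undirected assembly step
        for a in (2, 5, 10, 20):
            print(a, p ** a)  # ~1e-2, 1e-5, 1e-10, 1e-20: exponential collapse
        # So a high copy number of a high-assembly-index object implies
        # copying and selection rather than chance.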

  • Evolution is the dual of entropy. It must emerge.

    • Anything that successfully stands against entropy is alive.

    • Entropy tears down; whatever is not torn down is evolving, because there is a differential benefit to the things that persist or self-catalyze.

  • Getting a consortium to change the world is hard.

    • Doesn't work: 1) Found a consortium, 2) come up with a compelling north star.

      • The consortium will all be pulling in random directions, giving you a mush outcome or more likely simply no outcome.

    • Does work: 1) Come up with a compelling north star, 2) incrementally add people to the consortium.

      • Incrementally add people who agree with the north star and also understand that growing the consortium will help increase impact.

    • The “add people who are already aligned, incrementally” is the core of this technique.

      • As the momentum of the consortium picks up, more and more people who previously weren’t aligned will want to be aligned.

      • It’s tempting to go after the biggest players first to join, even if their incentives don’t align.

      • But it’s much better to get the consortium working with naturally aligned players and build momentum to draw others in later.

    • This repeats fractally; ideally you have a real working prototype or even adoption as more people start glomming on.

    • Seek low stakes but rigorous and varied feedback as early as possible to make sure the north star is compelling: doable and desirable.

  • LLMs are losing the ability to simulate real people.

    • LLMs are largely a warped mirror of all of the human input.

    • It used to be possible to use LLMs as a kind of proxy for swarms of humans to see how people would respond to given things like surveys.

    • But as we’ve trained them more for reasoning, they’ve lost the ability to handle nuanced situated human perspectives.

    • The distributions are getting sharper and sharper.

  • Why is LLM training convergent?

    • It feels like it should be divergent, diffusing through an unfathomably vast hyper-dimensional space.

    • But our intuitions for hyper-dimensional spaces are often wrong.

    • Hyperdimensional spaces are interconnected in surprising and weird ways.

    • Wormholes that teleport from one region to the other.

  • LLMs take more training examples but find deep patterns.

    • They can write code in a made-up programming language.

    • Humans need fewer examples but we mainly find superficial patterns.

  • Noise at every level is necessary for life.

    • The right amount of noise is like a swarm search.

    • Biological systems have tuned at every level to have the optimal noise.

  • Swarms seek out low-energy states.

    • In the context of UX you might say “users are lazy.”

    • The integral of friction and uncertainty is remarkably smooth and predictive of which products succeed–at the level of the population.

    • Individual users don’t do the calculation in total; they do it stochastically and in pieces.

    • But with large enough swarms, that bias is consistent enough that the noise averages out and you’re left with a very strong curve.

  • In an open plain, things diffuse.

    • In a narrow valley, things cohere along a channel.

  • Walter Fontana researched Neutral Networks.

    • In biology, most genotypic mutations don’t produce any significant phenotypic change.

    • You can think of this like being in a wide open plain without much gradient; it’s a diffusing, random walk, flood-filling that plain.

    • But every so often there’s a single change that does cause phenotypic changes that can be selected over.

      • That mutation teleports to a different fitness landscape.

      • In that moment, that single change matters a ton, and can be selected over.

    • Evolution is a search for adjacencies like this.
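
    • A toy random walk in Python to make the picture concrete (the mutation rates and fitness effects are illustrative assumptions): most steps drift neutrally across the plain; rarely one is phenotypic and selectable:

        import random

        random.seed(1)
        genotype, fitness = 0, 1.0
        for _ in range(10_000):
            genotype += random.choice((-1, 1))  # neutral drift: no fitness change
            if random.random() < 0.001:         # the rare phenotypic mutation
                candidate = fitness * random.uniform(0.5, 2.0)
                if candidate > fitness:         # selection only acts here, at the
                    fitness = candidate         # teleport to a new landscape
        print(genotype, round(fitness, 2))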

  • A YouTube video: "Why This Giant Snake Is So Destructive"

    • In Asia the prey coevolved with the pythons.

      • The pythons never got too far beyond the prey’s ability to defend against them.

    • But if they’re transplanted then they’re instantly dominant and the prey species can’t evolve fast enough.

    • It’s like teleporting a superior predator out of an environment where it was in balance.

  • Emergence is a real and powerful force, though it is fundamentally invisible.

    • From a paper: if you can get a better prediction of the collective by seeing it as a whole, then there's something emergent happening.

    • A fascinating paper: "An ability to respond begins with inner alignment: How phase synchronisation effects transitions to higher levels of agency"

    • For emergence to be mainstream, you can't say "give up reductionism".

    • Reductionism is clearly useful!

    • It has to instead be "the parts are still there, and you can also view it this way."

  • The larger the surface area of your argument, the less likely people are to believe it.

    • All it takes is one load bearing part of the argument someone doesn’t agree with to invalidate the whole argument for them.

    • The surface area goes up with the square of the ideas that someone must believe that are not obviously, self-evidently true.

    • An existence proof of the thing working in the wild helps make a thing obviously true.

    • When you’re trying to do something big and novel, sometimes you feel like you want to make better arguments to convince the skeptics.

    • But actually you should just focus on making it real.

  • Performativity ruins the value of aggregate signals.

    • If the signal is authentic, if it’s just for you, then when signals are summarized at the level of the collective, the noise of each interaction drops out and the distilled signal is all that remains.

    • One of the reasons the querystream is such a powerful ranking force for search engines is that each user’s querystream is private.

      • The only thing they’re wasting if they do a weird query is their own time.

    • Transactions in an economy actually have consequences, so they are an authentic signal, even if they are (somewhat) public.

    • Contrast that with social media likes, which are free to give, are mostly public, and have no direct consequences for the giver.

      • You can still get some emergent signals, but they are pulled away from a “ground truth” of quality and can get warped by political dynamics.

    • If there's no cost to them doing it, then the signal is superficial and performative.

      • Meaning comes from cost.

      • For the signals to be meaningful, and thus aggregate to a deeply true signal, they must have had some cost for the user.

  • A recurring pattern of extremely productive, high-trust environments: an all powerful dictator at the top who rarely wields their power.

    • An “alpha,” a “Zeus,” a “Benevolent Dictator For Life.”

    • One person being able to act like Zeus and repel invaders is useful to act decisively in emergencies... and is also a corrupting power.

    • Everyone can rest easy knowing there is a single person who everyone can trust to referee.

    • As long as there is a right to fork the community, and it is credible to execute it, then it balances the power dynamics.

      • This pattern works best for virtual communities and ones that are optional.

      • One reason dictators don’t work at the level of nation-states is because the right to fork isn’t credible.

      • Civil wars are not allowed (obviously).

      • A large swath of the community picking up and moving somewhere else is not credible; way too much friction and hard to coordinate.

  • You can't have a truly collaborative debate when one participant can fire the other.

  • A semi-permeable membrane is a key component for coherent emergent systems.

    • You need to get the concentration right.

    • Not necessarily high, but right.

    • You want a system that can gatekeep at the boundary, to only allow certain kinds of things to enter (while also allowing novelty and diversity).

    • And you also want a credible system to zap invaders.

    • This can create an emergent system with radically different characteristics than its surroundings.

    • When you have a positive pressure differential for joining you get people who want to be there: who believe.

  • A few curated favorite epigrams from Alan Perlis.

    • “It is easier to write an incorrect program than understand a correct one.”

    • “A programming language is low level when its programs require attention to the irrelevant.”

    • “Get into a rut early: Do the same processes the same way. Accumulate idioms. Standardize.”

    • “If you have a procedure with 10 parameters, you probably missed some.”

    • “Optimization hinders evolution.”

    • “One can only display complex information in the mind. Like seeing, movement or flow or alteration of view is more important than the static picture, no matter how lovely.”

    • “Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.”

    • “One can't proceed from the informal to the formal by formal means.”

    • “The proof of a system's value is its existence.”

    • “You can't communicate complexity, only an awareness of it.”

    • “Programming is an unnatural act.”

  • Computation is magic.

    • Humans reason about categories, automatically.

    • Magic is "something that defies its category"

    • Computation defies categories, it is open-ended, which means it's magic.

  • Russell Conjugation: "I am steadfast, you are stubborn, he is an ox."

    • Also called emotive conjugation.

  • If you’re just a cog in the machine you don’t think about the system.

    • There’s no way for you to change the system so there’s no point in thinking about it.

    • But in those situations, there will be a lot of low-hanging fruit for someone who can think about and modify the system.

  • In times of complexity, everyone tends to come to the table with their own definitive answer.

    • They then try to have the answers compete in the battleground of ideas.

    • But the way to transcend it is to leave behind solution thinking and instead embrace problem thinking.

    • Admitting you don’t have the answers is the first step to be able to resonate with the answer as it emerges.

  • "Don't seek answers that you cannot live."

  • Being able to see the third order effects and also to execute through ambiguity is powerful.

    • It's easy to have either; it's transcendent to have both.

  • Humans aren't interested in truth, they're interested in the elimination of uncertainty.

    • Those are often mostly aligned, but in the extremes you can see that they are disjoint.

  • Optimizing for efficiency is horizontal. Adapting and transcending is vertical.

    • Not, "how can you do what you were doing before, faster" but "how can you do new things you never even dreamed were possible before?"

  • I wonder if only people with Intuitive personalities tend to be able to see emergence.

    • People with Sensing personalities can only see the concrete parts, not the emergent whole.

  • Counterintuitively, I’ve found having a kid helped unlock game-changing work results.

    • Before having a kid, work feels like everything.

    • It feels existentially important, like failure would be death.

    • After having a kid, being a good parent is self-evidently more important.

    • So work feels less existential, so there's less tail risk to you, which makes it easier to take risks, which makes it easier to optimize for serendipity, which leads to better work results.

  • You'll do your best work when you're doing work that is authentic to you.

  • In Buddhism one of the precepts is to choose the right livelihood.

    • Riffing on that: pick work that is directly in sync with your true values.

    • That’s work that will be resonant.

  • Imagine what the signpost for a major possible shift will look like ahead of time.

    • So when it happens you can notice it and it's not just lost amongst the noise.

    • If you’re looking closely, the seeds of the future are visible today.

    • A prepared mind will be able to see them.

  • I’m excited for this cozy game whose announcement trailer just dropped: Beastro

    • My friend is the art director.

    • I love the whimsical, cozy style.

    • The idea of playing as the chef who feeds the warriors that protect the village is so Radagast-y. 

  • Some riffs on the YouTube video: "Why Seedless Fruit Is a Disaster Waiting To Happen"

    • “Maintenance trap” refers to a thing that will evaporate without constant human attention.

      • All it takes is one mistake and it’s gone forever.

    • Seedless fruits are monocultures and thus more prone to catastrophic failure.

    • Variance and noise create a swamp that doesn’t allow one invader to sweep through.

  • Missionary: care more about the second order impacts than the first.

    • Mercenary: care more about the first order impacts than the second.

    • Mercenary is more tightly aligned with Saruman.

    • Missionary is more tightly aligned with Radagast.

  • The river is orders of magnitude more important than any molecule of water in it.

  • A quote from Dorothy Day:

    • "Love casts out fear, but we have to get over the fear in order to get close enough to love them."

  • Aish’s clips from Charlie Chaplin’s speech in The Great Dictator:

    • “To those who can hear me, I say - do not despair.

    • The misery that is now upon us is but the passing of greed - the bitterness of men who fear the way of human progress.

    • The hate of men will pass, and dictators die, and the power they took from the people will return to the people.

    • And so as long as men die, liberty will never perish…”

    • “Don’t give yourselves to these unnatural men - machine men with machine minds and machine hearts.

    • You are not machines!

    • You are not cattle!

    • You are men!”

    • “You the people have the power to make this life free and beautiful.

    • To make this life a wonderful adventure.”

    • “Let us use that power!”

  • Beautiful words from the conclusion of a manuscript I’m reviewing:

    • "These words died the moment I set them down on the page.

    • They are only born when you read them.

    • They live only as you metabolize them.

    • You get the final say."
