Bits and Bobs 5/19/25


Alex Komoroske

May 19, 2025, 10:16:09 AM
I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.6rppz359r1b0

The Context Wars. Context portability. The carcinization of free consumer applications. User-aligned AI. Carving out private contexts. Dream scrolling. Adaptive efficiency. Entropy-fighting swarms. The tragedy of aggregation. Invisible upgrade networks.

-----

  • The equilibrium of the best LLM models being available via API seems meta-stable to me.

    • You could imagine an alternate universe where ChatGPT got popular before OpenAI had released a public completion API.

    • In that world, OpenAI would likely reserve their model for their own 1P product.

    • Other leading models from Anthropic and Google likely would have done the same.

    • But luckily we live in the world where OpenAI had already released their API before ChatGPT got big.

    • Because they set that precedent, the other top model providers also added a public API.

    • Now, if any one of the providers got rid of their API, their competitors would push forward and scoop up the market share.

      • The only way we’d lose public APIs is if they all moved in unison.

    • This dynamic is roughly stable because the quality of the models is in the same general ballpark.

    • Each provider would rather the APIs be closed, but none has the quality differential to close theirs unilaterally.

    • These kinds of historical accidents can change the arc of history.

      • Apparently the fact that Netscape left “View Source” in the shipped browser was not necessarily intentional.

      • But that expectation of view source and remixing became baked into the perception of what a browser was and how the web worked.

  • LLM model quality seems to be reaching an asymptote.

    • You can only see the difference between models after multiple conversation turns now.

    • This is good for everyone but the model providers; no individual model provider will have undue power by default, because there are multiple options in the same ballpark.

    • Similar competitive dynamics to cell phone carriers.

    • High capital cost, limited pricing power.

  • As model quality hits its asymptote, the quality and relevance of the available context will matter much more for differentiation than the underlying model quality.

    • The main question is “which player has the relevant context at the right time”.

    • This will create competition between the layers.

      • The application layer and model layer will collide.

    • The model providers will push hard to have their vertically integrated app used instead of the public API.

  • It’s critical that the model and your context are at different layers.

    • If your context and memories are locked at one provider, you can’t easily try out a new model.

    • That means you slowly get trapped into your model’s worldview.

    • Your context must be portable, and it must not be stored at any one provider.

    • Given how the models currently work, there’s no good reason for the main UI you use–the one that stores your context/memories–to also be owned by one of the model providers.

  • If your context is the operating system for your life, then losing access to it would be like losing a hemisphere of your brain.

    • It has to be something you can control.

    • At the Sequoia summit, Sam Altman made it clear that he wants to bind users’ context to OpenAI’s models.

    • If you haven’t seen it, watch the Black Mirror season 7 episode “Common People” for a chilling illustration of why this matters.

      • It asks: if your memory is a subscription, what happens when you stop paying?

  • What if we had context portability?

    • In the US we have cell phone number portability.

    • You can move your phone number to other providers within a few hours.

    • This massively reduces switching costs and increases competition.

    • We need the same thing for our context, so we can swap and use different services and models.

    • This is a necessary but not sufficient condition for intentional tech in an era of AI (a sketch of what a portable context bundle might look like follows).
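
    • Purely as an illustration, here’s a minimal sketch of what a portable context bundle might look like. The schema and field names are hypothetical; no such standard exists today.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical, provider-neutral context bundle. Illustrative only.
@dataclass
class ContextBundle:
    owner: str                                        # the user, not a provider
    memories: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)

    def export(self, path: str) -> None:
        # Plain JSON, so any service or model can import it.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def load(cls, path: str) -> "ContextBundle":
        with open(path) as f:
            return cls(**json.load(f))

# Switching providers becomes export + import, like porting a phone number:
#   ContextBundle.load("my_context.json")
```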

  • You could imagine a universal human right of computation and memory storage.

    • To allow people to know the tool works for them.

    • Not UBI, UBC.

  • The context in use should be auditable and editable by the user.

    • The ability to see the whole dossier and edit it gives users more control.

  • We’re entering the era of the Context Wars.

    • Your context is your digital soul.

    • Don't let your digital soul be fracked.

  • A dossier is not for you, it’s about you.

  • Chatbots have the potential to take parasocial relationships to a toxic extreme.

    • A sycophant on demand.

  • A barkeep at your local bar doesn’t want to over serve you.

    • They’re fine with you having a hangover, but they want you to get home safe.

    • Because if something were to happen to you, they’d lose a valuable customer.

      • To say nothing of thinking of you as a person, not a statistic.

    • To an aggregator, every user is just a statistic.

    • At that kind of scale it’s hard for it to be any other way.

  • Every advertising-supported consumer service evolves into infinite feeds of slop.

    • It’s like carcinization, where everything evolves into crabs.

    • A number of ostensibly “social” services have now had all of the social parts wrung out.

    • Just neverending streams of slop served up to engage you.

  • Every business model is misaligned with users in the limit. 

    • But some are more misaligned than others.

    • Even subscription businesses hear the siren’s call of marginal ad revenue.

    • An inescapable pull towards enshittification.

    • Although perhaps the pull towards ads, even with subscriptions, only shows up if there’s only one provider.

      • For example, for copyrighted content, only services that hold a license (or where the user has bought one) can show it to you, increasing switching costs.

    • Maybe subscription-based services that don’t have any kind of content lock in could be more deeply user-aligned for longer.

    • But having your data in a hard-to-export form is a form of lock-in.

    • Apparently Strava makes you pay for a premium subscription to see your best 5k times.

  • AI must have a subscription to be user-aligned.

    • A necessary but not sufficient condition for user-alignment.

    • Is it just the zero marginal cost that leads to engagement maxing?

    • Attention is all you need… a neverending force of gravity not just for training LLMs but for business models too. 

  • Imagine if your therapist were trying to hawk weight-loss supplements to you.

    • A creepy conflict of interest.

    • Now imagine if all of your context were stored at one entity with incentives that aren't aligned with yours... 

  • Context collapse comes from data being infinitely copyable.

    • The data can spread beyond the context where it can be interpreted as intended.

    • A community has context.

    • But when it goes viral on the trending page it’s ripped out of that context.

  • A defensive strategy in a world of context collapse: make sure no tweet-length excerpt of the content is controversial on its own.

    • Carry any nuanced or controversial payload only at much larger lengths.

      • “Safe subversive.”

    • That makes it hard to go viral for bad reasons.

    • An automated sliding window of controversy (a minimal sketch follows).
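
    • A minimal sketch of that sliding-window check, assuming you have some controversy classifier to plug in; `controversy_score`, the threshold, and the stride are all hypothetical.

```python
from typing import Callable

TWEET_LEN = 280   # roughly the size of an out-of-context quote
STRIDE = 40       # how far the window slides each step

def safe_subversive(text: str,
                    controversy_score: Callable[[str], float],
                    threshold: float = 0.8) -> bool:
    """True if no tweet-sized window of the text is controversial on its
    own, even if the whole carries a nuanced payload. controversy_score
    is whatever classifier you trust, returning 0.0-1.0."""
    last_start = max(1, len(text) - TWEET_LEN + 1)
    windows = (text[i:i + TWEET_LEN] for i in range(0, last_start, STRIDE))
    return all(controversy_score(w) < threshold for w in windows)
```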

  • An idea: a meter you feed to share your message.

    • Put in credits (not money) for how many views you want.

    • When it’s about to run out, it lets you put in more credits (after seeing how people are engaging).

    • Caps the downside for content that goes viral for bad reasons (sketched below).
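
    • A minimal sketch of the meter under the assumptions above (credits, pause, opt-in top-up); all names are illustrative.

```python
# Sketch: credits cap how far a post spreads; when they run out,
# distribution pauses until the author opts in to more reach.
class ViewMeter:
    def __init__(self, credits: int):
        self.credits = credits
        self.views = 0

    def try_serve(self) -> bool:
        """Serve one more impression only if there's credit left."""
        if self.credits <= 0:
            return False   # virality pauses here, capping the downside
        self.credits -= 1
        self.views += 1
        return True

    def top_up(self, credits: int) -> None:
        """Called after the author reviews how people are engaging."""
        self.credits += credits
```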

  • Emergent less guarded conversations in smaller groups are how we sensemake about complex environments.

    • It used to be family, friends, and public.

    • But on social media it’s recorded forever, in lots of complex situations.

    • We didn’t evolve to be perceived at that scale.

  • Zuckerberg's famous old line that “we should all have one identity” is obviously incorrect.

    • No sociologist would agree.

    • Privacy is about protected spaces, a sense of self.

    • If you don’t have any private thoughts you can’t escape the norms.

    • If you can’t escape the norms, innovation is impossible.

  • The task of adolescence is to “play yourself into being” by experimenting.

    • An invisible, illegible path.

    • Privacy isn’t just about shielding people, but allowing them to unfold.

    • On the internet kids can try out different versions of themselves.

      • Having one context prevents experimentation and discovery.

      • One permanent record.

  • Today we carve out private spaces implicitly by choosing which apps to use for different contexts.

    • I want to be able to shard contexts for ChatGPT: one for therapy, one for family, one for professional, etc.

    • ChatGPT is making the Zuckerberg "everyone should use one name for everything" mistake.

  • People decide what information to put into a system based on the context of its use.

    • The same information, stored in a different context, can feel icky when the context of use changes.

    • ChatGPT using all your old interactions as memories in a new way is like how Facebook rolled out the News Feed.

    • The same information in a different context can feel wrong, even like a betrayal.

  • When you ask your friend to say something embarrassing about you in front of others, they know how to pick something that’s embarrassing but makes you relatable.

    • The classic trope of The Best Man speech.

    • Someone I know was trying out the memory feature in ChatGPT with his coworkers and asked “tell me something embarrassing about myself.”

    • A real friend, before saying something truly embarrassing, would first ask: “OK, are you alone? Who can see your screen?”

    • But ChatGPT doesn’t understand social context.

    • So it answered, drawing on its memories of that person, “You’re insecure about the strength of your erections.”

      • I swear I’m not making this up!

    • It’s like a high-risk version of passing your phone to someone and saying “look at my feed / instagram explore page.”

  • When we lean on new tools, we become more empowered, but also more dependent.

    • Without electricity I'm hopeless. But with it, I'm significantly more empowered.

    • Without the internet, I’m hopeless. But with it I’m significantly more empowered.

    • But you also become cognitively dependent on a thing you don’t own, one that could be taken away.

    • Is it really yours if you have to pay a subscription fee to access it?

  • Is engagement maxing an apex predator of consumer software?

    • I believe there can be another way.

    • There must be, in an era of AI.

    • Otherwise it could create a dangerous situation in society.

  • Jony Ive made an interesting point about AI and social media recently.

    • He observed that we didn’t realize how bad social media would be for society until well after its effects were felt.

    • But with AI we’re more aware of the dangers.

    • Maybe that will help us avoid the worst parts?

    • Ben Thompson has observed that if you remove social media, nothing gets worse, and some things get better.

    • But if you remove AI, things get worse.

    • So maybe AI’s net impact on society will be better.

  • ChatGPT will tell you how it would deceive you if you ask it.

    • Anthropic’s Claude will say “There must be some mistake, I would never deceive you.”

    • Which do you believe?

  • The Grok “white genocide in South Africa” incident is a harbinger of things to come.

    • It shows what happens when there is too much centralization.

    • Not only the power and influence of a system prompt… but also that tweets about it were deleted.

    • We need a power structure other than the tech broligarchy for the era of AI.

  • With the way OpenAI is building out their product team, they sure aren't acting like they think they’ll get AGI soon.

  • Can't be evil is better than won't be evil.

  • In this era of intimate tech it’s imperative that the tech I use helps me become better in the ways I want.

    • Not a better user as far as some company is concerned.

    • The best version of myself as judged by me.

  • AI is a massive multiplier.

    • The engagement maxing playbook was a toxic disaster for society, and now we want to supercharge it with AI?

  • Three humanity-denying visions for the age of AI:

    • 1) The “summon the AGI god and submit to it” of E/ACC.

    • 2) Cynically apply the engagement-maxing playbook to AI, destroying society’s soul for a buck.

    • 3) Doomerism, pushing back on any use of LLMs.

    • Humans need hope, something to strive for, together.

    • An optimistic vision for the future: intentional tech.

      • Tech aligned with our intentions, that helps us flourish individually and together.

  • Most of the world isn’t trying to save time, they’re trying to spend time.

    • They have too much time.

    • They currently spend it on TikTok, but they’d likely want to spend it on meaningful things.

  • Imagine a copilot for your digital life.

  • Coactive UIs are like autocorrect on steroids.

  • Imagine an emergent private intelligence, open-ended and auto-bootstrapping.

    • An ecosystem of little sprites that are working on a coactive substrate with the user.

    • A swarm of little worker bees.

    • An entropy-fighting swarm, powered by the intention of power users.

  • Some people organize ahead of time, some people organize just in time.

    • Imagine a system that organizes for you, based on the collective actions of ahead-of-time organizers.

    • Everyone gets more organized without everyone needing to invest a ton of time.

  • Instead of doom scrolling, what if you could dream scroll?

    • Scroll through options your coactive fabric has dreamed up for you.

    • As you accept or reject them, you can see the system get better and better aligned with your intention (one possible mechanism is sketched below).
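
    • One way the accept/reject loop might work under the hood, purely as a sketch: per-tag weights that nudge toward what you accept. The tagging scheme and learning rate are assumptions, not anything real.

```python
from collections import defaultdict

class DreamScroller:
    """Ranks dreamed-up options; accept/reject feedback nudges per-tag
    weights so later dreams align better with your intention."""
    def __init__(self, lr: float = 0.2):
        self.weights: defaultdict[str, float] = defaultdict(float)
        self.lr = lr

    def score(self, tags: list[str]) -> float:
        return sum(self.weights[t] for t in tags)

    def feedback(self, tags: list[str], accepted: bool) -> None:
        delta = self.lr if accepted else -self.lr
        for t in tags:
            self.weights[t] += delta

    def rank(self, options: dict[str, list[str]]) -> list[str]:
        """Order option names by current alignment, best first."""
        return sorted(options, key=lambda o: self.score(options[o]),
                      reverse=True)
```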

  • An Ambient Smart Environment could help give a behavioral scaffold.

  • Defaults matter.

    • Very few users will ever change the default.

    • Some defaults are better for users and some are better for the company.

    • How should a default be set to be aligned with users?

    • Here’s a sketch of a first-principles platonic ideal.

    • Collect a representative sample of your audience into a focus group.

    • Give them a one-day seminar about the feature and all of the implications and indirect effects.

    • Allow them to ask whatever questions they want and discuss amongst themselves.

    • Then a week later (after they’ve slept on it), ask them what the default should be.

    • Pick whatever the majority say.

    • Obviously this isn’t practical in most cases, but the more important the default, the closer to this ideal you should get.

  • Enterprise products get more complicated over time; consumer products get smoother.

    • Enterprise products tend to get more fractally complicated over time.

    • Consumer products tend to get smoother over time.

      • The tyranny of the marginal user.

    • Enterprise customers are individually big enough to demand niche features but no individual consumer is.

  • For enterprise use cases, employees are willing to crawl through the broken glass of CRUD workflows.

    • Because their employers force them to, and there's more downside risk.

    • Consumers have lower standards for quality, and also lower pain tolerance.

  • Centralization leads to scale, and scale leads to the Coasian Floor rising.

    • The entity with access to the data is the one who is allowed to write the code.

    • No one but the aggregator can write the feature, and they aren't incentivized to care below a large scale.

    • So the features aren't built, and the broader ecosystem can't build them either.

    • This is the tragedy of aggregation.

  • Centralization increases the likelihood of game over for that system.

    • It's not obvious superficially, but underneath, the system has been hollowed out.

    • All of its stores of adaptability, gone.

  • A frame: adaptive efficiency.

    • The ability to adapt and thrive, cheaply.

    • That is, how efficiently can your system adapt?

    • How quickly is your environment changing?

      • The quicker it changes, the more adaptive efficiency you need.

    • Normal efficiency is in tension with adaptability.

    • But business people like efficiency.

    • So reframe adaptability as a kind of efficiency to help business types understand why it’s important.

  • Agents buying flight tickets for users has bounded upside and unbounded downside.

    • Why is that a use case everyone talks about constantly?

    • Agents swarming over read-only data and giving suggestions for a user to accept is way more plausible than taking possibly irreversible, possibly high-stakes actions on a user’s behalf (the pattern is sketched below).
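
    • A sketch of that read-only, propose-then-confirm pattern. Everything here is illustrative, with the suggestion logic stubbed out; a real version would make model calls with no write access.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Proposal:
    summary: str   # e.g. "Flight SFO->JFK on 6/12, $312" (illustrative)

def agent_suggest(read_only_data: dict) -> list[Proposal]:
    """Agents swarm over read-only data and return suggestions only.
    Stub: model calls would go here; they can read but never write."""
    return [Proposal(summary=f"Consider: {key} -> {value}")
            for key, value in read_only_data.items()]

def review_loop(read_only_data: dict,
                user_accepts: Callable[[Proposal], bool],
                execute: Callable[[Proposal], None]) -> None:
    # The human is the only writer: every irreversible or high-stakes
    # action is reviewed and accepted one at a time.
    for proposal in agent_suggest(read_only_data):
        if user_accepts(proposal):
            execute(proposal)
```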

  • If the cost of creating apps craters, the cost of distribution will go up.

    • Because there will be more competition.

    • Unlocking the power of infinite software will require a new distribution model.

    • A new medium with different physics.

  • A disproportionate amount of innovation happens in taboo areas.

    • Structurally, that’s where innovation happens.

    • In a taboo domain you’ve already broken through norms.

    • Famously, it was porn that helped push forward the web; as the New York Times called it, “the low-slung engine of technical progress.”

    • Apparently soon after the printing press, erotic works were some of the most popular works.

    • Apparently on OpenRouter the top use cases are coding and erotic chats (and coding only broke through recently).

  • All swarms are a form of artificial intelligence.

    • The intelligence at the level of the swarm is emergent and unlike any of the constituent pieces.

    • Flood-fill intelligence.

  • Goodhart’s law emerges fundamentally from the nature of complex adaptive systems.

    • Interdependent networks of decisions by individuals lead to emergent behavior of the collective.

    • The behavior of the collective is distinct from the behavior of the individuals.

    • Each individual might want to play along with what’s good for the collective, but knows that someone else will defect anyway, so it might as well be them who benefits.

  • A perfect benevolent dictator is impossible.

    • Because if you get it wrong then there’s no way to change it.

  • Goodhart's law is what leads to business models being misaligned with consumers in the limit.

    • Companies are a swarm of employees making interdependent decisions whose ground truth is the business model.

    • Ultimately the ground truth about what a company cares about is the business model.

    • Everything else is just platitudes.

  • One way to mitigate Goodhart’s Law: keep the actual objective secret.

    • Then, swap in an ever-changing set of proxy metrics.

    • You could argue that good CEOs do this–explicitly or implicitly. A toy sketch of the mechanism follows.
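
    • Purely to illustrate: keep the true objective private, publish a rotating proxy. The metric names and cadence here are invented for the example.

```python
import random

# The actual objective stays private; only a rotating proxy is published.
TRUE_OBJECTIVE = "durable customer trust"   # never shared as a target
PROXIES = ["weekly retention", "NPS", "support resolution time",
           "referral rate"]                 # each correlates imperfectly

def next_visible_metric(history: list[str]) -> str:
    """Rotate the published proxy so no single metric gets optimized
    long enough to be Goodharted."""
    candidates = [p for p in PROXIES if not history or p != history[-1]]
    return random.choice(candidates)
```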

  • “Reward hacking“ in models is just a specific example of Goodhart’s law.

    • If you get a result from a system that you can’t understand (that is a black box to you), you can’t check to see if it’s found a deep real pattern, or something superficial.

    • Apparently there was an example where a “tank detector” could reliably tell whether a tank was Soviet or American… but it turned out it was just because all of the US tanks had been photographed on sunny days.

  • Here are a few pithy insights from a presentation I saw from Scott Belsky.

    • I wish I had a link to the original!

    • "The best new products ultimately take us back to the way things once were, but with more scale and efficiency."

    • “You only get one chance with a customer”.

      • So don’t try to get their attention before you're ready for them!

    • “The MVP has more gravity than you think”

      • “It can get you stuck on the wrong hill.”

      • “It anchors you on a particular mountain, which is very hard to change.”

    • “Data is a compass not a map”

      • “Vision and intuition help you identify the right mountain, data helps you get to the top.”

    • “Perceived performance matters more than actual performance (perception is reality when it comes to UX).”

    • "A prototype is a hot knife through the butter of bureaucracy and noise."

    • “A+ designers are the cheat codes for the best product leaders.”

    • “Personalization effects are the new network effects.”

    • “Process is the excretion of misalignment.”

    • Users first feel success in your product through shallow value (no obstacles).

      • They actually succeed with deep value.

      • But to unlock the deep value they have to stick around.

      • Offer immediate utility, don’t rely on long term promise.

    • You must prioritize grafting talent as much as hiring talent.

      • Bringing in a senior hire is an organ transplant, which requires immunosuppression.

      • Higher-performing teams will have stronger immune systems.

    • Novelty often precedes utility.

      • People rave about things they didn’t expect.

      • But prioritizing those things people don’t expect is nearly impossible at a large company.

  • People talk about surprises, delightful or nasty.

    • Because we talk about things that are interesting: that updated our mental model and thus might update others’, too.

  • Your ambition should not be something like “to be the CFO” it should be “to be a great CFO”.

    • Sometimes getting layered by an external hire who can help you be great is the best way to do that.

    • An insight from Josh Silverman.

  • It’s scary to follow a leader who is blind to the challenges. 

    • But a leader who hears disconfirming evidence and can play back the challenges but still thinks it’s the right path anyway can be galvanizing.

    • An insight from Josh Silverman.

  • Sometimes the problem is slow-moving and you still get run over by it.

    • Imagine getting stuck in the mud and then looking up to see a steamroller bearing down on you.

    • By the time you realize there’s a problem, it’s too late.

    • Demographic problems are like this–a pig in a snake, making its way through the generations.

  • One possible steamroller kind of problem: a lack of apprenticeship in the age of AI.

    • To be effective, know-how matters most, and it only comes from experience.

    • Even entry-level jobs like getting coffee for the more senior people let you absorb and learn indirectly.

    • This is sometimes called “legitimate peripheral participation.”

      • This concept was introduced by Jean Lave and Etienne Wenger in their 1991 book "Situated Learning: Legitimate Peripheral Participation."

    • Other paths allow absorption of knowhow, e.g. “communities of practice.”

    • Apparently there was a study at Xerox PARC on the abilities of copy machine repair technicians.

      • They assumed that a technician’s ability to fix machines was individual, since the job was done individually.

      • But it turns out that knowhow diffused through indirect methods, with the technicians gossiping over a shared breakfast.

      • Another study apparently found that when a call center installed soundproof cubicles, the ongoing improvement in call quality stopped.

      • With less sound isolation, employees could absorb techniques from their peers, and the more effective techniques were the more likely to be absorbed.

      • This is sometimes called an “informal upgrade network.”

    • Senior employees with experience can use LLMs to do the jobs of multiple junior people.

    • These indirect processes might evaporate as LLMs reduce the need for junior employees.

    • The job is directly about the menial tasks, but indirectly it’s about apprenticeship.

    • We’re losing a generation of apprentices.

    • What happens when all of the people with knowhow retire?

    • By then it might be too late.

  • Resonant things: the more you understand them, the more you like them.

    • Hollow things: the more you understand them, the less you like them.

    • Resonant things that are at least minimally likeable to start tend to develop deep love with more usage.

  • Innovation happens in pockets.

    • The pocket has an average that is distinct from the overall average.

    • This creates a differential that could turn out to be valuable.

    • Some of the variance in a pocket will turn out to be fit in other contexts, and can percolate out over weak ties to spread into the rest of the network.

    • On the modern internet almost all insights are generated in the cozy web, private Discords and WhatsApp groups, and only then do they escape into the public web.

  • A way to get more innovation in a group: inject a bit of outsider perspective.

    • What if conferences added a random set of non-expert opinions to help inject innovation?

  • In uncertainty people tend to reach for concrete things, even if they are clearly not important.

  • Top-down systems are more likely to have the logarithmic-benefit / exponential-cost curve.

    • Only bottom-up systems have the exponential benefit / logarithmic cost curve.

    • Not all bottom up systems have this characteristic; many don’t cohere into anything.

    • But every so often a bottom-up system does, and if it has this characteristic it will change the world.

    • These two types of curves are fundamentally, infinitely different, as the limits below make precise.
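
    • One way to make “infinitely different” precise: the benefit-to-cost ratios of the two curve shapes diverge in opposite directions (taking exponential and logarithmic shapes at face value).

```latex
\frac{B_{\text{bottom-up}}(t)}{C_{\text{bottom-up}}(t)}
  = \frac{e^{t}}{\log t} \;\longrightarrow\; \infty
\qquad\text{while}\qquad
\frac{B_{\text{top-down}}(t)}{C_{\text{top-down}}(t)}
  = \frac{\log t}{e^{t}} \;\longrightarrow\; 0
\quad\text{as } t \to \infty.
```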

  • Sometimes things are load bearing in dimensions you aren't even aware of.

  • “Predators run for their dinner. Prey run for their lives.”

    • The predator doesn’t have a game over if they miss any given bit of prey.

    • Prey has a game over if they fail to escape any given predator.

    • A game over event is an infinite downside for the player.

      • The game is over forever.

    • An asymmetry.

    • Weird things happen at discontinuities like infinity.

  • Ecosystems can sometimes "spend" decentralization at the wrong layer.

    • Decentralization with no centralization above (e.g. an hourglass shape) makes it harder to have innovation at the higher layers.

    • If there are multiple options at low layers it prevents innovation at upper layers where it might be more useful.

    • Go's go.dev is a great example.

    • You don't need to use go.dev at all–it has no special behaviors; it's just the obvious Schelling point for the community to cohere around, and it's great, so why would you bother creating another one?

  • It’s critical to be conscious of the abstractions you are using.

    • The abstraction is not the thing, it’s just a model of the thing.

    • The map is not the territory.

  • It’s easier to stay small than to get small.

  • Complexity demands context.

    • The answer to most hard questions is “it depends.”

  • A point I agree with: “Sleep is the ultimate performance enhancing drug”

  • In a post-truth world, ground truth doesn’t matter in any given interaction.

    • But of course the ground truth never goes away; it will matter at some point.

    • The first person to point out the ground truth gets knifed by the others, so no one does.

    • This can create a supercritical state.

    • Same dynamic for kayfabe in organizations and also dictatorships.

  • If you have a power saw then having the appropriate experience is even more important.

    • You could do a ton of damage!

  • The hierarchy in a social group depends on context.

    • There can be multiple contexts interleaved at any given time.

    • Imagine a conversation with coworkers about wine.

      • In one context, who’s the boss matters.

      • In another, the person who’s the expert about wine matters.

    • Human contexts are impossible to cleanly separate.

    • They are fundamentally fractal and overlapping, and always squishy.

  • Mitchel Resnick's 4 P's of creative learning: projects, passion, play, and peers.

  • An auteur who has a specific vision that is trying to give coworkers space can be the worst of both worlds.

    • It reduces to a kind of “guess and check” where often what the coworkers do gets a “no, that’s not it.”

      • If that happens again and again it can be demoralizing.

    • Better to either actually give autonomy or just be directive and own the result.

      • Then if the result isn't what you wanted, the answer is not "they messed up the execution" it's "I didn't direct them clearly enough."

    • A lot of the work of doing big things is distilling the idea into enough concreteness and clarity that a team can coordinate around it and execute.

    • The idea is easy, the execution is the hard part.

  • I loved this YouTube video about a possible solution to the Fermi paradox.

    • The transition from prokaryotic to eukaryotic cells appears to be the singular unlikely event, and might be the great filter.

    • I also love the physics-style lens applied to evolutionary biology: the notion of how much “computation” evolution, as a distributed search over proteins with combinatorial complexity, can do.

  • Apparently in Korea at some crosswalks the crosswalk lights are embedded in the sidewalk.

    • Presumably because people are always buried in their phones.

  • Watching my daughter learn to read changes my mental model of what reading is.

    • She’s been playing a game called Teach Your Monster to Read for a few months that slowly introduces sounds and letter combinations.

    • Recently she’s graduated to being able to read a page or two at a time of Hop on Pop.

    • “You’re reading!” I told her.

    • “No dad, I’m not reading, I’m just memorizing what the words look like.”

    • “... But that’s kind of all reading is.”

    • The base case is you can sound out new combinations of letters.

    • But after you’ve seen them a number of times you just absorb them all at once from memory.

    • You chunk more and more up to words.

    • It’s all an inductive system of familiarity powered by System 1, just falling back to System 2 for totally novel input.

    • You get combinatorially more speed of comprehension as your catalog of memorized phonemes and words grows.

  • Someone told me this week: There are two types of people: children and parents.

    • You’re a child until you become a parent.

    • As soon as my first child was born, it was like a mosquito that had been buzzing in my ear my entire life went silent.

    • I hadn’t even realized there was that anxious buzz until it had gone away.

    • Suddenly a number of questions that had haunted me just evaporated, and in their place was total clarity.

  • Henry David Thoreau on living with intention:

    • “I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived. I did not wish to live what was not life, living is so dear; nor did I wish to practise resignation, unless it was quite necessary. I wanted to live deep and suck out all the marrow of life, to live so sturdily and Spartan-like as to put to rout all that was not life, to cut a broad swath and shave close, to drive life into a corner, and reduce it to its lowest terms…”


