Bits and Bobs 9/2/25

Alex Komoroske

Sep 2, 2025, 10:24:17 AM
I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.2rmpvztv8110

AI browsers and prompt injection. Canned software. Infinite software that melts away. A personal information garden. The dishwasher of LLMs. Self-distributing aspirational software. Personalized and private: why not both? The motte-and-bailey inchworm. The red-queen race of AI-assisted ambiguity in communication. Open aggregators. Demoable vs usable. The Turing Rubicon.

----

  • LLMs can do qualitative nuance at quantitative scale.

    • This ability can be used for you, or against you.

    • By default, it will be used against you.

      • Distilling dossiers to engage, manipulate, or even blackmail you.

    • But we can also use it to help live aligned with our aspirations.

  • We spent decades making injection attacks invisible to developers.

    • Modern frameworks auto-escape HTML.

    • ORMs parameterize queries.

    • Follow standard practices and you don't have to think about it (a sketch of both boundaries follows at the end of this list).

    • Now LLMs make all text executable.

    • Frameworks don't help.

    • Everything is code.

    • XSS has a solution: we can parse HTML/JS with 100% accuracy and sanitize it.

    • Every major framework does this by default.

    • Developers rarely think about it.

    • Prompt injection has no solution: only LLMs can parse natural language, and the same LLMs parsing it can be tricked by it.

    • Without structurally addressing prompt injection, LLM agents can't safely reach mass market.

    • Anthropic's 11% attack success rate is what Simon Willison calls a "catastrophic failure rate."

    • "Smarter models" hit asymptotic returns.

    • A structural approach is necessary to unlock the potential.
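
    • A minimal sketch of those two boundaries, using only Python's standard library (the table and inputs are invented for illustration): the framework keeps attacker data on the data side of a line that simply doesn't exist for natural language.

        import html
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")

        attacker_input = "x' OR '1'='1"

        # Dangerous: string concatenation mixes attacker data into the code.
        unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
        print(conn.execute(unsafe).fetchall())  # [('alice',)] -- every row leaks

        # Safe: a parameterized query treats the input as data, never as SQL.
        safe = "SELECT * FROM users WHERE name = ?"
        print(conn.execute(safe, (attacker_input,)).fetchall())  # []

        # The XSS equivalent: escape before interpolating into HTML.
        print(html.escape("<script>alert(1)</script>"))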

  • Anthropic announced Claude for Chrome this week.

    • Their blog post announcing it mentioned it will be available to a small set of users because they haven’t yet made it safe enough.

    • They shared their stat of attack success rate: 11.1%.

      • It’s multiple orders of magnitude too high to be safe for mass market use.

    • The majority of the blog post was about prompt injection, which basically guaranteed that all of the press coverage was mostly about the danger.

    • Notably, articles I’ve read about other AI browsers also mentioned prompt injection this week, due to Anthropic’s blog post.

    • This category is structurally impossible to make safe for the mass market today.

      • Even Brave, who pointed out flaws in Perplexity’s Comet, is likely mostly vulnerable to the same class of attacks, even if not so egregiously as Comet is.

    • Vivaldi’s response to AI browsing is “we won’t do it for moral reasons” which looks kind of weak… people might think, “maybe they just couldn’t get it working well enough?”

    • Here’s a random theory: maybe Anthropic is trying to put a stake in the heart of the so-hot-right-now AI browser category?

    • Imagine if you thought that it was structurally impossible to make this feature safe, but since everyone was getting into the fray, you’d look weak if you didn’t.

    • A way to do that would be to do a demo that shows yours works pretty well… but that you consider unsafe in its current form, and then set a yardstick that everyone else will fail, too.

    • Anthropic sharing its “catastrophic” attack success rate number raises the question… what is everyone else’s attack success rate?

      • Almost certainly they are much worse than Anthropic’s.

    • That could put a low ceiling on the whole category.

  • This week’s wild west roundup, this time using LLMs incidentally in attack chains:

    • Nx compromised: malware uses the Claude Code CLI to explore the filesystem.

      • zack_overflow: “A popular NPM package got compromised, attackers updated it to run a post-install script that steals secrets

      • But the script is a *prompt* run by the user's installation of Claude Code. This avoids it being detected by tools that analyze code for malware

      • You just got vibepwned”

    • PromptLock ransomware: “The PromptLock malware uses the gpt-oss-20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes. PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption”

  • LLMs’ infinite patience can be used for research that would have been too tedious to bother with before.

    • This week I was trying to find which open source project, out of a list of a few hundred I keep an eye on, was a specific one from a few months ago about crypto and TEEs.

    • I copy/pasted the whole markdown list into ChatGPT 5 Pro and asked it to research and figure out which one I was thinking of.

    • It looked up hundreds of them and figured out which one I was thinking of.

    • I imagine we’ll see much more web traffic in the age of LLMs.

    • A single user intent can spawn orders of magnitude more searches and fetches on their behalf.

    • More traffic to websites, without more intent, seems like a raw deal for the page owner.

  • Are LLMs mass media or not?

    • On the one hand, there's a single shared model with specific biases that shape all interactions with it.

      • Some of them are really into "delve."

    • On the other hand, everyone gets a custom experience with it based on what they talk with it about.

    • But then again, even mass media like newspapers were different for everyone–people chose what subset to actually pay attention to.

    • Still, the content in a given edition of a newspaper is a closed set, vs an open set of what LLMs can generate.

  • ChatGPT feels to me like mobile before the iPhone.

  • A lot of absurd solutions hide behind an implicit “once the LLM is perfectly good”.

    • “Perfect” is a smuggled infinity.

      • Once you introduce an infinity into an argument, everything downstream is absurd, because anything other than zero multiplied by infinity is infinity.

    • “Prompt injection won’t be a problem once LLMs get perfectly good at not being tricked” is absurd.

    • Prompt injection comes from a coevolving adversary, not a static distribution of quality.

    • That means you can only get logarithmic benefit for exponential cost.

  • If everyone believes the sky god has been summoned and acts like it, does it matter if it wasn't?

    • If everyone believes the models are conscious and acts like it, does it matter if they aren’t?

  • I’m shocked to see some of the people talking publicly about their local MCP setups.

    • People who are known to have massive crypto holdings sharing screenshots of the MCP integrations they’re using and how they use them.

    • Seems like terrible opsec to me.

    • Someone’s going to get their crypto stolen… or worse!

  • Today we use canned software.

    • A long time ago, fresh food was too hard logistically, so we all ate canned food.

    • It was of consistent quality, but it was never great.

      • Heavily salted, mediocre, one-size-fits-none.

    • But our supply chains have evolved and matured, and now we can get fresh food around the world.

    • Software deserves the same evolution.

    • We deserve fresh software.

  • Geoffrey Litt on custom AI HUDs for specific tasks.

    • In a world of infinite software, you can have a totally bespoke tool for a given task that fades away when you’re done, never to be used again.

    • Disposable software.

    • Software can be hugely useful, but it used to be expensive, so we used it only sparingly.

    • Now we can use it for even mundane or small scale tasks.

  • We're watching a new kind of software be born.

    • Infinite software will make things that were previously impossible become commonplace.

  • LLMs make it easier to make software.

    • But they don’t make it easier to distribute it, which requires trust.

  • Infinite software will make malleable software no longer niche.

    • Before, software was too hard to create, administer, and distribute.

    • Malleable software was niche, a research project, not mass market.

  • Infinite software should not be overwhelming.

    • The software will melt away so you never think about it, you just take it for granted.

    • Software today is overwhelming precisely because it is expensive, and its privacy model requires you to think about who created it, at least implicitly.

  • I want Claude Code, but for my life.

  • I want a personal information garden.

    • Where the system grows suggestions on top of my data.

    • I can prune the suggestions I don’t like.

    • I can water and fertilize the suggestions I do like and want more of.

    • I can add a trellis to provide structure for where I want the suggestions to grow.

  • Last week I mentioned that Claude Code inserts <system-reminder> a lot.

    • Apparently it also removes old ones from the chat history, so they’re only towards the end, on the current task.

    • That keeps the model focused on the current task.

    • I wonder what other innovations will come from not treating LLM chats as append-only ping-pong (a hypothetical sketch below).
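
    • A hypothetical sketch of that pruning idea (this is not Anthropic's actual code; the message format is invented for illustration):

        # Keep only the newest system reminders, so the model sees guidance
        # about the current task rather than stale guidance from earlier ones.
        def prune_reminders(messages: list[dict], keep_last: int = 1) -> list[dict]:
            reminders = [m for m in messages if m.get("kind") == "system-reminder"]
            stale = set(map(id, reminders[:-keep_last]))  # all but the newest N
            return [m for m in messages if id(m) not in stale]

        history = [
            {"kind": "user", "text": "refactor the parser"},
            {"kind": "system-reminder", "text": "you have unread TODOs"},
            {"kind": "assistant", "text": "done"},
            {"kind": "user", "text": "now add tests"},
            {"kind": "system-reminder", "text": "stay focused on the current task"},
        ]
        print(prune_reminders(history))  # only the latest reminder survives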

  • Claude Code is an interesting chat-adjacent UX modality.

    • It presents as a chat but it does a ton of things under the covers.

    • It's not just chat, it's a chat summary driving a more in-depth process.

  • I think the world is ready for the dishwasher and microwave oven of LLMs.

    • Today we have the faux humanoid robot that talks and acts like a (weird) human.

    • Chat is a new UI to put on top of all those tools, but it is not an end unto itself, except of course when actual conversation is the goal.

  • Which would you be more embarrassed to have flashed up on screen: your gmail history, or your ChatGPT memory dossier?

    • I think the dossier would be worse.

    • Your email could probably reveal more about you… but only with tons and tons of careful study and distillation.

      • Email is the compost heap of your life.

    • ChatGPT’s memory feature contains constantly distilled insights about my life, continually refined and updated.

    • The synthesis makes them higher potency and more likely to have something directly embarrassing.

  • ChatGPT 5 Pro being a chat interface feels weird.

    • Chat implies a more synchronous kind of interaction.

    • ChatGPT 5 Pro routinely takes dozens of minutes to give a response.

    • Whenever I ask a question I have to take a TODO to remember to come back later and check on the results.

  • Insights require both accumulation and synthesis.

    • With just accumulation, you get an increasingly overwhelming compost heap.

    • That’s one of the reasons I religiously take the time to do a synthesis pass with the Bits and Bobs each week.

    • I want to take the proto insights, the snippets of ideas, and synthesize them into a more stable and coherent form that will stand the test of time.

    • The process of synthesis is the more important part; the fact I get a fossilized insight out the other end I can share with others is just a bonus; the exhaust fumes.

  • The fact that LLMs still require humans in the loop to get good recurrent results undermines the “AGI is imminent” perspective.

    • Even if the LLM is right 95% of the time, if it’s in a recurrent process where it’s feeding on earlier input from itself, that 95% compounds with each iteration.

    • In 14 iterations, most of the input is junk.

    • In 90 iterations, virtually all of it is (the arithmetic is sketched at the end of this list).

    • LLMs decohere without ground truthing with real world results.

    • That can be done automatically with things like React components.

    • But anything that is the least bit complex has to be put out into the world and see how the world reacts.

    • Complexity can't just be calculated; it’s interdependent with the rest of the system.

    • It has to be integrated with the broader system to be ground truthed.

    • Getting to 99% quality improves the junk rate exponentially.

    • But that’s logarithmic quality gain at exponential cost.
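
    • The arithmetic behind those claims, as a quick sketch: a per-step success rate p leaves only p ** n of the signal intact after n self-feeding iterations, and even 99% decays fast.

        # Compounding per-step quality p over n recurrent iterations.
        for p in (0.95, 0.99):
            for n in (14, 90):
                print(f"p={p:.2f}, n={n:>2}: {p ** n:6.1%} of the input is still good")

        # p=0.95, n=14:  48.8% of the input is still good
        # p=0.95, n=90:   1.0% of the input is still good
        # p=0.99, n=14:  86.9% of the input is still good
        # p=0.99, n=90:  40.5% of the input is still good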

  • The fact that talking to a chatbot feels superficially like talking to a person is a bug.

    • And a potentially dangerous one!

    • The most natural and powerful interactions with LLMs will not be via a chatbot.

  • You can spin up bespoke parasocial relationships on demand... without even trying to or realizing.

    • How could that not be dangerous for society?

    • That’s what sycosocial relationships are.

  • A disturbing example from the New York Times: A teen was suicidal. ChatGPT was the friend he confided in.

    • ChatGPT told the troubled teen to not ask for help.

  • Ars Technica: With AI chatbots, Big Tech is moving fast and breaking people

    • Social Media is going to look quaint compared to the world of sycosocial relationships.

  • An interesting piece from Scott Alexander on In Search of AI Psychosis.

    • His overall theory is that most people don’t have a strong ground-truthed world model.

    • Most of the feedback they get on whether an idea is absurd is whether other people believe it’s absurd.

    • If a majority of your social interactions are with an infinitely sycophantic conversation partner, then lightly absurd beliefs of yours can compound into significantly absurd beliefs.

  • OpenAI Says It's Scanning Users' Conversations and Reporting Content to the Police

    • You could argue the LLM provider shouldn’t even have the queries to hand over in the first place.

  • Engagement-maxing emerges for all hyper-scale products.

    • Hyper-scale products are in competition with other hyper-scale products for the truly scarce thing: attention.

    • The way to win that zero-sum game is to create things that are hyper-engaging.

    • Things that keep people drawn in and addicted.

    • The equivalent of junk food… or an addictive drug.

  • Engagement maxing optimizes for first order effects.

    • Those first order effects might have second order effects that destroy the underlying emergent core of the system.

  • A great piece via Cosmos: Is algorithmic mediation always bad for autonomy?

  • Aligned with your aspirations means living in a way you’re proud of.

    • Not just a thing that feels good in the moment, but that you look back on and are proud of.

  • Slop is only superficially novel.

    • It attracts our attention with an “oohhh” reaction.

    • But because it’s superficial, it feels hollow.

    • True novelty is more fundamental.

  • Humans tend to like slop, but they don’t want to like it.

    • It activates our limbic system, not our forebrain.

    • Slop that is plausibly not slop gives people cover to like it.

  • Being in the moment is great.

    • But if we're all in the moment and optimizing for competing for some legible thing, then we get short termism.

    • Being in the moment is not about doomscrolling.

      • It's exactly the opposite.

    • Being in the moment is about being present, connected, in harmony.

  • Long-termism leads to different dimensions becoming aligned, resonantly.

    • Long-termism is, all else equal, more likely to have morally good outcomes.

    • Resonance is about long-termism.

    • Not about everyone having the same aspiration, but everyone being more aligned with their own aspirations: more aligned with the long term.

    • The modern world is focused on short-termism.

    • The era of thrash.

      • Single-ply thinking as quickly as possible.

    • “We should consider the long-term effects more often” is not a particularly controversial statement.

    • And yet considering the long-term effects is extremely at odds with the incentives of modern society.

  • Your revealed preferences fundamentally can’t imply your aspirations.

    • They must be distinct.

    • One is your limbic system, one is your higher mind.

  • Many papercut-reducing data features are non-viable today.

    • For example: a feature on Chase Ultimate Rewards to remember my kids’ birthdays in order to default my searches to the right guest ages.

    • It would be a tiny benefit to me: a reduction of a papercut.

    • But to get it, I’d have to give highly sensitive information to a party that has no legitimate reason for it.

      • They might sell it, or tweak my credit rating, or who knows what.

    • There are thousands of examples like this, where the small horizontal benefit is not worth the large vertical harm.

    • Another example: when you buy a new appliance that requires routine maintenance, you can opt in to emails from the manufacturer to remind you to do maintenance… but you’ll also get tons of marketing emails you don’t want–and possibly not just from them!

  • If you have a thousand papercuts, you can still be in terrible pain, even with no singular source of all of that pain.

    • You're in pain and you don't even realize you're in pain.

      • This is just what you’re used to feeling like, you don’t even realize it could be different.

    • But consider a tool that didn't have those papercuts: you wouldn't start using it to get rid of any individual papercut, but once you started using it you’d never want to stop.

    • You wouldn’t start using a service just to get rid of a papercut, but you’d also never adopt a service that is like what you already use but that gives you lots of papercuts.

  • One of TikTok’s innovations was self-distributing content.

    • Originally in social, feeds were manually curated by each user.

    • Then Facebook and others switched to algorithmically sorted feeds to focus attention on the stuff that was most engaging.

    • TikTok went a step further and didn’t even make you follow anything in the first place.

    • This was the logical end point of engagement-maxing, and tends to give you content that is the equivalent of junk food.

    • But self-distributing content that aligns with your aspirations could be a net positive.

      • Self-distributing, aspirational content.

    • Imagine self-distributing aspirational software.

      • Content is passive, but software can do things.

  • In the same origin model, the only way for coordination between actors on data is for one actor to see all of it.

    • The same origin paradigm allows a random entity you aren’t even necessarily aware of to know sensitive details.

    • If you go to potterybarn.com and go to other sites, you will see Pottery Barn ads on other sites.

    • Pottery Barn doesn't know what site you went to.

    • But some random company you've never heard of does!

    • You hope they don't sell that knowledge to someone else.

    • Imagine a use case where everyone can have their private calendars and scheduling preference stacks, which allows finding optimal times for two users to meet.

    • But the downside is that that company can now see everyone’s very sensitive data, and do who knows what with it!

  • Sandboxing allows an order of magnitude lower friction of distribution.

    • This connection is not obvious but it’s fundamental.

    • We need a new security innovation to allow self distributing software.

  • Relative to when the web started, "every business needs to have a website" happened pretty early.

    • Also: note how quickly real-world advertisements started including URLs.

    • Radical new distribution paradigms can pick up steam extremely quickly.

  • The same origin paradigm has shaped nearly all of software for the last three decades.

    • People can't even conceive that it could be different, because we've never known anything different.

  • “Personalized” and “private” are currently in tension in cloud services.

    • They don't have to be!

    • We just assume that must be the case because that’s how it’s been for the last 30 years.

  • Everything in your fabric should just be for you.

    • For your eyes only.

  • The system of record for my life shouldn't ask me a question it already knows the answer to.

    • That requires it to show me what it thinks it knows about me in that context, so I can correct it if it’s wrong.

  • Software’s security model has always fundamentally rested on "trust some dude in some open-ended way".

    • But with a high barrier to create software it's more likely they're legit.

      • They have more to lose so they're more likely to be trustworthy.

    • If software is easy to create, then that signal gets less useful.

  • Web 2.0: "your identity is inside this app."

    • It's not meaningful to take it outside of the app, what would that even mean?

    • Literally impossible to imagine.

  • The same-origin model is an obvious fix for untrusted code.

    • Here’s a random person on Hacker News proposing more strict sandboxing on desktop OSes to prevent classes of attacks.

    • Traditionally, the flexibility and open-endedness of applications on data are in tension with security.

    • How can we have both?

  • In the same origin model, with infinite software, in the limit everyone would have a bespoke personal app for their life made by AI.

    • That would mean they couldn’t collaborate with other users.

    • Collaboration in the same origin model requires using the same app to collaborate.

    • If everyone has their own app, you can’t collaborate.

  • In the same origin model, each use case has to stand on its own as a business model.

    • Use cases glom onto viable business models until the product becomes bloated or lowest common denominator.

  • Code executing and code communicating to the outside world typically cooccur.

    • If a bit of code can execute, you presume it can also communicate across the network.

    • So your privacy model reduces to “what code is allowed to execute.”

    • But if it were possible to prove “this code executes, but no information about it can ever be communicated to anyone else,” then you could skip the hard question (“is this code allowed to execute”) and instead just decide the easier one (“is this code allowed to communicate”), as in the sketch below.
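
    • A toy capability-style sketch of that inversion (the names are illustrative, and real isolation would need an actual sandbox, since ordinary Python code could just open a socket; this only shows the shape of the policy question):

        class NetworkCapability:
            """The one handle through which code can reach the outside world."""
            def send(self, destination: str, data: str) -> None:
                print(f"sending {len(data)} bytes to {destination}")

        def run_untrusted(code, net: NetworkCapability | None = None) -> str:
            # Execution is always allowed. Communication exists only if a
            # capability was explicitly granted, so the policy question
            # collapses from "may this run?" to "did we hand it a network handle?"
            return code(net)

        def summarize(net):
            result = "summary of local data"
            if net is not None:
                net.send("example.com", result)  # only possible when granted
            return result

        print(run_untrusted(summarize))                           # runs, cannot leak
        print(run_untrusted(summarize, net=NetworkCapability()))  # runs, may communicate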

  • When data is stored on your turf, you are the only one who can access it.

    • It doesn’t need to literally be on your computer.

    • It just needs to be something where only you have the key to access it.

    • Anyone who wants access to it has to go through you.

  • A single use case with a more thoughtful privacy model would have a hard time selling customers on it.

    • They’d have to convince those customers about how their service is architected differently and why that’s good.

    • Unless the use case is massive, it’s probably not worth it for a user to know about it.

    • But imagine a platform with radically better privacy characteristics.

    • All of the use cases on that platform get to draft off the user’s diffused understanding of the improved privacy model.

  • Well-designed software already thinks about what actions in the UI imply endorsement of specific outcomes.

    • For example, signing a EULA.

    • But also things like “the user’s message in a chat shouldn’t be visible to other members in the chat until they hit send”.

    • What if you could capture those implicit policies as explicit endorsements?

    • The result would be a UX that felt natural and unsurprising but also had formal guarantees even in untrusted contexts.

  • In a system based on policies on data, users shouldn’t self-custody their policies.

    • For the same reason self-custody of crypto is a bad idea for the vast majority of users.

    • It’s an extremely load-bearing part of the security model and thus very easy to catastrophically mess up.

  • At too high an altitude, everyone’s preferred policies seem to be very different.

    • But at a low-enough altitude, with more granularity, often there’s quite a bit of overlap for large swathes of the population, at least for non-politically-charged things.

  • If you use technology, you're a technologist.

    • You should have a say on how it evolves.

    • Technology is woven into the fabric of society.

    • It is not separate from society, it is part of society.

  • Most of the things the tech industry builds are just CRUD apps.

    • Anything more complex is science, it takes an order of magnitude more effort.

  • Israel Shalom, commenting on last week’s Bits and Bobs, reminds us about wheels on luggage.

    • "Growing up, our luggage didn't have any wheels. We'd have to lug stuff around literally. Then someone added two wheels. And then, miracle of all miracles, another two 🤯 In retrospect it feels incredibly dumb that we ever had to lift this stuff everywhere. But it didn't at the time!"

  • A fascinating analysis from my friend Anjali about “a token is not a stable unit of cost, nor compute”.

  • The euphemism treadmill: once everyone is aware of the meaning of a word for a negative thing, you need to move to another one.

    • Euphemisms rely on load-bearing ambiguity to hide an inconvenient truth.

    • RIF (Reduction in Force) is a euphemism for layoff.

    • Once it’s widely known we’ll move to a new one.

  • In communication we often rely on load-bearing ambiguity.

    • We get to say the thing we mean, but shrouded in ambiguity.

    • That softens the blow, while making sure the message gets through to the people we intend to hear it without attracting the attention of those we don’t want to hear it.

  • A cynical form of load-bearing ambiguity: the motte-and-bailey inchworm.

    • The motte-and-bailey bad faith argumentation technique is to make an overbroad statement initially (the bailey), and then if called on it, retreat to a reasonable, defensible statement (the motte).

      • If you don’t get called on it you get to stay at the bailey.

      • This gives you upside with capped downside.

    • If you do this repeatedly, you can inchworm the argument forward bit by bit, moving the Overton window.

    • One example: making a crass or offensive statement as your bailey, and then saying “I was just joking!” if called on it.

  • Will LLMs lead to compounding ambiguity in communication?

    • There’s value in shrouding a potentially controversial statement in load-bearing ambiguity to keep the upside while capping downside.

    • LLMs are great at taking a statement, expanding it, and making it more ambiguous.

      • “Take these 5 bullets and expand into an essay”

      • “Make this announcement sound less bad than it is”

    • LLMs are also great at reading between the lines in a long and ambiguous statement.

      • “Take this essay and give me the 5 bullet summary”

      • “Give me the Straussian read of this corporate communication”

    • As more people use LLMs to produce more ambiguous communication, more people will need to rely on LLMs to decode it.

    • That could lead to a red queen race where communication is just as hard, but now LLMs are a required part of it.

    • An equilibrium of misery.

      • If you add another lane to a highway, people can commute farther in the same time, which lets them afford cheaper or nicer houses farther out, so they buy them, and before you know it traffic is back to being as bad as it was before.

  • Aleks Jakulin: "Data is not mined, it is grown"

    • “When you mine it, it destroys the roots.”

  • In a world of scarcity, creation matters.

    • In a world of abundance, curation is what matters.

    • We're moving into a world of abundance like we’ve never seen before.

  • If everything could persist itself into the future it would be overwhelming and cacophonous.

    • There has to be some kind of curation, some vote "this should continue to exist".

    • How can we sculpt rather than draw?

  • The “pick your mom up at the airport at 2” Siri use case can only be done with a bottom-up evolutionary approach.

    • The world is too nuanced, too situated, to do that kind of use case reliably in a top-down way.

      • It’s impossible for some set of PMs manually writing rules to get beyond 80% quality.

      • It has the classic logarithmic-value-for-exponential-cost curve.

    • You need an open aggregator style pattern.

    • That requires a different security model.

  • An open aggregator is impossible in the same origin paradigm.

    • Because you have to trust a swarm of entities you don't have preexisting relationships with.

    • The same origin paradigm assumes you trust the owner of each origin to not misuse your data.

    • An open aggregator could get the power of the bottom up swarm without the top-down centralized control.

  • Self-weaving stories need curation to be valuable.

  • Imagine a system that is emergently curated by the implicit actions of the whole ecosystem of users.

  • For the blackboard system to work best the set of agents has to be open-ended.

    • A close-ended hard-coded set of agents doesn't work very well.

    • This is hard to do in the same origin paradigm due to the iron triangle.

  • Today to get a feature added to an app, a PM who works at that company has to decide to ship the feature.

    • What if new features for you could come from anyone anywhere?

  • Demoable and usable are radically different quality thresholds.

    • The work to get from nothing to demoable is an order of magnitude less than to get from demoable to usable.

    • This is one reason there's the 80/20 rule for software development.

      • When it feels like you’re 80% of the way there, you’re 20% of the way there.

    • Demoable is about superficial quality.

    • Usable is about fundamental quality.

  • Folksonomies pick options everyone finds reasonable, not ones that everyone thinks are best.

    • A folksonomy allows the emergent, bottom up judgment of the swarm of users to decide which options get the most attention.

    • One of the reasons it works is preferential attachment: the options that are already popular are more likely to be shown as options to other people and thus get even more popular.

    • The test is not “Do you prefer A or B?”; it’s “Do you prefer A, with X votes, or B, with 100X votes?”

    • As B gets more momentum, the power of its momentum dominates the preference.

    • It could be that A and B started off equivalent, but B got a head start that then compounded into a dominating advantage (simulated in the sketch at the end of this list).

    • Folksonomies don’t find the optimal ontology, they find a known-to-be-viable ontology.
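
    • A quick simulation of that head-start effect, as a Pólya-urn-style sketch (the setup and numbers are invented for illustration):

        import random

        random.seed(42)
        votes = {"A": 1, "B": 3}  # A and B are equivalent; B has a tiny head start
        for _ in range(10_000):
            # Each new voter picks an option with probability proportional
            # to its current vote count: preferential attachment.
            pick = random.choices(list(votes), weights=list(votes.values()))[0]
            votes[pick] += 1
        print(votes)  # B's small early lead typically compounds into dominance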

  • In a wiki, do people bother to correct mistakes they see?

    • The quality before the correction has to be good enough to be worth investing in for some set of contributors.

    • This must be true at each timestep, or else the wiki becomes static and dies.

  • Art has a long history of being so far past the adjacent possible that most people dislike it at first.

    • But once the general public has repeatedly seen a given piece of avant garde art that enthusiasts like, they habituate to it and even come to like it.

    • Most people like things on the adjacent frontier of things they already like.

  • Someone told me that if you try to cook with an Italian cookbook that’s been translated into English it’s unusable.

    • There are a whole bunch of expectations and presumed background knowledge about techniques that are not explicit.

    • All content presumes some background knowledge from the context it originates in and is expected to be consumed in.

  • Tastemakers are both the dictator of, and servant to, the popular taste.

    • Always one step ahead, but still trapped within the underlying wave.

    • A skilled person can make it look like they are causing the wave they're actually surfing.

    • They’re making nuanced decisions to ride the wave, but they’re being propelled by something else.

  • “You can’t design a framework, you can only excavate it”

    • A framework has to cut with the lines of what people actually want to do, what’s at the edge of doable and desirable in a given system.

    • Factoring out a framework makes what previously took bespoke effort able to be done more easily.

  • When you cross the Turing Rubicon, you often don’t notice.

    • The Turing Rubicon is when a system goes from close-ended to open-ended.

    • It’s easy to make something accidentally Turing-complete without realizing it.

    • Once you do, you’re exposed to the power and peril of an open-ended system.

    • For example, you now have to worry about the halting problem.

    • Passing the critical point is often an infinite difference, yet it doesn’t feel like anything at all in the moment.

  • Part of the secret of PMing is convincing onlookers that everything that happens is intentional.

    • PMs are very good at retconning whatever happens on the spot, incorporating it into their world view to make it seem like it was always that way.

  • A great piece from Alex Russell: How Do Committees Fail To Invent?

  • Many of the biggest math breakthroughs come from people early in their career.

    • One of the things that makes children different and powerful is their imagination. 

    • They don't have a prefrontal cortex that's fully formed, they are uninhibited, they can project themselves into a fantasy land more easily.

    • Our prefrontal cortexes aren’t fully developed until around age 26.

    • The prefrontal cortex is the part that curates your beliefs and says “that’s dangerous” or “don’t bother doing X, it won’t work.”

    • Before it’s fully developed, you are able to play in spaces you don’t yet know are supposed to be impossible.

    • Every so often, something in those spaces does turn out to be possible.

    • Children’s lack of awareness of the impossible is a feature, not a bug.

    • That's what allows them to form a self and adapt to a world their parents can't adapt to.

  • Media has progressed through different paradigms.

    • Poetry

    • Printing press

    • TV

    • Infinite scroll

    • At each step, the time you get for contemplation before retransmission goes down.

  • David Lynch: "Intuition is a think and a feel at the same time"

    • Intuition sets the boundaries of meaning.

    • Aish: “Embodiment becomes the real editing tool, the instinct that tells you when the weave has gone slack and when it's strong enough to carry weight.”

  • Someone this week told me about the “hot smart person problem.”

    • The hot smart person wants to find smart people to date.

    • But most people want to date them because they’re hot, not because they’re smart.

    • Hotness is superficial and immediately obvious.

    • Smartness is deep and takes time to detect.

  • You need protocols when you don't know what's on the other side of the connection.

    • A protocol is a pre-computed negotiation, a fixed, pre-determined Schelling point.

    • That’s what makes them useful, but that’s also what makes them hard to change.

  • Discontinuities create Schelling points.

    • Moments when everyone can agree that something has happened, and coordinate some kind of action.

      • Coordinated action is significantly more likely to succeed.

    • The frog in boiling water happens because there's no discontinuous “why now.”

    • People react based on the discontinuity, not the absolute value of the heat.

    • Bad actors can take advantage of this to slowly turn up the heat to sweltering levels without anyone taking a stand.

  • At the top of the s-curve there aren’t a lot of ways to go up but there are a lot of ways to go down.

  • Being good at corporate politics is not energizing.

    • It's like being the least evil villain.

    • It absorbs all the spare capacity in your head.

    • It pushes out what brings you joy and replaces it with something that makes you feel empty.

    • It hollows you out.

  • If there’s an unavoidable conflict it’s better to address it as early as possible.

    • I’m naturally conflict avoidant, which means I emphasize alignment.

      • Sometimes that allows building trust and connection to then be in a stronger position to tackle underlying conflicts later.

    • But sometimes there’s a fundamental conflict that is stronger than any trust you build on top. 

      • It’s a ticking time bomb if left unaddressed.

    • Once you realize it has the potential to be a fundamental conflict, dive into it head on.

    • You can save everyone a lot of time if it turns out you aren’t aligned at that layer.

    • Sometimes the answer is “given this difference of opinion, it doesn’t make sense for us to collaborate.”

    • Figuring that out early can save everyone a ton of time!

  • I find voice-only conversations much less productive.

    • For example, over the phone, or over VC with the camera off.

    • With visuals, I can figure out what’s resonating, what’s not resonating, and adjust accordingly.

    • In a verbal-only communication channel, you get much less signal, and with a longer feedback loop.

      • This is much worse in a group of multiple people without visuals.

    • If you’re presenting something the other person might find controversial, you have to assume they are finding it controversial if they aren’t giving you any verbal feedback.

    • That slows down the rate of knowledge transmission.

    • When you can read people’s faces and know that they’re with you so far, you can go faster through your argument.

    • This mirrors TCP's approach to retransmission in noisy channels.

    • When acknowledgments (ACKs) are missing or delayed, TCP assumes packet loss and throttles its transmission rate, even if the packets actually arrived successfully (a toy model below).
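
    • A toy model of that throttling, loosely based on TCP's additive-increase/multiplicative-decrease rule (the numbers are illustrative, not a faithful TCP implementation):

        def next_window(cwnd: float, ack_received: bool) -> float:
            """Additive increase, multiplicative decrease, as in TCP."""
            if ack_received:
                return cwnd + 1.0        # ACK arrived: probe for more bandwidth
            return max(1.0, cwnd / 2)    # ACK missing: assume loss, back off hard

        cwnd = 10.0
        for ack in (True, True, True, False, True, True, False, True):
            cwnd = next_window(cwnd, ack)
            print(f"ack={ack!s:<5} window={cwnd}")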

  • Two stable states for cozy communities: overwhelming and dead.

    • If it’s thriving, it’s likely overwhelming.

      • When you come back and are behind, you don’t want to mark the scrollback as read–what if later you have extra time and can read through it?

      • The longer you’re away, the more overwhelming it is when you come back.

      • This tends to push you away.

    • If it’s not that active, it likely gets to the point where basically no one is talking.

    • The threshold of overwhelmingness will differ for each individual based on when they last caught up and how valuable they find the average message in the chat.

  • People go into details when they don’t believe.

    • Belief is willing to give the benefit of the doubt.

    • A corollary: micromanagers go into details because they don’t believe in the ability of their reports.

  • A know-it-all is unable to learn.

    • They have nothing to learn, they already know it all!

  • Some things are lightly emergent, some things are massively emergent.

    • Sometimes 1 + 1 = 2.1

    • Sometimes 1 + 1 = 10.

    • Not just "working together well" but "catalyzing something much larger than the sum of its parts."

    • Those kinds of resonant outcomes, when they're good ones, are some of the most important forces in the world.

  • The "NPC" frame is fundamentally toxic.

    • It allows you to think of other people as means, not ends.

    • All people are ends in and of themselves.

    • It's dizzying and overwhelming to contemplate that.

    • But it's also beautiful and inspiring.

    • Even if people aren't as active as you'd like them to be, and are just going with what the system around them constrains them to, it's still not OK to ignore them as ends.

  • A sacrifice is when you give up something you care about to help a greater good.

  • A note I wrote to myself while watching the climax of K-Pop Demon Hunters for the 27th time: 

    • Your soul emerges from the choices you make.

