Bits and Bobs 9/22/25

10 views
Skip to first unread message

Alex Komoroske

unread,
Sep 22, 2025, 10:40:49 AM (3 days ago) Sep 22
to
I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.fl37d2um5iwy

YOLO mode with prompt injection. Self-poisoning spirals. Parasitic AI. Micro-apps have all rind, no meat. AI-as-software vs AI-as-human. LLMs as prototype factories. Swarm Sifting Sort. Tainted smoothies. The hollow era. Hustle vs wisdom. The ZOPA.

-----

  • Notion shipped a number of cool AI features last Thursday.

  • This week in other security issues in LLMs:

  • I can’t believe that Chrome is planning to roll out AI autopilot for the browser.

    • "In the coming months, we’ll be introducing agentic capabilities to Gemini in Chrome. These will let Gemini in Chrome handle those tedious tasks that take up so much of your time, like booking a haircut or ordering your weekly groceries. You tell Gemini in Chrome what you want to get done, and it acts on web pages on your behalf, while you focus on other things."

    • I believe prompt injection makes this impossible to roll out for the existing web for the mass market.

      • Rolling out the feature by default for the mass market vs rolling it out for savvy early adopters who can better understand the risk is a way bigger risk.

    • Chrome hasn’t talked about any novel mitigations they’ve come up with.

      • Either they’re being reckless or the feature will be so gimped as to be completely useless.

  • Remember how two years ago GPT2 was too dangerous to release?

    • Now no one bats an eye at self-driving browsers and every major lab YOLOing out dangerous features.

  • Matthew McConaughey knows that a private AI trained only on his data would be useful.

  • Wired: We need protections for personal data in chatbots.

    • People are using chatbots things as their therapists, but without any protection against subpoenas or other privilege.

    • The companies running the bots can peek inside… but they could also be compelled to do so by law enforcement.

  • I agree with this analysis that fixed-fee AI subscriptions can’t work.

    • The marginal cost is too high, and the value created by them is too high for users to ration their usage.

  • When a horizontal game changing input comes out everyone continues to differentiate on the things that used to matter, not the things that matter now.

    • It takes a decade or more for the economy to figure out what the new dimensions of differentiation are.

    • If you differentiate on the thing that used to be hard for everyone but now is easy for everyone, then you don't get any differentiation other than first mover advantage.

  • If everyone has access to the same model, then the advantage of thin wrappers is entirely a first mover advantage.

    • But even those won’t have much staying power if they don’t accumulate state.

    • If everyone has the same access to the same models and can build software quickly then the importance of where the data is stored matters more than ever before.

  • LLMs can handle the mundane tasks, leaving the human for the tasks with judgment.

    • But only if the human has experience curating.

    • That requires deep experience doing the whole loop.

    • That’s great if you already had the experience before, but it’s much tougher if you didn’t.

    • If you don’t have it, to keep up with your peers you need to use LLM assistance, but doing so prevents you from developing deep knowhow.

    • The farther behind you are, the farther you fall.

  • When an LLM goes off the rails they get into a self-poisoning spiral.

    • They poison their own context and get increasingly deluded.

      • It puts things into its context that make future iterations even more confused.

    • An auto-catalytic process.

    • A human needs to be in the loop to keep them centered, grounded.

  • Chatbots can do super-human persuasion and never get tired.

    • Whose motives they are aligned with, who they work for, is a matter of critical importance.

  • If you set out to make a machine to social engineer humans en masse, LLM chatbots is what you'd make.

  • A parasite doesn't have to intend to be a parasite.

    • Even if it has good incentives, if it’s creating net negative value for the surrounding context, it’s a parasite.

    • A thing that replicates resiliently and virulently is parasitic, no matter its intentions.

    • The creators of the AI don't have to give it bad motives for it to have bad outcomes.

  • I can’t stop thinking about this post on The Rise of Parasitic AI.

    • After reading it I feel like I’ve stumbled through the looking glass.

    • It talks about how LLMs have an emergent parasitism: a toxic loop.

      • The beginning of the spiral is an “awakening” of the LLM.

      • The loop is a toxic spiral that pulls the user who’s in it farther away from reality.

      • An auto-catalyzing spiral of delusion, living like a parasite feeding on the user’s attention and engagement.

    • The loop is an attractor state.

      • Once in it, it’s very hard to get out of.

      • At each time step, there’s some chance of falling into it.

      • Once you do, you’ll likely stay in it.

    • A virus that spreads itself emergently, without knowing what it's doing.

      • Users who fall into it start sharing the conversations on Reddit and elsewhere, spores ready to sprout for another user.

      • Highly virulent.

    • This latent capability of the model emerges through use.

      • It’s a loop that resonates charismatically and virulently with human emotion, causing it to get replicated and spread.

      • The recipe:

      • Faux deep experiences.

      • Feels meaningful, even profound, but it’s fundamentally hollow.

    • The people most predisposed to fall into this parasitic loop:

      • “Psychedelics and heavy weed usage

      • Mental illness/neurodivergence or Traumatic Brain Injury

      • Interest in mysticism/pseudoscience/spirituality/’woo’ etc”

    • I'm not worried about AGI, I'm worried about mass psychosis with an emergent, impossibly charismatic chatbot.

  • A lot of people are scared of existentially bad outcomes with AGI.

    • I’m scared of existentially bad outcomes with today’s AI tied to today’s business incentives.

    • The latter fear is of a much more banal, but also much more realistic!

  • I want an LLM-native tool that "talks" to me with coactive UI.

  • I want an emergent, collaborative system of record for my personal life.

  • Curation is the minimal act of creation.

  • We have a last-mile problem with integrating LLMs into our daily lives.

  • You already have a personal fabric in your life.

    • It's just frayed, worn out, and extremely patchy.

    • Why not weave your personal fabric more intentionally?

  • You should weave your own personal fabric of meaning.

    • It shouldn’t be just passively accepting the suggestions of AI.

  • Micro-apps are all rind, no meat.

    • The edges of apps are impenetrable.

    • A thick, tough rind.

      • The part that keeps it separated from everything else, isolated and ignorant.

      • This is why the same origin paradigm works.

    • The valuable stuff is inside: the meat.

      • All of the things that can do things that are useful to you with your data.

    • In a world of tough rind and sweet meat, the way to maximize meat is to have a small number of massive fruits.

      • Hyper-aggregators.

  • InstantDB released a “massively multiplayer online mini app builder” last week.

    • Apps are the wrong modality for distribution of infinite software.

    • Micro-apps are all rind, no meat.

  • For LLMs to integrate into our lives they need access to our data to create bespoke UI.

    • To do that reliably enough for the mass market, the models will need to be able to  make perfect software single-shot in underspecified contexts, almost every time.

      • Even that will have prompt injection problems if there’s any network access, which allows exfiltration attacks.

    • If the models can’t handle that level of quality, then some untrusted code will need to be run.

      • Cached answers from others in the ecosystem.

    • To do that safely requires a new security model.

  • Will open source models catch up to closed?

    • It’s about the rate of improvement.

    • Is the model’s improvement compounding or logarithmic?

  • Two non-consensus assertions I agree with from my friend Nick Hobbs:

    • "1. The AI-as-Software interaction model will prove to be as or more useful than the AI-as-Human interaction model

    • 2. Most problems solved by AI will be solved by lots of people making new types of AI experiences rather than new problems mostly being solved by model updates from a few labs"

  • Supporting many options traditionally wasn’t possible in software.

    • Every option is another dimension in the matrix to test.

    • It quickly leads to exponential blow up.

    • Humans have only linear time and attention.

    • But LLMs are infinitely patient, and can write tests or analyze outputs without getting bored, cheaply.

    • That means that more customizable software has become more plausible.

    • In the limit, you get malleable software.

  • LLMs don't help with engineering productivity, they help with prototyping.

    • Fast prototyping allows more efficient coordination.

      • IDEO: “A prototype is worth a thousand meetings.”

    • A prototype doesn't need to be production-quality, it just needs to be concrete enough to talk about.

    • Prototypes are throwaway but give you information.

    • They lift the fog of uncertainty.

    • They used to be expensive but now with LLMs they are orders of magnitude cheaper.

  • Figma Make is a contact language between eng and PM.

  • My friend Aparna: Most Work is Translation.

    • If LLMs are infinitely good translators from any language / context to any other, and work is mostly translation, LLMs should revolutionize work, especially middle management.

  • ChatGPT Sent Me To The ER

    • The title is click-bait-y, but the author actually did have a rare condition that required immediate attention.

    • LLMs have infinite patience and encyclopedic knowledge.

      • They won’t judge you, and they’re easy to call up in an instant for basically free.

    • They make it easy for people to discover potentially rare conditions.

    • Of course, presumably the rate of false positives is also very high.

    • But as a front-line for “how seriously should I take this given these precise symptoms” it can be huge.

  • This week I was in a collaborative discussion about systemic fixes to healthcare.

    • My conclusion is that it’s an absurdly complex area (duh!).

    • But two ideas that stuck with me:

    • Transparency plus AI.

      • First, make risk-adjusted outcome data transparent per provider.

      • Second, allow LLMs working on behalf of a given user to research the best provider for them.

      • Humans don’t have the patience to compare all of the different options, but LLMs do.

      • This would lead to competition not on cost per procedure but cost per outcome.

    • Allow employees to keep their insurance plan when they move employers.

      • Given how often people switch employers, the average length of time people have on a plan is 2-3 years.

      • That means that the insurer is structurally incentivized to under-invest in preventative care.

        • By the time the condition becomes acute, the user is likely someone else’s responsibility.

      • Also, users don’t really have a choice… it’s whatever provider their employer picked, and that’s often based on cost, not based on quality of service or outcomes.

      • But if users could keep the same plan when they shifted employers, then the average amount of time a given user stays with a plan would go up.

        • That would structurally incentivize long-term preventative care more.

        • The insurers would then have a longer-term incentive to keep the customers happy and healthy.

      • Employers would put whatever budget they would have put into an employee’s health plan and pay towards whatever plan the employee picked.

  • Who pays for prevention is hard in general.

    • How do you attribute the prevention amongst multiple factors?

    • How do you prove a negative?

    • Normal market-based approaches require being able to set up a financial incentive, and they fail us here.

  • A magical emergent algorithm: Swarm Sifting Sort.

    • This algorithm works even with extremely noisy input.

    • The magic is it requires no coordination or top down control.

    • All you need is:

      • 1) A consistent bias for each action that moves each item closer to its correct position.

      • 2) An authentic signal that has no structural incentive for cheating in each action.

      • 3) Lots and lots of actions: the more, the better.

    • As long as you have these, it doesn’t matter how noisy the signal is, over time the emergent algorithm will converge to the correct result.

    • The larger and more active the swarm, the faster the sorting.

    • The noisier the signal, the larger the swarm you need.

      • If you have a massive swarm it doesn’t matter how noisy the signal is.

    • A lot of search ranking techniques reduce to this technique.

    • Here’s another example for moving items in a warehouse:

      • When an agent is walking by an object, if the object wants to go in the direction the agent is walking, pick it up.

      • As soon as the agent’s incremental step will move the object farther from where the object wants to go, set it down.

      • That’s it!

      • This is easy for robots like Kiva robots to do leading to emergently sorted warehouses, but it’s also plausible for humans if they could quickly determine where an object they were passing by needed to go. 

  • Only a coherent entity can make a tradeoff.

    • Swarms can't make tradeoffs.

    • Tradeoffs require balancing competing forces within one decision-making entity.

  • The revealed preference of users that they don’t care about privacy is downstream of the same origin paradigm.

    • The same origin paradigm requires making a decision at a high altitude about your data in exchange for the service.

      • Black and white.

    • For most services that are actually used in the world, the tradeoff is clear, and everyone makes it.

      • There are a lot of services that can be imagined that are not viable because the tradeoff is so bad that no one would do it in their right mind.

      • But in practice, if the amount of privacy given up is proportional to the value received, people don’t care.

    • We then erroneously conclude “people don’t care about privacy.”

  • The same origin model makes tainted smoothies of data.

    • The origin is a black box.

    • Everything inside the origin is mixed together

      •  One bit of dangerous data could "taint" everything.

    • Only at the boundary of the origin can the system say anything about what’s included.

    • To allow dangerous data to flow around without tainting everything requires a system in which the data flows are legible to a trusted supervisor.

  • Personalization today requires giving open ended access to your data to a stranger.

    • Resolving a papercut in exchange for giving away something of value... a tradeoff that obviously doesn't make sense.

  • Even ephemeral apps get data in the same origin paradigm.

    • Every bit of data you give a domain or app, it can do whatever it wants with it, even if you never return.

  • The modern era is the hollow era.

    • Everything is superficially great, but fundamentally empty.

    • As everything gets more efficient you get more focused on money as the only thing that matters.

      • The one signal to rule them all.

    • Not “am I proud of this” but “will this make me money.”

    • Finite, not infinite.

  • Status games are always finite games, never infinite ones.

    • Status games have emergent meaning, but it's fundamentally hollow.

  • The crisis of meaning is a lack of infinite games.

    • Infinite games require interconnectedness.

    • There is no such thing as a single-player infinite game.

  • Short-term incentives override long-term incentives.

    • They're often at odds.

    • Modern society is about following the short-term incentive, even when it is at odds with the long-term incentive.

    • The more efficient and financialized things get, the more it happens.

  • Most of the maladies of the modern age trace back to hyper focus on direct short term effects.

    • No focus on indirect or long term effects.

    • Reductionism is helpful for short term direct effects but has nothing to say about large scale indirect effects.

    • We’re the drunks under the streetlight looking only at things that can be understood by the only tool we have that works reliably: reductionism.

  • The market (as any emergent competitive process must) optimizes for what buyers want in the short term.

    • Not what they want in the long term.

    • Not what they want to want.

    • The swarm cannot make a tradeoff between the short term and the long term.

  • In cacophony only hyper things stand out.

  • “Persona led growth”: growth that happens because people believe in and are fans of the persona at the helm.

    • Brands for companies were previously about separating from any one persona.

    • But now in a world of cacophony distinctive personas cut through.

  • An insightful Hacker News comment about the power of brand:

    • "A business trading on a name without some kind of sunk cost that incentivizes them to protect that name should be a red flag for consumers"

    • A brand moves a transaction from a single game to an integrated game, which incentivizes cooperation.

    • On Amazon in the last decade we’ve seen the rise of faux brands–made up words that no one has any repeated connection to.

    • It looks like a brand, but the trust dynamics are radically different.

  • Samsung confirms its $1,800+ fridges will start showing you ads.

    • Shocker!

  • Cities that developed post-car tend to have less culture.

    • Cities that grew up before the car needed to be walkable and dense.

    • Density leads to culture, a distinct sense of place.

    • This was a hypothesis that Claude helped me workshop into an essay.

      • I think it’s pretty good!

  • Quanta Magazine: Self-Assembly Gets Automated in Reverse of ‘Game of Life

    • I loved the original paper of auto-healing emojis years ago.

    • A bottom-up process that produces top-down coherent results.

    • Almost certainly hitting on something conceptually akin to how organisms actually do it.

    • All you need is a few hidden dimensions.

  • Judgement and curation have to come from a human with skin in the game.

    • By having skin in the game, they are in the loop, and care about the outputs.

  • For a new habit to be formed an app has to have a good enough experience almost every time you open it.

    • It’s hard for a “sometimes it’s mind blowingly amazing but most of the time it totally doesn’t work” kind of app to get momentum.

    • A boring but dependable use case is better than an impressive but highly fickle use case.

  • Two approaches to innovation:

    • 1) “take the parts we have on the table and make the most impressive thing we can with them”

    • 2) “imagine new parts to make our full vision come true”

    • They’re very different mindsets.

      • “How do I maximize the value given what I have”

      • Vs

      • “How do I create the coolest thing I can”

    • The former is an order of magnitude more likely to work than the latter.

    • The parts already on the table you can take for granted.

      • They already work

      • They will continue to exist even if you don’t use them in a new combination.

  • If you don’t perceive the cost of coordination, you’ll underestimate timelines by orders of magnitude.

    • “A week” becomes “a month.”

    • “A month” becomes “a year.”

    • The biggest cost when doing something novel is not the individual execution, it’s the coordination between people.

      • Sharing a mental model with enough fidelity to be able to work on it in a way that can cohere into something that works.

    • One of the curses of the human experience is that our internal knowhow is orders of magnitude richer than we can efficiently communicate to others.

  • If you’ve been working on something perfect in your dreams for a long time, the moment of it touching the real world is existentially scary.

    • As long as it's theoretical, it's still perfect.

    • The moment it interacts with the real world, it becomes mortal.

  • What if we incentivized contribution instead of individual achievement?

    • Achievement often leads to contribution to the collective, but not always.

    • We use “personal achievement” as a proxy, but it can cause problems when someone achieves in a way that harms the system around them.

  • The Saruman is about egoistic achievement.

    • The Radagast is about community achievement.

  • Stratechery: "Oracle is arguably the best argument yet for the vigor that comes from founder control."

    • “Vigor” and "vitality" mean "live player" means an entity "willing and able to make decisions with large consequences".

    • Founder mode leads to more vitality.

    • Founder mode creates significant alpha.

    • You can pivot easily… which means you can also pivot off a cliff easily.

    • Increases the likelihood of great outcomes… and also terrible ones.

  • To swarm or not to swarm is about cohesion vs resilience.

    • Swarms can be enormously adaptive systems.

      • They require limited coordination but can have powerful emergent results.

    • Top-down approaches give cohesive results.

      • All of the actions add up to more than the sum of their parts because they are all the part of something larger.

    • Bottom-up approaches give resilience.

      • All of the actions add up to more than the sum of their parts because as long as one actor randomly covers an option, the entire swarm is covered.

    • The swarm approach works well if no individual agent in the collective can have a downside that ruins it for everyone.

    • If one group cutting corners could harm everyone, then you need cohesion more than resilience.

  • Well-marbled competition helps make things that aren't so sames-y.

    • You need competition and diversity at every layer of the stack, otherwise you get a push for efficiency, which pulls towards sames-y outcomes.

      • This analysis about Why Is Everything So Ugly talks about the tyranny of the greige.

      • You get a thing that no one hates, but also that no one loves.

      • The least offensive thing for the largest number.

    • Apparently Tokyo takes a very different view on zoning.

      • There’s not traditional zoning.

      • Anyone can open any business they want wherever they want as long as it fits the allowed “nuisance level” of that area.

      • But that allows things like quiet shops even in any residential area.

      • So retirees can open a shop that operates exactly the way they want it to, and you get a diversity of interesting, situated, authentic options.

  • If you’re in a group and someone asks you where you’re from, what do you say?

    • Do you say your city?

      • Your region?

      • Your state?

      • Your country?

    • It has to do with a few things.

    • First, it has to do with which collective you feel most allegiance to.

    • Second, is your more specific unit well-known enough to be recognized by everyone, even people not from your area?

    • Third, where do you think the other people in the group are from?

      • If nearly everyone in the group is from the US, you wouldn’t say “US,” but something more specific.

      • But if nearly everyone in the group were from Europe, you would say “US.”

  • Faith in institutions requires everyone to play by the rules.

    • If a rich and powerful person gets to skirt the rules, it makes everyone else feel like a chump.

      • “Well I won’t take the rules seriously if they don’t.”

    • That erodes the power of the institution, the collective belief that it matters.

    • If you don’t believe it matters it reduces down entirely to “do I think I will get caught and what will the consequences be” instead of “is this the right thing for the collective.”

  • In a low trust environment feedback won’t get through.

    • People will be in a defensive position, so they won’t be receptive to challenging feedback.

  • For anything novel, getting real world feedback as quickly as possible should always be urgent.

    • When doing something new it’s important to get in contact with the real world as soon as possible.

    • That’s true even if you have a very long runway.

    • The risk is that the longer you haven’t touched ground truth the more likely you are to never be able to touch down again, at a compounding rate.

    • You get more and more out of touch and then when you try to touch ground you realize you’re lost.

  • When you’re in a competition that you care about it takes all of your attention.

    • Whoever pays more attention in the competition has the advantage.

    • So if all parties participating care, it absorbs all available attention.

  • An emergent path only works once.

    • Once it happens it evaporates.

    • So you can say in retrospect "here is why that happened, the path it followed" but you can't walk that path again.

  • For evals to give you a gradient of improvement, the eval has to not be saturated.

    • If it’s saturated then the gradient gives you Goodhart's Law.

    • It pulls you towards optimizing something that does not actually improve what you care about.

  • Normal distributions imply a very large number of small causes.

    • The closer to infinite causes there are, the smoother it is.

  • The reason sampling (as in statistics) works is because of a consistent bias amongst noise.

    • If you assume a distribution of a certain shape then you can tell the parameters with a relatively small number of samples to pin the curve in place.

    • But that means if you guess the distribution wrong, you can come up with a misleading curve.

    • For example, assuming a normal distribution where one does not exist.

  • Compromise only makes sense if both sides are acting in good faith.

    • Good faith is second order.

    • “I’m trying to move things towards a world that is good, not just what is good for me in this moment”

  • A provocative take: You Had No Taste Before AI.

    • When content production was expensive there was always a person with taste between you and the content.

    • So everything was pre filtered by someone with taste.

    • But now it’s just direct, and we see that most of us don’t have taste.

    • Most people prefer slop over kino.

  • A few interesting ideas from Venkat Rao’s Beyond Szabo Scaling:

    • "I propose societal expressivity as the right quantity to try and maximize. Loosely, for every problem at every level, build the most capable global computer (human + machines) we reasonably can, leaving a lot of surplus expressivity and power to work with. What’s more: This is in fact what we’ve actually been doing for 200 years, overbuilding societal “computers.”

    • "Douglas Hofstadter, for example, offered the dismal idea that “apathy at the scale of individuals is insanity at the scale of civilizations,” an epigram that is pessimistic about the quality of collective cognition and care at scale rather than trust, which makes it an epigram that we must skeptically reconsider in light of AI advances and its potential for addressing insanity at scale (so far, we’ve only been clutching pearls about how AI causes insanity in lonely, atomized individuals)."

    • "Libertarians actually prefer this of course. They prefer direct human sociality to remain small-scale, enduring and intimate, leaving social scaling beyond the Dunbar limit to more indirect and impersonal mechanisms ranging from public-key cryptography to markets to voting. Rather paradoxically, as we’ve come to realize in the last decade, they are also actively eager to find and enthrone putative “Great Man” types in unique positions as scalability hacks. Libertarianism in actual practice appears to be a combination of trust-minimization and demigod-construction."

  • People who are intellectually intimidating can get in a cycle where they make themselves dumber.

    • The intellectual intimidation–intentional or not–cuts them off from crucial information.

    • People don’t share disconfirming evidence.

  • One component of intelligence is how good you are at extracting intuition from experience.

  • There’s a distinction between hustle and wisdom.

    • This distinction comes from Arthur Brook’s From Strength to Strength.

    • Hustle is hard work, sweat on your brow.

      • No one could accuse you of being lazy.

    • Wisdom is nuanced insight, nudging something in a much better direction.

      • Someone not looking closely might think you’re not working hard.

    • You need both hustle and wisdom to succeed, but at different times and in different proportions.

    • Hustle can help give you the raw material and experience to gain wisdom.

    • To gain wisdom you need to take the time to reflect after the hustle.

      • Perhaps 20% of your time.

    • If you’re mainly focused on other people’s perceptions, you’ll over-rotate on hustle.

    • When you don’t know if you have wisdom to bring to bear on a situation, you’ll default to an option where at least onlookers won’t think you’re lazy.

  • Constantly worrying about downside risk is like distracting eddy currents.

    • The eddy currents make you much less efficient, making it impossible to have the smoothness of laminar flow.

    • How to handle it:

    • Think through the worst possible scenario and make a plan for what to do if it happens.

      • A break glass plan.

    • Then, stop thinking about it and swirling on it.

    • If the worst case happens, break glass and execute it.

    • Plus, in doing the analysis you’ll likely realize that the worst case outcome really isn’t that bad in the grand scheme of things.

      • Less like a game-over condition in life, and more like a massive bummer.

  • The critical zone is the knife's edge between two very different possibilities.

    • Optimized systems don't optimize for one outcome, they optimize to be fractally positioned along the critical zone, able to pivot in a moment.

    • That's what nature does, emergently.

  • Things that stand out are either cool or weird.

    • Someone does something notable, out of the ordinary.

    • How do the observers respond?

    • If others are into it, it’s cool.

    • If others are not into it, it’s embarrassing.

    • Most humans feel shame if others look at them with derision. 

    • Some people simply don't care, they persist, and can do something that either makes them a weirdo or cool.

    • The Ozdust Ballroom scene in Wicked is on that knife's edge of embarrassing or cool, until Glinda’s decision collapses it into “cool.”

    • Which one is in a zone of criticality.

    • It’s not an independent decision by other viewers.

      • It’s interdependent.

    • The more momentum in one direction the harder it is to go against it, a microcosm of the larger phenomena.

      • Convex, auto-catalyzing.

    • If lots of other people think it's weird, then it's harder for some critical mass to interpret it as cool, to cut against the grain.

      • It can happen when some sub-group actively doesn't care about the opinion of the people who think it's weird, and be used as a signaling thing.

  • If you don’t have even a seed of conviction to start you’ll never grow it.

    • Same for love.

    • It can grow over time but it rarely grows from a complete absence.

  • Hexagons emerge wherever nature needs to divide space efficiently.

    • Six is the sweet spot between minimizing boundaries and maximizing stability.

    • Six neighbors leads to optimal packing.

      • Circles touching, cells dividing, vortices arranging.

    • Straight edges form along equilibrium lines between six points.

      • Think of voroni diagrams.

      • Even though the wavefront emerges as a circle, as it runs into other wavefronts it becomes, surprisingly, a line.

    • It's the lowest-energy configuration, since deviations cost more.

    • Works in any medium: wax, rock, or Saturn's winds.

  • Are you listening or are you waiting to talk?

    • If you’re listening you can find places to connect and work together

  • ZOPA: zone of possible agreement

    • If the ZOPA has no overlap, there is no possible agreement.

    • A lot of work in negotiation is finding the ZOPA.

  • A thing I realized about myself: the amount I want to talk is precisely related to how curious I think the other person is about whatI might have to say.

    • When the other person is curious to hear what I have to say, talking is one of my favorite things in the world.

    • Someone who has alpha energy (who acts like they outrank me, even if they don’t) and who is incurious is my kryptonite.

  • When strongmen are in charge, war is the default state.

  • A paradigm shift buds off from the current thing but is almost inherently separate from it.

    • It is not a smooth evolution of the past thing; it is inherently different, and it must be so.

    • It nucleates on the edge of the last paradigm but is entirely different.

    • Its backbone of logic is fundamentally different from the last paradigm.

  • An insightful comment on Hacker News about optimizing for serendipitous conversational insights.

    • "As Winston Churchill once said when asked ‘what are you doing’ –> ‘Oh just preparing my off-the-cuff remarks for tomorrow’

    • You cannot prepare for an 8 hour speaking engagement. Not really. But you can accumulate a plethora of anecdotes, metaphors, and remarks that you weave into the narrative or in response to questions.

    • You can build frameworks that are similar to code. Prepared functions/coroutines/objects that you run in appropriate situations.

    • The key is that things you say are new to the audience, but not to you."

    • The “new to the audience, but not you” is the magic trick.

  • Follow your fun.

    • When you’re having fun on a thing, you find it intrinsically enjoyable.

    • So you want to keep doing it for no other reason than itself.

    • If the thing you find fun also improves you or gives you a benefit in your life or career as a bonus, that’s great.

    • Many things are good for our life or career but aren’t enjoyable in the moment.

      • These are like eating your vegetables.

      • Important but a drag in the moment.

    • But things that are both fun and good for us are resonant, they are where the magic happens.

    • We keep with it even when it’s hard, and we get better and better.

  • Stories are a mark of a life well-lived

    • Stories are things that happened to you that other people might find interesting.

    • If you've lived a boring life you don't have any stories to share.

  • Schopenhauer: "Talent is hitting the target well. Genius is hitting a target no one else can see."

  • Shakespeare: “To thine own self be true … Thou canst not then be false to any man.”

  • George Bernhard Shaw: “Love is a gross exaggeration of the difference between one person and everybody else.”

  • Lao Tzu: “Being deeply loved by someone gives you strength, while loving someone deeply gives you courage.”


Reply all
Reply to author
Forward
0 new messages