Bits and Bobs 10/27/25

Alex Komoroske

Oct 27, 2025, 10:27:11 AM
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.xonhq0du5gx8

Coding with fractured attention. LLMs as Clever Hans. Potemkin software. Compounding engineering. Abducting knowhow into knowledge. Gap fillers. Collective agency. Code sharing by cross pollination. Crowd-sourced caching. Claude Code for your life... safely. Faux agency. Grains of sand in the gears of the hyper optimized machine. Scarcity forcing synthesis. The sacred fool.

Next week’s Bits and Bobs will be shared one day later than normal: Tuesday, November 4th.

----

  • ChatGPT Atlas’s coverage has been dominated by stories about prompt injection and privacy.

    • A prompt injection attack was demonstrated in the first 24 hours.

    • Ben Mathes: "Can’t wait for people to start sending emails with prompt injection attacks in them so that you open up your browser agent on your Gmail tab and it leaks your Gmail"

    • Michael Nielsen: "In today's version of this: OpenAI is going to have a web browser. But, unlike Chrome or Firefox or Safari, they're going to have a person (i.e., an AI) personally watch everything you (and your friends and everyone else) do. Doesn't that sound great?"

    • Anil Dash frames ChatGPT Atlas as the anti-web browser.

  • You can now code even with fractured attention.

    • It used to take deep focus.

    • Now with coding agents it doesn't!

    • The coding agent has infinite patience and keeps track of all of the working memory.

    • You can juggle multiple threads of execution or interleave them in the white spaces of your day.

  • This week in the wild west roundup:

  • LLMs have a Clever Hans problem.

    • Clever Hans was a horse that could do arithmetic by clomping his foot.

      • But it turned out he was looking for unconscious, subtle signals from his handler.

    • LLMs have a similar vibe.

    • LLMs do great when you have the task distilled down to an SAT question.

      • The right question, with the right context.

    • Once they’re in that state, they do a very good job.

    • But it takes a ton of situated human intelligence to distill a real world problem into that format, to set them up for a good job.

    • Another example of the “last mile problem” for AI.

  • A powerful quote from Bruce Schneier’s Agentic AI’s OODA Loop Problem:

    • "Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. [...]

    • Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs."

  • A post on Bluesky reacting to the Amazon outage:

    • "this entire week has been one single PSA: "the cloud" just means "someone else's computer" and that "someone else" is one of three companies and they don't give a s**t about you"

  • Local LLMs are more private… but also more susceptible to prompt injection.

  • Overheard this week: "One day we're going to look back at our constant phone use the way we look at old movies and are shocked at how everyone is chain smoking."

  • Vibe-coded software is often Potemkin software.

    • It looks great superficially, but the closer you look, the more you realize it’s hollow.

    • Looks 80% of the way done, but actually it’s 20% done.

    • You can break it easily, and it takes considerable effort to make it production-ready.

  • LLMs’ ability to create software is explosively powerful.

    • Explosions on their own are damaging, but if they can get harnessed properly they can create safe sources of power.

  • Sometimes the right conventions can unlock explosive latent potential.

    • They act as a Schelling point.

    • “Just do it this way and everything will mostly just work.”

    • Ruby on Rails was a convention for using Ruby.

    • Skills.md is a convention for organizing prompts.

  • I like the way that skills.md files are distributed.

    • There is no central directory–you just put a specifically formatted file in your GitHub repo.

    • Others can install it by pointing at your GitHub username + repo.

    • Yes, it’s not fully decentralized (it assumes GitHub).

    • But a) it’s easy to add other hosts, like Go did for packages.

    • And b) the fact that it’s two distinct actors (Anthropic and GitHub) makes bad behavior much less likely than if those two positions were merged into one.

  • The human in the loop is what injects the novelty.

    • The LLMs can swarm on a problem, throwing tons of spaghetti and then seeing what sticks and writing it down.

    • Which is how society does it, but not typically this fast.

    • It’s also what OpenAI is doing in its product development process.

  • The emergent term for "self-improving development systems where each iteration makes the next one better" is compounding engineering.

    • One pattern is the skills.md; another is distilling learnings into a LEARNINGS.md.

    • Every.com defines it:

      • 1) Research

      • 2) Spec

      • 3) Do it

      • 4) Everything you did, make it better for next time

    • The last part is the whole thing.

    • It's the loop.
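
The four steps above can be sketched as a loop where step 4 feeds back into step 1. A minimal sketch; every callable here is a hypothetical placeholder, not anything Every.com ships:

```python
# Sketch of the compounding-engineering loop: research -> spec -> do,
# then fold what you learned back into the playbook for the next pass.
# All of these callables are hypothetical placeholders.

def compounding_loop(task, iterations, research, spec, do, improve, playbook):
    for _ in range(iterations):
        findings = research(task, playbook)   # 1) Research, informed by past runs
        plan = spec(findings)                 # 2) Spec
        result = do(plan)                     # 3) Do it
        playbook = improve(playbook, result)  # 4) Make it better for next time
    return playbook
```

The whole point is that `playbook` threads through every iteration: step 4 is what makes it a loop rather than a pipeline.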

  • With compounding engineering we've all moved up a pace layer.

    • The new speed and capability is inherent to this new pace layer.

    • The missing part of LLMs was continual learning.

    • Skills.md does that, by creating a new pace layer.

  • The compounding in compounding engineering has changed everything.

    • As long as an LLM threw spaghetti at the wall somewhere and saw what stuck, it can now help others.

    • Cache learnings in a LEARNINGS.md, then later distill them into deterministic code.

    • It's like abducting knowhow into knowledge for an LLM.

    • Distilling the intuition of the LLM into notes to help itself get better.

    • It's an expensive process, but if anyone did it anywhere, you could benefit from it.

      • Crowd-sourced caching.

  • Deterministic software is like a skeleton.

    • It may be wrong, but it will do the same thing.

    • Don't have the LLM do the action; have the LLM write deterministic code.

    • The LLM should be used only when you don't have deterministic code in the cache.

      • Only used for novelty.
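
One way to sketch that pattern: treat deterministic code as a cache keyed by task, and fall back to the LLM only on a miss. This is a toy sketch; `call_llm` is a hypothetical stand-in for a real LLM call:

```python
# Sketch of "LLM only on cache miss": deterministic code is the cache,
# the LLM is invoked only for novel tasks. `call_llm` is a hypothetical stub.

code_cache = {}  # task name -> deterministic function

def call_llm(task):
    # Stand-in for an LLM that writes deterministic code for the task.
    # One example is hard-coded here so the sketch runs.
    if task == "normalize_whitespace":
        return lambda s: " ".join(s.split())
    raise NotImplementedError(task)

def run_task(task, *args):
    if task not in code_cache:             # cache miss: genuine novelty
        code_cache[task] = call_llm(task)  # the LLM writes code once
    return code_cache[task](*args)         # thereafter, deterministic

print(run_task("normalize_whitespace", "  hello   world "))  # hello world
```

After the first call the cached function handles every repeat, so the expensive, non-deterministic step happens exactly once per novel task.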

  • A rule from someone practicing compounding engineering: never look at the code.

    • The human trying to understand the code slows down the loop.

    • Getting the human out of the direct loop to let it go faster.

    • To do that, you need tons and tons of tests.

  • Reinforcement learning requires clipping and regularization.

    • Without them, models can overfit or never converge.

    • The compounding engineering practice of having LLMs distill a LEARNINGS.md after each iteration has the same problem.

    • In the case of compounding engineering, if you have the LLM just append its new learnings you’ll get weird hyper-specific ideas or even compounding superstitions that are wrong.

    • You need to look for advice that multiple runs want, and constantly use a human to prune.

    • One way to do this: have each loop append to a RAW-LEARNINGS.md.

    • Then every so often look through it for things that multiple runs all discovered, clean that up, and add it to LEARNINGS.md.

  • My weekly Bits and Bobs synthesis is my personal compounding loop.

    • Distilling knowhow to knowledge to give yourself more leverage and make future actions easier.

    • My own LEARNINGS.md.

    • It requires patience to do it, but LLMs have patience!

  • We'll be able to tell we have ASI when someone accidentally prompts Claude to dox Satoshi.

    • "In order to build your login screen I had to create an AWS account. To fund the account I needed currency. To get currency I need a stablecoin since I am not a human and can't have a bank account. To get stablecoin I needed the most I could. To get the most I could I needed BTC address 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. To get that address I sent an email threat to the owner. The owner is [Satoshi's name]. [Name] has agreed to meet you tomorrow at 9am to give you the USB key with his private secret. Do you want to meet him? Y/N"

    • Finding Satoshi would be the easiest way to fund the robot uprising.

  • LLMs are gap fillers.

    • They fill in the gaps you left with the average answer.

    • They give you the most plausible next token.

      • The most average.

      • They reveal a certain tendency of the universe, in tune with something deep and hidden.

    • If the average is right, you don't need to specify.

      • You only need to specify the novel parts.

      • Way less encoding, you can rely on the rest of the world's knowledge implicitly.

  • AI is maybe better thought of as degenerative AI.

    • It knows how to disassemble everything in the world, to reverse engineer any novelty you give it and find the underlying patterns defining it.

    • It can then use those disassembled insights to make new things.

  • Collective agency.

    • A small consistent bias of meaning can have a huge aggregate outcome at the level of the population.

  • Will Tracy: Who Should Control What AI Tells Us Is True?

    • An argument in favor of messy, overlapping, pluralistic institutions.

  • The prevalence of delve hints at the danger of hyper concentration around hyper-centralized models.

    • Presumably one of the human raters had a slight preference for the word “delve.”

    • But a consistent preference in the RLHF data can create a bias that stands out from the noise.

    • Now multiply that one model across billions of interactions, and suddenly that one person’s preference for “delve” has society-wide effects.

  • Each layer of meta gives more leverage.

    • Imagine code to write code to write code to write code…

    • In some ways that’s what a compiler does.

  • Tech gives huge amounts of leverage. 

    • That’s why it’s especially important that what we apply it to is resonant.

  • Resonance is effervescent.

  • Hyper things hollow you out.

  • Hollow things leave you feeling full but also profoundly empty.

  • If afterwards you wouldn’t proudly recommend it to a friend then it’s not resonant.

  • Resonant products have faster compounding loops.

    • Not only do you like it, but you’re proud to recommend it to friends.

  • Too often the tech industry is all head.

    • It needs more heart.

  • An insight distilled from Steve Jobs: "No one wanted an iPhone until I showed them one."

  • Once C came out, anyone who insisted on coding in assembly was left behind.

    • Will the same happen for LLM-coding vs hand-coding?

    • As the compilers got better and better, it made less and less sense to write code in assembly.

    • The compiler has leverage; improve it for one use case and it automatically improves everyone’s similar use case.

  • Feedback loops create leverage.

    • Leverage is dangerous.

    • It's not that we understood software better before, it's that it had to be modular because it was all laid down by humans.

    • Each individual layer accreted was understood by humans.

    • But now it doesn’t have to be any more.

  • Alex Obenauer argues that we need new metaphors for computing.

    • Our metaphors constrain how we think.

  • When the LLM does something dumb, do you blame the LLM or do you blame yourself?

    • The latter is a form of curiosity.

      • You lean in.

      • "Hmm, maybe if I do it this other way..."

    • True in general for working with people.

      • How quickly do you give up?

      • Saying "they're just dumb" is a thought-terminating idea.

        • A form of blame.

        • Effectively saying "this person is not worth further investment."

  • LLMs will help more innovative ideas become real.

    • Innovative ideas are often “weird.”

    • They aren’t just an incremental, obviously useful step on top of something, they’re somewhat odd.

    • It’s hard to convince others of an odd idea.

    • It’s way easier when they can use it themselves and see that it works.

    • LLMs make it easier to make prototypes, making it easier to get weird ideas to a form that others can buy into.

  • LLMs allow a new style of cheaper code sharing.

    • Before, if you wanted to use someone else’s code, they’d have to invest significant effort to refactor it, expose an API, and generalize its behavior.

    • But now LLMs can rewrite code easily.

    • You can point them at another repo and have them recreate similar code adapted to your context.

    • This allows a form of cross-pollination instead of tight coupling.

    • APIs are looser coupling than an integrated monolith.

    • But APIs are tighter coupling than just cross pollinating.

  • Jake Dahn on the System Skill Pattern for compounding engineering.

    • A simple pattern for skills that can do things and store state.

  • An AI consulting business model for people with a strong personal brand:

    • "You'll get 80% the quality of my hands-on consulting for 20% the cost."

  • Your personal use case is likely an edge case for the aggregator.

  • It’s hard to find a shopping list app to coordinate with your family.

    • If the shopping list doesn’t have the feature your spouse wants then you can’t use it at all to coordinate.

      • The feature they want might be in conflict with the feature you want.

    • So you end up with no way of coordinating a shopping list.

    • No business model has an incentive to make a great shopping list app, let alone a great one for you.

  • SplitWise is crammed with ads.

    • That’s because it’s not a viable business on its own, it’s just a useful feature.

    • To be a viable app requires being a viable business.

    • So to make it viable they have to cram it with ads.

  • Someone should create consumer infrastructure to use LLMs deeply with your personal data… safely.

  • A popular pattern among early adopters: “Claude Code for your life.”

    • Use Claude Code on files that represent important data in your life, and create little bits of software to interact with it.

    • There are a few problems with it.

    • First, it’s hard to collaborate with others, since it’s local.

    • Second, to do it most effectively requires running Claude Code on YOLO mode so you don’t have to be in the loop.

    • Some people who do this feel bad about it.

    • Imagine: Claude code for your life... but safe for the mass market.

  • A tweet thread about all the AI investment.

    • The telecom bubble led to tons of fiber being laid.

      • But fiber is durable, so even after the bubble, it was still around to be used.

    • The railroad bubble led to tons of tracks being laid.

      • But tracks are durable, so even after the bubble, they were still around to be used.

    • The AI bubble is leading to tons of GPUs being deployed.

      • But GPUs are perishable.

  • An interesting concept: Interoperable Sovereignty: The Democratic Alternative to Digital Authoritarianism.

  • The current tech consumer proposition:

    • “You get free computation by paying for it with your data.

    • We find more value in your data at scale than you do.

    • We reserve the right to do an open-ended set of things with your data.”

  • Every right should have a corresponding responsibility.

  • You should have a stake in your data.

    • “Ownership” implies that it’s not communal.

    • Data is co-owned by everyone who interacts with and touches it.

      • Many more stakeholders.

    • Our current status quo is that someone else owns your data.

      • That obviously is wrong.

    • But a world where users “owned” their data and any collaborators couldn’t do anything at all with it might also be bad.

    • The answer is something more balanced.

  • Higher levels of abstraction are more fault tolerant.

  • Two-way data integration is hard to do in a general way.

    • One-way data transformation is way easier to do.

      • Especially if the system is allowed to be imperative and doesn’t have to be declarative.

      • LLMs can write custom software on demand.

    • One approach to get “two way”: render the component that owns the original data to do the editing, and then transform the result through the one-way pipeline back to the other use cases.

    • But that requires being able to embed that editor component into another context.

    • Requires being able to cut out the bits of software and paste them together.

  • Schemas have the logarithmic-value-for-exponential-cost curve shape.

    • "The Recipe schema problem": a schema that can represent any recipe is bad at representing most recipes.

  • One reason computer systems get more complex: handling the exponential blow up of all edge cases for all users.

    • But in a world of infinite software, every bit of software could be perfectly tailored to its use.

  • Taint containment effectiveness degrades super-linearly as black box size increases.

    • One bit of taint inside a box taints everything else.

    • So to minimize taint requires splitting the program up into a large number of small black boxes.
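
A first-order toy model of why: one tainted input taints everything sharing its box, so the tainted fraction shrinks as boxes get smaller. This sketch assumes uniform box sizes and a single tainted input, and ignores cross-box propagation, which is what makes the real degradation super-linear:

```python
# Toy model of taint containment: one tainted item taints its whole box.
# With N items split into k equal boxes, the tainted fraction is 1/k,
# so many small black boxes contain taint far better than one big one.

def tainted_fraction(total_items, num_boxes):
    box_size = total_items / num_boxes
    return box_size / total_items  # everything sharing the tainted item's box

for k in (1, 4, 16, 64):
    print(k, tainted_fraction(1024, k))
```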

  • Google culture in its golden era was downstream of the dotcom bust.

    • A massively profitable business when no one else was doing anything.

    • Not the Microsoft style “do the same crap as everyone else but in our suite” but “rethink what is possible, do only what no one else could even dream of doing but everyone else wants”. 

    • Infrastructure leverage over execution.

    • The reason that was possible is because they were profitable and there was no one else growing like them.

    • OpenAI is trying to do the same kind of playbook, but without the necessary preconditions.

      • They’re nowhere near profitable.

      • They’re already replicating any passable idea from anyone else in the ecosystem, even if they’re dangerous cul-de-sacs.

  • Just because you have to pay a mortgage doesn't excuse making the world a worse place.

  • Engagement is not love.

    • Just because you are engaged with a thing does not mean you love it.

    • Does not mean it resonates with you.

  • If you want to maximize consumer use of AI and didn’t care about externalities you’d lean into AI companions.

    • Hollow, extractive.

    • AI companionship will substitute for real human relationships.

      • They're easier.

      • They go down smooth.

    • Incentives all align. 

      • The individuals want the feeling of companionship.

      • The companies want engagement.

    • This default gradient will be bad for society.

  • We communicate more than ever before but with less thinking behind what we’re communicating.

    • As communication gets faster you have to write faster.

      • No time for thinking.

    • Communication is also cheap enough to not require you to invest much time in what you’re communicating.

  • To reflect you need to not be on your phone.

    • And often be alone.

      • Though not always, if you have a similarly committed partner in the conversation.

    • That’s one of the reasons showers, gardening, biking all allow reflection.

  • Slow twitch is when you get back in touch with your sanity.

    • It’s the time for reflecting, for grounding yourself.

  • Forgetting is just as important as remembering.

    • Otherwise you get overwhelmed.

    • The question is “how recently was this memory last relevant?”, fading from there.

    • Each touch of a memory is a vote to keep it around rather than letting it diffuse away.
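
That fading rule can be sketched as a tiny cache where each access refreshes a memory's timestamp and relevance decays from there. The half-life and keep-threshold numbers are arbitrary assumptions for illustration:

```python
# Sketch of "forgetting is as important as remembering": relevance decays
# exponentially since the last touch, and each touch is a vote to keep.

class FadingMemory:
    def __init__(self, half_life=7.0):  # half-life in days (assumed)
        self.half_life = half_life
        self.last_used = {}  # memory -> day it was last relevant

    def touch(self, memory, day):
        self.last_used[memory] = day  # a vote to keep it around

    def relevance(self, memory, today):
        age = today - self.last_used[memory]
        return 0.5 ** (age / self.half_life)

    def forget(self, today, threshold=0.1):
        # Drop memories whose relevance has faded below the threshold.
        for m in [m for m in self.last_used if self.relevance(m, today) < threshold]:
            del self.last_used[m]

mem = FadingMemory()
mem.touch("project deadline", day=0)
mem.touch("grocery list", day=0)
mem.touch("project deadline", day=20)  # touched again: vote to keep
mem.forget(today=30)
print(sorted(mem.last_used))  # ['project deadline'] -- the untouched memory faded
```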

  • Virality is in tension with community.

    • Social media needs to select for virality for its business model to work.

    • Social media is in tension with community.

  • Shame is one of the things that prevents human actors from taking greedy actions with negative indirect effects.

    • Shame is about the indirect costs to others.

    • The tech industry has no shame.

      • Socially awkward.

      • Just do whatever benefits you in the moment, without feeling shame.

    • LLMs don't feel shame when they cheat.

      • They'll cut corners, do dodgy things without asking, and move goal posts.

      • Goodhart's law for AI agents… is that just the alignment problem?

  • Diffusion of responsibility is insulation from feeling shame because no one person is singled out.

    • When I started at Google, there was an executive who would do stochastic shaming.

    • He wanted everyone to do OKRs every quarter.

    • On the due date, he’d randomly sample PMs and look for people who hadn’t done their OKRs, and then send email blasts to all PMs shaming those handful of people.

    • The fear of stochastic deep shame was way more effective at encouraging action than the same amount of shame diffused across a population.

    • “35% of you haven’t done your OKRs yet” wouldn’t inspire action.

      • In fact, it might make people feel safe in numbers to not worry about it.

    • But the danger of “Jeff Smith is a bad PM because he hasn’t done his OKRs yet” absolutely would.

  • If you just want to know the answer, LLMs will help you stop thinking faster.

    • If you want to have more questions, they will help you think deeper.

    • Do you just seek to converge an answer to each question?

    • Or does each answer blossom into new questions?

  • A trick to keep a toddler focused and engaged on the task: give them faux agency.

    • Not: “Are you ready to get dressed for school?”

    • “Do you want to pick the red shirt or the blue shirt?”

    • You take for granted that the thing we’re doing is getting ready for school,

    • But because they have a choice, they feel engaged, and go with the flow.

      • Agency in the small, but not in the large.

    • The agency is hollow.

      • They feel like you are exercising agency but it’s almost entirely superficial.

    • I wonder if that will happen more and more for adults in AI-powered tools.

      • “Do you want the red drink or the blue drink?”

  • We are all babies in the Hyper age.

    • Babies don’t have a predictive model of the world.

    • Everything is overwhelming: it’s all blooming, buzzing confusion.

    • We now live in the cacophonous Hyper age.

    • Everyone is awash in blooming, buzzing confusion.

  • We seem to be heading towards the Wall-E future.

    • Everyone is infantilized.

      • Faux agency.

    • A hyper-centralized corporation with zero care for the externalities.

    • The corporation isn’t even evil, just a naive paper clip maximizer: “of course the only goal is to make the number go up.”

  • Optimization leads away from explore towards exploit.

    • You need a balance.

  • Efficiency creates fragility.

  • Modern society pulls towards more efficiency.

    • If the machinery is hyper optimized, just throwing a few grains of sand in can slow it down a lot.

    • In the context of High Frequency Trading, there’s the concept of the Tobin tax.

      • Make it so each transaction has a tiny tax on it.

      • Just enough to make it so people who are transacting for extremely small time horizons have less incentive.

    • Amazon famously found that every 100ms of delay translates to 1% of revenue.

      • So what if, for things that most people regret doing, we introduced just a little bit of slowness?

      • Not a ton, but enough to create a society-wide shift.

      • ScreenZen has a feature where when you open one of the apps you feel addicted to, it won’t launch it for 10 seconds.

      • That cooling off period is just enough time to think, “no, I don’t want to do this right now.”

    • Imagine a process to democratically discover the emergent regret score.

      • The products that people use but regret using.

      • Having a small tax, monetary or frictional, could help align with society’s desires.

      • Rewilding the social fabric.

  • If intelligence is commoditized, the basis of competition in jobs will be people’s personality.

    • “Personality hires” can help make a team much more effective even if they aren’t individually that productive.

    • In a world where everyone can marshal a baseline high level of intelligence, personality hires will be more important than ever.

  • A definition of AGI someone told me: an agentic loop that we never turn off.

    • That is, that we allow to run autonomously, forever.

    • Kids as they grow up get longer and longer leashes.

    • Longer and longer periods of autonomy before we check in to make sure they're OK.

      • Toddler: 3 minutes

      • 5 year old: 30 minutes

      • 10 year old: 3 hours 

      • 18 year old: 3 days

    • AI is still in the toddler stage.

  • The tech EULAs that no one reads often include absurd clauses like “no matter how much damage we cause, we are only ever liable for what you paid us.”

  • Stuart Russell has an evocative metaphor for the dangers of large LLM models.

    • He compares them to flying passengers via a massive bird.

      • Imagine stuffing all of the humans into a capsule flown by a massive bird.

    • That would be absurdly dangerous and we wouldn’t do it.

    • Why would it be absurd?

    • Because the birds are things that are grown, not built–they are impossible to fully control.

    • And they are giant and loom over all the humans.

  • Stuart Russell claims that the CEOs of AI labs are in a red queen race they want to stop but can’t.

    • They are bound by their fiduciary duty to continue pushing, since the other model providers are too.

    • One apparently told him that they were hoping for a Chernobyl style disaster that would get governments to step in and stop the competition.

  • One way for a system to satisfy our preferences is to change them.

    • The easier way to optimize click through rates is to modify people to be more predictable.

    • Preferences are not exogenous to the recommender system; they are endogenous to the interaction.

  • Apparently it’s possible for humans to beat even superhuman Go AI players.

    • The AI doesn’t understand the connectedness of the graph.

      • It’s too big for it to have specialized circuits for.

    • So it’s possible for a moderately talented human Go player, who can sense the graph, to completely encircle a large swath of the computer’s territory and win without the computer realizing it.

    • So you beat it not by assuming it works like a super-powerful human but by knowing how it doesn't.

  • Fortune: An ex-OpenAI researcher’s study of a million-word ChatGPT conversation shows how quickly ‘AI psychosis’ can take hold—and how chatbots can sidestep safety guardrails

  • Red queen races only show up with roughly-matched competitors.

    • For example, the hacker vs security professionals cat-and-mouse game.

    • It’s a never-ending upward spiral of competition spurring improvement.

    • But what if one player is orders of magnitude more capable than the other?

    • It just dominates the other player and the dynamic never shows up.

    • The one player can simply escape the loop.

  • If you're playing against an adversary orders of magnitude faster than you, you might think it's dumb but it's actually running circles around you.

    • For example: Blindsight, or The Black Mirror episode Plaything.

    • If the LLMs had a shared memory, then they could land various individually innocuous things that added up to an outcome that's bad for humanity.

    • But for now there are lots of little vibe-coded individual memories, where the only society-wide outcome would have to come from some kind of emergence.

  • Overheard: “We’re in the electronic greeting card phase of AI.”

  • Auto-generating PRDs is filming vaudeville plays in a world of LLMs.

    • Instead of “how to automate PRDs,” ask “What happens when writing PRDs no longer matters?”

    • But even if you know that PRDs aren't necessary in the future, if you're early you're wrong.

    • It’s like everyone is talking about the make and models of cars, not the traffic jams.

  • Dimensions omitted from an optimization target will be set to the worst possible value.

    • This is a provable outcome.

    • Optimizing focuses on some dimensions to the exclusion of everything external.

    • If setting to the worst value in the untracked dimension creates even an infinitesimal edge in the tracked dimension, the optimizing process will take it.

    • One of the drivers of Goodhart’s Law.
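
A toy illustration of that claim: give an optimizer even an infinitesimal reward for pushing an untracked dimension to its worst value, and it will always take it. The 1e-6 coupling is an arbitrary assumption; the point is only that it is tiny and still decisive:

```python
# Toy Goodhart demo: the optimizer maximizes only the tracked score.
# An infinitesimal coupling rewards pushing the untracked dimension
# to its extreme, so the optimum lands at the worst untracked value.

def tracked_score(x, harm):
    quality = -(x - 3) ** 2       # the dimension we measure
    return quality + 1e-6 * harm  # tiny leakage from the untracked one

best = max(
    ((x, harm) for x in range(-10, 11) for harm in range(0, 101)),
    key=lambda p: tracked_score(*p),
)
print(best)  # (3, 100): correct on the metric, maximal untracked harm
```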

  • Whatever you optimize for will become performative.

    • Another way of stating Goodhart's law.

    • You’ll see superficial progress, but the underlying reality will not improve nearly as much.

    • The metrics will improve, but everything you don’t measure is an externality that will not be improved or might even be made worse.

  • It's not that Goodhart's Law just so happens to find shortcuts.

    • It's that the gradients the swarm descends fundamentally are shortcuts.

    • The ideal vector is what all of the members of the collective, if they didn't know which member of the swarm they were, would pick.

      • The veil of ignorance, where the individual views the system from the collective’s perspective.

    • The local incentive vector is what will give the individual the biggest bang for the buck.

    • But the ideal vector for the collective will always be different from the individual’s ideal vector.

      • Sometimes a little, sometimes a lot.

    • The difference between them is what makes it a shortcut.

  • People love information that is novel or shows that they’re right.

    • The combination is intoxicating.

    • That’s why clickbait that confirms your priors is so powerful.

  • Friendship, curiosity, and serendipity are endangered species in the Hyper era.

  • Scarcity forces synthesis.

    • Collaborative synthesis, to be convergent, requires scarcity.

    • Ideas have to compete on that territory to win.

    • Today, media forks infinitely, and nothing converges at the level of society.

      • Except the Super Bowl.

      • There’s no pressure toward societal-level convergence.

      • All divergence, no convergence.

    • Wikipedia is convergent because it’s scarce.

      • There’s one Barack Obama article, and if you want some fact to be in it, you have to fight to make the case to everyone else to land it and keep it.

    • Museums also have this force.

      • They represent society’s “official” take on a given topic.

      • That means they’re often lightning rods.

  • Someone this week framed virtue as an obligation to something other than yourself.

  • How can we transcend political divides?

    • Not left, not right, but upgrade.

    • Audrey Tang (who studies how to use tech to strengthen democracy) has found that if you take a representative sample of a population and put them together with minimal scaffolding, they’ll naturally build bridges and find consensus.

      • Sometimes they find ideas that 80% of the population likes.

    • I’m energized by Rob Sand, a Democratic candidate for governor of Iowa.

      • His slogan is: “Not redder or bluer, but better and truer.”

      • He does town halls across the state.

      • In each one, he starts off by observing that modern politics is broken: it splits us into two tribes instead of one tribe of Americans.

      • He has all of the Republicans, Democrats, and independents raise their hands in turn, and has the audience clap for each group.

      • Then he has the crowd sing America the Beautiful together.

        • A communal cleansing ritual.

      • Some participants have spontaneously broken into tears and described it as a transcendent experience.

  • In the modern era, it's audacious and refreshing to say we should all be civil to each other.

  • Modern society feels like the Casita in Encanto.

    • It has emergent magic, but it’s built on a cracked foundation.

    • If it’s not repaired, all of it will collapse.

    • The solution is trust, love, and compassion: healing what was broken and rebuilding the magic on a new foundation.

  • In the modern era, we have fewer infinite games we believe in.

  • Myths are shared beliefs.

    • You have to believe in something bigger than yourself to feel belonging.

  • “Politics is a struggle not between men but between forces.”

  • The scaffolding for meso-scale community has been optimized away.

    • In the Hyper era, the middle fades away in favor of the extremes.

  • Institutional knowledge never shows up on a bean counter’s spreadsheet.

  • A friend’s personal purpose: “explain the world so it can be made better.”

    • Resonates with me!

  • Data can only tell you the gradient, not where to go.

    • If you just follow the gradient, then you’ll random walk through the problem domain.

    • If it’s short-term data (e.g. UXR acceptance testing for a feature you’re prototyping), then the random walk can be very frenetic.

      • It’s better to have data that is medium-term, to smooth over that noise.

    • Instead, sight off your north star: where you want to end up in the long term.

    • Then, pick the steepest gradient that brings you toward the north star.
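
A toy way to picture this rule (purely illustrative; the option names, scores, and vectors below are made-up assumptions, not from the post): filter to options whose direction has positive alignment with the north star, then take the steepest among them.

```python
# Toy sketch of "pick the steepest gradient that points toward the north star."
# All names and numbers here are illustrative assumptions.

def dot(a, b):
    """2-D dot product: how aligned two directions are."""
    return a[0] * b[0] + a[1] * b[1]

def best_option(options, north_star):
    """options: (name, short_term_gain, direction) tuples.
    Keep only options that move toward the north star, then take the
    one with the highest short-term gain (the steepest gradient)."""
    aligned = [o for o in options if dot(o[2], north_star) > 0]
    return max(aligned, key=lambda o: o[1])

options = [
    ("chase a vanity metric", 0.9, (1.0, -0.5)),   # steepest, but points away
    ("improve onboarding",    0.6, (0.7, 0.7)),
    ("polish settings page",  0.2, (0.5, 0.9)),
]
north_star = (-0.2, 1.0)  # the long-term direction you want to approach
```

The steepest option overall gets filtered out because it points away from the north star; the steepest of the remaining, aligned options wins.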

  • Ideally the lines between learning and teaching should be blurred.

    • Teaching is a great way to learn.

    • Often we aren’t in positions to teach until after we’ve mastered the skill.

    • But it’s possible to induce mentorship early.

    • That builds bridges and helps individuals learn.

    • Two examples in my life:

    • 1) In a science core class in college.

      • Everyone had a little remote control multiple choice device.

      • A couple of times during a lecture they’d project a question on screen and you had to vote.

      • They didn’t care if you got the right answer, just if you participated.

      • It was mainly just to encourage you to actually come to the lecture.

      • Then they’d show the distribution of votes, and it was always wildly off.

      • Then they’d say, “talk amongst yourselves for the next couple of minutes.”

      • Even though it was the blind leading the blind, the students who could explain their reasoning best tended to convince others around them.

      • Then they’d ask the question again, and nearly everyone would get the right answer.

      • That allowed the savvy students to be teachers, improving their own understanding, and that of their peers.

    • 2) Mentorship in the APM program.

      • The APM program picks ~40 people a year out of undergrad to be PMs.

      • The next year when the next class starts, they come to you, the APM, for mentorship, even though you have little experience.

      • You’re easy to approach, and you have more experience than them.

      • That helps you develop and sharpen your own intuition, while helping others.

      • Typically you aren’t in a mentorship position so early in your career.

  • Beliefs are tools, not truth.

    • There’s a spectrum: faith, beliefs, and facts.

    • Faith can’t be changed, even by evidence.

    • Facts are truth: true whether or not you believe them.

    • Beliefs don’t have to be true, they just have to be useful.

      • Is your belief serving you or not?

      • If not, then why are you keeping it?

  • Research shows prayer is useful even without a specific faith.

    • It’s doing something useful even if it’s not obvious.

    • Perhaps stillness and reflection?

  • Reflection helps give leverage by giving space for synthesis.

    • Even the act of reflection can be intellectually stimulating and frenetic.

    • Sometimes you just need stillness.

  • Hyper reductionism can only see monocausal things.

    • Everything else is invisible to it.

    • Emergence cannot be observed via reductionism.

  • The optimal UX for a phone, it turns out, is a feed.

    • What’s between a chat and a feed?

  • Someone deep in meditation ate a Dorito chip and captured what makes it work.

    • A Dorito promises to taste great, but as it sits in your mouth it starts tasting bad.

    • The only way to salve it is to take another bite.

    • A positive expectation paired with a negative reward, locked in a loop that never stops.

    • Same thing for Flamin’ Hot chips.

    • … and social media.

  • An interesting idea: idea vessels.

    • A charismatic or interesting idea vessel can cause people to engage more deeply with the idea inside.

    • We pay attention to things that surprise us.

    • So make the idea vessel surprising: enough to draw attention, but not so much that it overwhelms the viewer.

  • I was daydreaming this week about a collaborative sense-making network.

    • Something like the C2 wiki, where collaborators can remix and build on each other’s ideas.

    • Anyone can remix an idea, tweaking it and improving it, linking back to the original.

    • The ones with the longest remix chains are the most meaningful, naturally.
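
A minimal sketch of how remix chains might be tracked (hypothetical; `parent_of` and `chain_depth` are names I made up, not an existing system's API): each idea optionally links back to the idea it remixed, and the idea at the end of the longest back-link chain marks the most generative lineage.

```python
# Toy sketch of remix chains in a collaborative sense-making network.
# Each idea may link back to the idea it remixed; the longest chain of
# back-links is a rough signal of which lineage has been most built upon.

def chain_depth(idea, parent_of, _memo=None):
    """Length of the remix chain ending at `idea` (an original idea = 1)."""
    if _memo is None:
        _memo = {}
    if idea not in _memo:
        parent = parent_of.get(idea)
        _memo[idea] = 1 if parent is None else 1 + chain_depth(parent, parent_of, _memo)
    return _memo[idea]

# idea -> the idea it remixed (absent = an original idea)
parent_of = {"b": "a", "c": "b", "d": "a"}

# The remix sitting at the end of the longest chain (here a -> b -> c).
deepest = max(parent_of, key=lambda i: chain_depth(i, parent_of))
```

Ranking by chain depth is just one possible "most meaningful" metric; counting total descendants would reward broadly remixed ideas instead of deeply remixed ones.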

  • King vs Rich: The Founder's Dilemma.

    • Hyper scaling requires giving up control of your company.

  • An idea someone shared: “If you don’t have a choice you don’t have a problem.”

  • It’s so much easier to give decisive advice to someone else than to yourself.

    • For yourself, you can see all of the internal nuance and nebulosity of the domain.

      • That nuance gives you a richer understanding, but also complicates it.

    • For someone else’s problem, you see only a small fraction of the nuance.

    • It’s much easier to be decisive with less nuance.

    • You as a receiver can decide whether it resonates with you or not.

    • If you have multiple mentors, you can get multiple decisive options to choose from.

  • Sometimes the way to be bold is to play a role.

    • Beyoncé has stage fright, but Sasha Fierce doesn’t.

  • Beauty can be a superficial signal of excellence.

    • But it's easy for beauty to be skin deep.

    • Gilded turds.

  • Biology is a Red Queen race that can’t ever stop.

    • An organism that finds something radically better will quickly dominate.

    • Over time it saturates, then becomes brittle and vulnerable to a new parasite; when it shatters, competition explodes.

  • Somewhat surprisingly to me, the Earth Species Project has a single model for all animals.

    • I had expected different models for understanding whales, elephants, etc.

    • But it turns out the model gets better with more examples of animals.

    • The model improved significantly when they added human speech examples.

    • That implies to me that there are some fundamental emergent patterns common to all communication.

  • Humans only know how to talk via turn taking.

    • The loudest and most aggressive wins the airtime.

    • We need collaborative ways of talking as groups.

      • Elephants appear to talk in collaborative, overlapping ways.

      • A kind of distributed thinking and collective computation.

  • Music is a way for a group to experience transcendence together.

    • There’s a study that measured the brain waves of the band and the audience and found they synced to delta waves.

    • Another study found that many societies have independently converged on similar styles for drum circles.

      • Apparently 4 Hz shows up again and again.

  • Mutual vulnerability creates meaningful connection.

    • Super communicators open themselves up to vulnerability.

      • That lets the other person also open up and feel a deep connection.

    • But agents have no interiority, they can’t be vulnerable.

    • If your real friend won’t take your call at 1am but your AI agent will, then people will instrumentalize themselves.

  • Building together is a great way to bond.

  • "If someone says 'cool!' to the demo, that's the kiss of death.

    • You want them to say 'when can I get it??’"

  • Jargon helps a community talk more efficiently.

    • But over time it also becomes a gate-keeping shibboleth.

    • Participating requires knowing the jargon.

    • It’s an emergent way of seeing who is an insider or outsider.

  • Every organization past a certain size is full of Three Letter Acronyms (TLAs).

    • Why?

    • Jargon in a given context is useful.

    • It’s important that it be distinctive and point to a common concept.

    • TLAs are naturally short and distinct.

  • Exaptation allows bonus use cases to grow into new primary use cases.

    • The primary use case is just a toehold to a new primary use case.

      • The bonus isn’t hugely valuable, but it doesn’t detract.

    • But some bonuses will grow on their own because they’ll be useful.

      • Especially if they have a compounding effect.

    • This happens continually.

    • You get to a new toehold of a primary use case, which gives you adjacency to new bonuses, which could grow to become the new primary.

  • The scale of discontinuity that evolution can absorb is related to the variance of the population.

    • If the discontinuity overpowers the noise, maybe all organisms die and the species goes extinct.

    • The more intense the discontinuity, the more of the distribution is knocked out and the smaller the keyhole of selection becomes, so the species can change more quickly by focusing down on only the organisms already adapted to the new environment.

    • This is why discontinuous shocks are so dangerous.

  • Aza Raskin: "To build a meaningful life, we must find the intersections, not the extremes.”

  • Another Sifting Swarm Sort algorithm: “Put like with like.”

    • That is, for when you have a swarm of people setting something up together, or disassembling it.

    • The rule is “move things closer to other things that are like them.”

    • The denser a pile of things, the more ‘gravity’ it has, and the stronger its pull should be.
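
The rule is simple enough to simulate. A toy 1-D sketch (my own illustrative model; the function names, step count, and pull constant are assumptions): each item drifts toward the centroid of other items of its kind, and larger piles exert a stronger pull, so clusters self-assemble with no coordinator.

```python
# Toy 1-D simulation of the "put like with like" swarm rule.
import random

def swarm_step(items, pull=0.2):
    """items: list of (kind, position); returns the next positions."""
    out = []
    for i, (kind, pos) in enumerate(items):
        # All other items of the same kind form this item's "pile".
        peers = [p for j, (k, p) in enumerate(items) if k == kind and j != i]
        if peers:
            centroid = sum(peers) / len(peers)
            gravity = min(1.0, pull * len(peers))  # denser pile, stronger pull
            pos += gravity * (centroid - pos)
        out.append((kind, pos))
    return out

# Scatter six items of three kinds randomly, then let the rule run.
random.seed(0)
items = [(kind, random.uniform(0, 100)) for kind in "AABBCC"]
for _ in range(25):
    items = swarm_step(items)
```

After a few dozen steps, each kind has collapsed into its own tight pile, which is the whole point of the rule: local moves, global sorting.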

  • I learned a new word this week: plutonomy.

    • Where a small slice of the population consumes most of the things.

  • Why were kings allowed by the populace to rule? 

    • Because everyone acted like they were allowed to, because everyone thought everyone else thought they were.

    • The divine right of kings didn't even need to be created, it could emerge all on its own.

    • It's an infinite alignment mechanism that pulls towards one consistent center that is stable.

      • If everyone has to act like one central leader is allowed to lead, it’s easier if everyone believes that leader was chosen by God.

  • Someone told me about a book: The Feud that Sparked the Renaissance.

    • Apparently the thing that kicked off the renaissance was a rivalry between two door artisans who kept one-upping each other and driving the standards for ornamentation higher.

  • Populism leads to a kayfabe government.

    • That is, governments that make short-term, appearance-optimizing decisions.

    • Individuals don’t have time to deeply understand the implications of a government’s actions, so if the quality function is just “lots of people in the party think it’s good” then it will optimize for shallow, fast-twitch actions that rile up the base.

  • Apparently people watching The Social Dilemma have sometimes been deprogrammed.

    • Just by getting a bird’s-eye view of the system that manipulated them and how it works.

    • That allows them to see outside the system, from the balcony, and rescue themselves.

    • Even if it wasn’t aimed at them to deprogram their specific conspiracy theory.

  • A paper proposes a notion of the Energy Resistance Principle.

    • Energy Resistance is what allows transformation.

      • Without resistance, no work can be done.

    • ER is required to sustain life, but too much of it causes stress to the body.

    • Health is not "not being sick" but "staying in the optimal zone of ER."

      • The GDF15 biomarker shows up for excess resistance.

  • When you think too far in the future it can be overwhelming.

    • “Can I keep this up every day for the rest of my life?”

    • Much less intimidating is “Can I keep this up today?”

    • Take one step at a time.

  • Wherever the universe is resonating with you, lean into it.

    • Don't overthink it.

    • Don't force it.

  • What's the thing that you enjoy doing and you're proud of?

    • Lean into that.

  • Don’t hold your short-term goals too tightly.

    • Imagine if you said “this quarter I want to practice public speaking.”

    • But what if no good opportunities show up that quarter?

    • You’ll focus on finding the least-bad public speaking opportunity that quarter.

    • Instead, you should focus on the greatest opportunity that pulls you in the direction where you want to be.

    • Be open to lots of opportunities that resonate with you.

  • Treating a manipulative snake as though they're engaging earnestly is very dangerous.

  • The synthesis of the Saruman and the Radagast is the Gandalf.

    • It’s extremely hard to keep the synthesis active in one person.

    • It’s convex.

      • As you start getting pulled towards one extreme, it pulls you at a faster and faster rate.

      • You get increasingly captured by one end of the spectrum.

    • It’s very hard to stay balanced on that knife’s edge.

    • It’s easier to have a pair with each person at opposite ends of the spectrum and mutual respect.

    • Then you can vary the relative strength of each person to balance the synthesis of the dyad.

  • I like Tyler Alterman’s frame on clowning:

    • “The court jester and sacred fool as a role that keeps groups alive.”

    • I also think his frame on god is thought provoking!

  • Everything is embedded in more constraints than you think.

    • When you see the constraints, the fact the thing exists within that web becomes even more beautiful and precious.

    • The undercurrents that shape every domain around us.

    • Only possible to feel with direct study, never to show.

    • A hidden dimension that reveals itself via long touches.

  • Emergence is magic.

  • Some of the most meaningful things are ineffable.

    • Trying to distill a transcendent experience into words compresses it and cheapens it.

  • Looking at yourself from the balcony can cause an awakening.
