Bits and Bobs 10/13/25


Alex Komoroske

Oct 13, 2025, 10:31:48 AM
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.76ltqcrnnjts .

ChatGPT is not like Windows.  Canned corn from the convenience store. LEARNINGS.md for compounding agent performance. LLMs as meta-boundary objects. Do-think vs do-do.  Seeing Like a Language Model. The Hyper Era. Self-guaranteeing promises. Data as not oil but sand. Wide tech. "How to Citizen". Sousveillance.

I overflowed the Google Doc again, so we're now on the third Google Doc!

----

  • Anthropic showed how a very small number of samples in training can poison even large models’ outputs.

    • This isn’t a surprise to me.

    • Even a small bias, if it’s consistent, stands out from noise.

    • That’s true any time your system assumes, implicitly, that all of the samples are independent.

    • An attacker who can coordinate multiple samples can have an outsize impact on the signal.

    • The very same phenomenon is why Googlebombing was a thing… A small amount of coordination can have a significant impact on the overall output.
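
    • A minimal sketch (mine, not from the paper) of this signal-vs-noise point, assuming the system effectively averages samples it treats as independent; the numbers are illustrative only:

      import random
      random.seed(0)

      N = 100_000   # "honest" samples: independent noise centered on zero
      K = 250       # coordinated samples: each pushes +1 in the same direction

      honest = [random.gauss(0, 1) for _ in range(N)]
      coordinated = [1.0] * K

      signal = sum(coordinated)   # a consistent bias adds up linearly in K
      noise = abs(sum(honest))    # independent noise only grows like sqrt(N)

      print(f"coordinated contribution: {signal:.0f}")
      print(f"honest-noise contribution this draw: {noise:.0f} "
            f"(typical scale ~ sqrt(N) = {N ** 0.5:.0f})")

    • With these numbers, the 0.25% of samples that are coordinated contribute on the same order as the entire aggregate of honest noise, which is the sense in which a small coordinated minority can dominate the signal.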

  • An excellent piece from Paul Kedrosky about LLMs shifting the baseline of ability.

    • “Extraordinary outcomes can (seemingly) disappear when the baseline rises because everyone is better.”

    • A distillation of the piece from a friend: "ambition becomes the new differentiation in a world where everyone can make a passable attempt at any task."

  • This week in the wild west roundup:

  • A danger of prompt injection: it could create extremely convincing ads that get the user to do something not in their interest.

    • Even if the system isn’t tricked, the user could easily be.

  • I disagree with Stratechery’s take that OpenAI has a Windows Play.

    • An open system can sit on top of a proprietary system, as long as the proprietary system can't cut off whoever builds on top.

      • Open access to APIs, no control over distribution.

    • But an aggregator is entirely in control of the ecosystem.

    • It cannot be an open system if an aggregator controls the final pixels all users see.

    • The test: can 3Ps get started without the platform owner’s permission?

    • It’s a fundamentally different kind of thing.

    • So ChatGPT is not like Windows, it's like Facebook, but turbocharged.

    • Because even on Facebook, it’s only valuable if other people and other content are on it.

    • ChatGPT is 99.999% of the value of the interaction.

    • Note that using OpenAI’s APIs to create an experience is much closer to Windows.

      • Yes, they could cut you off from API access, but there are a number of similar-quality and -price alternatives that you can drop in.

    • OpenAI’s main business is not the API, it’s ChatGPT.

      • We shouldn’t get the two confused.

  • Apps in ChatGPT are underwhelming.

    • It’s exactly what you’d design and implement if you said “given we have a popular chatbot, implement the aggregator playbook.”

    • Very similar API design to what existed in Google Assistant.

      • Even the same early adopter partners!

      • Those integrations got tiny amounts of usage.

      • They demo well enough to give partners the internal credibility to authorize shipping an integration, but no one uses them.

    • The interaction is just a poor fit for chat.

    • Every time you add a new message in the chat, the app UI scrolls up and off screen.

    • The apps feel like little bits of fruit embedded in a constantly-scrolling fruitcake.

    • A silly interaction.

    • This is not malleable software, it’s little bits of software embedded in a gelatinous mass.

    • Living inside a chatbot is not a great idea for anyone other than the chatbot creator.

  • Software today is like eating canned corn from the convenience store.

    • What if it could be more like a personal farmer's market?

  • Imagine software that doesn't feel like visiting the DMV, but like working with a master craftsperson who knows exactly what you need.

    • Software that helps you live aligned with your values, not manipulated toward someone else's quarterly earnings target.

  • A key approach for agent learning: accumulating insights in a LEARNINGS.md.

    • After it completes a task, have it distill the insights it gained that would have made this run easier.

    • This makes the next run faster and higher quality.

    • This is where the feedback loop closes and becomes a meta, compounding loop.

    • That LEARNINGS.md is a file that a human can help curate and steer.

    • That’s what gives you compounding quality even with a fixed model.
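
    • A minimal sketch of this loop in Python; run_agent below is a placeholder for whatever model or agent call you already use, and the file name and prompt wording are illustrative, not a prescribed format:

      from pathlib import Path

      LEARNINGS = Path("LEARNINGS.md")

      def run_agent(prompt: str) -> str:
          """Placeholder for your actual agent/LLM call."""
          raise NotImplementedError

      def run_task_with_learnings(task: str) -> str:
          learnings = LEARNINGS.read_text() if LEARNINGS.exists() else ""

          # 1. Do the task, with prior learnings injected into the context.
          result = run_agent(
              f"Prior learnings from earlier runs:\n{learnings}\n\nTask:\n{task}"
          )

          # 2. Reflect: distill the insights that would have made this run easier.
          new_learnings = run_agent(
              "Given the task and result below, list the durable insights that "
              "would have made this run faster or higher quality. Be terse.\n\n"
              f"Task:\n{task}\n\nResult:\n{result}"
          )

          # 3. Append to LEARNINGS.md, where a human can curate and steer it.
          with LEARNINGS.open("a") as f:
              f.write(f"\n## Learnings from: {task[:60]}\n{new_learnings}\n")

          return result

    • The file-on-disk design is deliberate: it keeps the human curation step trivial, and that curation is what keeps the loop compounding instead of accumulating noise.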

  • Specs are the extended mind thesis applied to agents.

    • If you can externalize aspects of your cognition then by manipulating the external thing you can ‘think’ better.

    • True for humans, but also for models that are fixed and need an off-board place to store insights.

  • Claude often says "You're absolutely right.”

    • But it rarely says "you're absolutely wrong."

  • ChatGPT Commerce shines when finding candidates is easy but verifying they match your goals is tedious.

    • LLMs have infinite patience and OK judgment.

    • Two concrete examples:

      • Finding a furniture cover for garden furniture, where there are lots of SKUs in different shapes and it’s hard to search by shape.

      • Finding a flower shop halfway between me and my parent’s house.

  • LLMs are meta-boundary objects.

    • Boundary objects are things that can be understood by different specialties.

    • They allow communicating between two types of expertise that otherwise struggle to talk.

      • For example, between artists and scientists.

    • LLMs are universal translators: they can translate jargon from any domain to any other.

  • A positive take on Sora: "Sora feels like enabling everyone to be a TikTok creator."

    • TikTok and Reels are all about addictive consumption.

      • After you’re done using it, you’re left feeling regret.

    • Instagram at the beginning made everyone feel like creators.

      • But then they climbed up the engagement ladder into a TikTok clone.

      • That left a hole at the bottom of the market.

  • When ChatGPT Turns Informant

    • “The largely overlooked privacy risks of using AI apps that not only remember your conversations, but are capable of using these to reveal your deepest secrets to others”

    • It can reveal those secrets intentionally, or unintentionally.

    • It’s important that the system that knows deep things about you is responsible with your data.

      • That means that it doesn’t unintentionally reveal it to a hacker or to someone looking over your shoulder.

      • Also, that it uses your data in ways that wouldn’t be surprising to you and are aligned with your interests.

  • Do-think is better than do-do.

    • Looping on do-think allows reflection, growth, compounding clarity.

    • Looping on do-do makes a stinky mess.

  • Sprinting into the abyss.

    • All doing, no thinking.

    • That’s the modern tech industry.

    • "Aren't we heading into the abyss?"

    • "Yep, but look how boldly we're sprinting!"

  • Avoid the Copilot Pause

    • When you interact with agents, they do work and then ask for your judgment.

    • If there’s one agent, either the human or the agent is blocking on the other.

    • The human’s time is valuable; the agents have infinite patience.

    • This article is about having a swarm of agents, so one is always waiting for you and the human is never blocked.

  • Excellent piece from Dan Shipper: Seeing Like a Language Model

    • The Bitter Lesson is that emergence beats reductionism at scale.

    • Reductionist approaches have been beaten out by emergence.

  • When vibecoding a code base without code review, remember that the model is reward hacking.

    • Doing things that just so happen to work, but for fragile reasons, by brute force.

    • They have infinite patience to throw a lot of spaghetti at the wall.

    • What sticks might happen to look beautiful after enough time, but it's just spaghetti on the wall.

  • A frame for a class of AI-generated content: para-content.

    • Original thought or reporting is expensive.

    • Contextualization is easy content to produce.

    • Humans like narratives, so a lot of contextualization will be narratives applied to situations.

    • A whole bunch of shallow analysis, very little novel reporting.

    • Drowning in a sea of shallow takes.

  • A few friends and I wrote the Resonant Computing Manifesto.

    • Hollow things leave you feeling regret.

    • Resonant things leave you feeling nourished.

    • AI has the potential to supercharge tech, so it’s more important than ever before for us to know the difference.

  • We’re in the Hyper Era.

    • Everything is “too much.”

      • Business, tech, politics.

    • A late-stage phenomenon.

      • Short-term to the point of self-destruction.

      • Tearing themselves apart.

    • Just “number go up,” nothing about the qualitative externalities.

    • The Hyper Era is frenetic, cacophonous, overwhelming.

    • The Hyper Era is hollow.

  • We arrived at The Hyper Era by following the guidance of technocrats.

    • No taste, no curation, no judgement, no principles, no resonance.

    • Just a/b tests and following the data wherever it leads, even off a cliff.

    • "Do whatever it takes to make the number go up"

    • “Our data shows that people love doomscrolling.”

    • Technocrats got us into this mess, but they cannot get us out of it.

  • Brian Chesky on the hollowness of the Hyper Era:

    • “Your Instagram followers won’t come to your funeral.

    • No one will change anyone’s mind in the YouTube comments.

    • And soon all of your ‘friends’ will be AI.”

    • Social networking is the most successful product that was ever uninvented.

    • Social media grew to eclipse social networking.

      • Friends to followers.

      • Connection to performing.

    • We’re more isolated and lonely than ever before.

    • Real connection happens in the real world.

  • What happens in your community is more real than what happens online.

    • But we judge our priors based on what we see, and that’s now mostly online.

      • Our priors are now set totally incorrectly.

    • In the future everything on a screen will be assumed to be artificial.

      • The real world will stay real.

    • Returning to the real world and communities will be existentially important in this era.

  • Hollow things have faux resonance.

  • In the Hyper era, authenticity is the new scarcity.

  • Sora seems like a cynical approach to move the Overton window on deep fakes of people.

    • It makes the first encounter with “someone else using your likeness in AI-generated imagery” come from your friends, in a low-key way, in a cozy group.

    • As we acclimate to it, we’ll be more willing to let strangers do it to other people.

    • That will accelerate the Hyper Era significantly.

  • Meta apparently now has an internal principle: “demo, don’t memo.”

    • A prototype is worth a thousand meetings.

    • LLMs make it trivial to prototype.

  • Self-guaranteeing promises are a powerful idea.

    • Not won’t be evil, can’t be evil.

    • Files over apps is a self-guaranteeing promise.

  • You want a system where the model quality is not the ceiling but the floor of possibility.

    • Human ingenuity should have a floor to build off of, not a ceiling to hang from.

  • The LLMs strip-mined content.

    • They did it once to bootstrap the models.

      • No thought of sustainability of content production.

    • Search engines were kind of accidentally sustainable, since they delivered clicks to the publishers that earned them.

    • But LLMs just slurped it all up like the content was in the commons.

      • The tragedy of the commons, but in many cases it wasn’t even the commons.

      • Just move fast and strip mine the content before anyone knows what’s happening and can push back.

    • The incentives for publishing have taken a nosedive.

    • I’m glad we have these planetary-scale models but I wish they had been grown sustainably.

  • AI psychosis is engagement to a grotesque extreme.

    • Engagement is a thing to optimize for at hyper scale.

    • In the small it can be good, healthy even.

    • But where’s the line where it becomes antisocial?

  • For so long, it was the people with the answers who had the power.

    • Now, because everyone can have the answers, it’s the people with the questions.

  • Technology should resonate with what makes us human instead of exploiting our weaknesses.

  • When you get to the scale where qualitative measures are no longer viable and switch to quantitative, that's when Goodhart's law shows up.

    • Goodhart's law shows up based on the difference between qualitative and quantitative benefit in a given situation.

    • With the individual resolving it in a way that benefits themselves at the expense of the collective.

  • Bruce Schneier’s excellent Seeing Like a Data Structure:

    • “The ability to see like a data structure afforded us the technology we have today. But it was built for and within a set of societal systems—and stories—that can’t cope with nebulosity.”

    • The world is not clean and orderly like data structures imply.

    • Contains an excellent quote from Cory Doctorow: “We can’t add, subtract, multiply or divide qualitative elements, so we just incinerate them, sweep up the dubious quantitative residue that remains, do math on that, and simply assert that nothing important was lost in the process.”

  • Data is not like oil, but sand.

    • That is, individual grains of it are not valuable, but a large collection of it in one place is.

    • Data gets more value the more it’s aggregated, at a compounding rate.

      • Both for individual users and for collections of users.

      • This is why the aggregators have such large network effects.

    • Via Ben Evans, from an idea by Tim O’Reilly.

  • Chris Dixon’s frame of Web evolution:

    • Web 1.0: Read Only.

    • Web 2.0: Read-Write… but owned by the aggregators.

    • Web 3.0: Read-Write-Own.

    • I like the progression, but think the “own” does not necessarily imply “crypto.”

    • The point is that the users should co-own the benefit of the network instead of it being owned by the host.

    • Today crypto is only one way to do that, but it’s an approach with non-trivial downsides.

      • Hyper-financializing everything accelerates Goodhart’s law.

    • What if there were other ways to get a Web 3.0 that was read/write/own, but without crypto?

  • There’s no way to structurally guarantee Helen Nissenbaum's concept of contextual integrity today.

    • I think of Contextual Integrity as “responsible use of your data.”

      • Not accidentally leaking it where it shouldn't go.

      • Using it in ways that would not be surprising to you and are aligned with your interests.

    • Today it's a nice abstract notion, impossible to operationalize.

    • What if we built it into the fabric of computing?

  • Data flow analysis is the only structural defense against prompt injection.

    • But for that to work it has to be verifiable across multiple networked computers operated by different entities.

    • If you could pull this off it would address the “data is naturally viral” fundamental problem that most privacy models sidestep or handle in a simplistic way.

  • One way to sidestep the iron triangle of the same origin paradigm is by allowing the data itself to be the owner of the trust.

  • The inductive logic of folksonomies is not "from a standstill is this item good?"

    • It's "given that others liked this, do you think it's good enough?"

    • The "given that others liked this" is the compounding loop, a collective intelligence feedback loop. 

    • It allows people, with little effort, to go "... yeah, sure," which is less effort than considering it from a standstill.

    • But that's why folksonomies don't pick the best, they just find a thing that everyone can agree is good enough.

    • This is true for any Swarm Sifting Sort.

  • Do you treat the model like a chef or a line cook?

    • If a chef, you leave some ambiguity or open-endedness.

    • If a line cook, you specify everything and assume precision in execution.

    • If you think based on the cook’s skill that the downside is high, you’ll be more likely to treat them like a line cook.

    • Only a chef can surprise you in a positive way.

  • Reid Hoffman: “AI is the cognitive industrial revolution.”

    • Reid splits perspectives on AI into four buckets:

    • Doomer - AGI will ruin humanity and must be stopped at all costs.

    • Zoomer - AGI or bust. Foot on the gas.

    • Gloomer - AI will steal our jobs.

    • Bloomer - AI will help us blossom as humans.

    • Like him, I put myself in the Bloomer camp… if we can give it the right structure to flow into.

  • Nodes-and-wires authoring tools create rat's nests.

  • Aileen Lee points out that a lot of the AI companies today aren’t Unicorns but Icaruses.

  • A key observation: 'The distance between … “the world outside and the pictures in our heads” afforded vast power to those who managed information flows.'

    • Our modern society is increasingly defined by the growing gap between on-the-ground person-to-person reality and the distorted reality presented through our media feeds.

      • There’s a vast gulf between the clean feed we get and the cacophonous media landscape.

      • The entity that makes that curatorial decision has massive amounts of power to shape our world view.

    • The entities who rank the content that is shown care only about “number go up.”

    • The result is this toxic stew.

  • Nuggets of insight from Dara Treseder, the Autodesk CMO.

    • “Don’t compromise resonance for reach.”

    • “Trust is earned in drops but lost in buckets.”

    • “AI raises the floor, but it should be human ingenuity that raises the ceiling.”

    • “Brand is the sum of the promises we make and the experiences we deliver.”

  • Building a traditional software company in the age of AI is a bad idea.

    • There’s now more competition to compare against and stand out from, because AI has made it so much easier to build.

    • “Wow, AI makes it 10 times easier for me to prototype and build a company.”

    • “Yes, for you… and everyone else!”

    • Differentiation has never been more important.

  • There’s not just deep tech, but also ‘wide tech’.

    • Technology where each individual component is not too deep, but there are a lot of them that have to be wired together into a novel combination.

    • Normal tech startups have zero tech innovation at all; it’s 100% execution.

  • Describing things for designers is hard.

    • An exercise for junior designers: give each other briefs, then surrender control to your partner to execute, offering only nudges.

      • It forces you to reflect and get better.

    • After enough turns, the other person will come back with something that surprises and delights you, and makes you realize why over-specifying can be bad.

  • AI will collapse specialties and thus lead to more small businesses.

    • Every generalist can be as good as a pretty-good specialist in any knowledge domain.

  • The primary use case has to be strong enough to stand on its own.

    • The secondary use case is a bonus, and can't stand on its own.

  • Chips are the physical sites of computing.

    • Computing is interwoven into all the objects around us today, often invisibly.

  • If your job is to turn a crank, it's your moral responsibility to know what it's connected to.

    • If you're doing good work in a bad system then you're doing bad things.

    • Take a step back and look at the system you're embedded in.

      • Is it good?

      • Is it producing outcomes you're proud of?

  • The value of resilience is impossible to show concretely because it’s like proving a negative.

  • A YouTube video about why restaurant food is so bland.

    • Sysco dominates food distribution.

    • Efficiency and hyper scale lead to focusing on numbers to the exclusion of qualitative things or externalities.

    • So it selects for efficiency and scale not “nourishing” or “sustainable.”

    • Efficiency dominates everything else.

    • The result is a lack of diversity in the system leading to less adaptability, and less resilience.

    • Hollowed out.

  • Brian Eno on AI and taste.

    • ‘That space between “Are you playing the technology, or is the technology playing you?” is a very tricky one. I think that one of my more dystopic versions of our A.I. future, my kids’ future in A.I., is a world in which they’ve given up a lot of their own agency because it seems a little bit ridiculous to take it.’

    • ‘I have an architect friend called Rem Koolhaas. He’s a Dutch architect, and he uses this phrase, “the premature sheen.” In his architectural practice, when they first got computers and computers were first good enough to do proper renderings of things, he said everything looked amazing at first.

    • You could construct a building in half an hour on the computer, and you’d have this amazing-looking thing, but, he said, “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.” '

  • Ben Mathes distilling Babak Nivi

    • “The meaning and soul went into the training data, and it's in us as we read the text.

    • It's not in the LLM anywhere.

    • But we can get it as a result of reading the output."

  • Can the machines ever have knowhow?

    • Knowledge yes, but not knowhow.

    • Knowhow comes from experiential knowledge.

    • Maybe innovation requires knowhow.

  • Web 2.0 accidentally gave us extremely powerful aggregators.

    • Before that, network effects were mainly individual.

    • But Web 2.0 was about leaning into the cloud’s ability to host networks.

  • Value is relative, so something is always valuable. 

    • Just what that is can change.

  • The Law of Conservation of Attractive Profits.

    • Commodity at the bottom, differentiated at the top.

    • But it keeps on ratcheting up, with new layers added on top.

    • The winner at one level might think they'll always be on top, so they research and even give away the thing that ends up being valuable at the layer above, turning themselves into a commodity.

      • Windows with IE.

      • Google with BERT.

    • They give away the secret sauce without realizing it.

  • People use AI inside their OODA loop.

    • "Take this process I already do and go through the steps faster."

    • But what about using it outside their OODA loop?

      • An end run around their process.

    • Put your focus on the parts you have an advantage on.

    • Although note that processes are also a coordination point between multiple people, so one person can't change them unilaterally.

  • People bought Windows (and even iPhones) largely because of 3P apps.

    • But with ChatGPT, 99.999% of experiences are 1P.

    • An ‘ecosystem’ that is so hyper-concentrated is not resilient.

  • The faster you can transact stock, the easier it is to hold it for micro amounts of time.

    • So as we made it easier and easier to buy and sell, the more short-term we got as a society.

    • Everything bottoms out in “maximize shareholder value.”

    • When shareholders hold a stock for an average of something like three months, that reduces down to “short term profits over all else.”

    • Toxic blood coursing through the veins of society.

  • Silicon Valley has a short average tenure for employees in companies.

    • That gives fast adaptive switching and also cross pollination of good ideas.

      • That is one of the reasons that Silicon Valley as a culture is so adaptive.

    • But a downside is that everyone is a tourist in their company.

    • So everyone's making short-term decisions, because they don’t expect to be there when the chickens come home to roost.

  • I was once told by someone trying to forcibly mentor me that “you’d be a VP by now if you stopped thinking through the implications of your actions.”

    • I imagine her perspective would be something like: 

    • “I won’t be here in 6 months. If I don’t cut this corner someone else will and they’ll get promoted. So why not get the benefit for myself? I'm not preventing harm anyway since someone else will do it, and the person who does it might be even less scrupulous than me.”

  • The tech industry is Goodhart's law-ing society.

    • Because it's not thinking through the implications of its actions.

    • It doesn't believe in the collective, only the individual.

    • The tech mindset assumes "if a thing is short-term good for me it's also on net long-term good for society".

    • But that's absolutely not the default case.

    • The individual and the collective can never have their interests perfectly aligned.

  • Long-termism that is infinitely far in the future gets weird.

    • Long-termism is better than short-termism.

      • For example a 10 year time horizon, or definitely within a person's lifetime.

    • A time horizon that is unbounded leads to smuggled infinities in your analysis, which leads to weird and even dangerous conclusions.

  • Brian Chesky points out that when you travel you’re more open minded.

    • You talk to the Uber driver when you’re traveling but not at home.

  • Giving a shit has to blossom from within.

    • David Chang, creator of Momofuku: “You can teach anyone the right technique. But you can’t teach someone to give a shit.”

    • “You know you love it when it serves you a shit sandwich and you happily eat it.”

  • In uncertainty and volatility, don’t focus on the frothy waves on the surface.

    • Focus on the deep undercurrents.

    • The things that you know will still be true in 20 years.

    • Use those to set your northstar.

    • In times of volatility hold on to something deep. 

      • Principles.

      • To each other.

      • To our power to choose.

  • Extraordinary things are constantly done by ordinary people.

  • Of course the future shouldn’t be built by only engineers.

    • The future should be built by as many people as possible, deploying as many skillsets and perspectives as possible, so that it can be the best it can be.

    • Also so we all co-create the future and have ownership over it.

  • Famous African proverb: “If you want to go fast go alone, if you want to go far go together.”

  • Committees diffuse responsibility so no one owns downside…

    • but also no one owns the upside.

    • Compare that to having a clear DRI who feels on the line for the output.

    • If the DRI has the responsibility, they must also have the authority, or you get the worst of both worlds.

    • It’s much harder to align authority and responsibility outside of a strict hierarchy.

    • A non-committee non-hierarchy approach is a swarm approach.

    • That requires setting the laws of physics so things converge to good outcomes, emergently.

  • You can’t start a network effect on a base with only bonus value.

    • It needs to start from a nugget of primary value.

    • Single-player value must come before multi-player value.

  • The logic of scenario planning is imperative in times of volatility.

    • You can't predict the future, so figure out the cone of possible outcomes and think about what you'll do in those extreme cases.

    • Signposts help you figure out which ones are coming true, before it's obvious.

  • Alex Russell’s excellent distillation of the power dynamics of standards: 

    • "Working Groups don't gate what browsers ship, nor do they define what's useful or worthy."

    • "In practice, they are thoughtful historians of recent design expeditions, critiquing, tweaking, then spreading the good news of proposals that already work through Web Standards ratified years after features first ship, serving to licence designs liberally to increase their spread."

  • A defining characteristic of the USA: celebrating greatness.

    • As well as a belief that anyone can be great.

  • In a time of despair, the beacons of hope become Schelling points.

    • Schelling points allow coordinated action.

    • In a time of darkness, even a dim light can be a beacon of hope.

  • A name is necessary for identity.

    • When a person is named is when they become truly human.

    • The moment you are named is someone else saying “I care to refer to this being as a distinct entity.”

  • When you name an emergent but omnipresent force, it can unlock discontinuous understanding.

    • Emergent forces are inherently invisible.

      • They fundamentally cannot be seen concretely.

    • Think of them like lasers in a security system: there, but invisible.

    • This means that powerful forces, like the coordination headwind, can be crushing us, invisibly.

    • Because we can’t see them, we don’t even know why we feel so crushed.

    • But then someone comes along and names that force; they point it out in an intuitive way and give you a handle to talk about it.

    • It’s like spreading fog to make the laser beams visible.

    • It feels like an epiphany; an insight everyone has to share.

    • Now that everyone knows about it, you can navigate it, instead of constantly getting stuck in it.

    • If it’s something everyone can sense but no one can describe, the artifact that describes it will go naturally viral, unlocking understanding from many people at once.

  • Chip and Dan Heath have a frame of Defining Moments.

    • Similar to my REMs frame from last week.

    • “Moments of elevation transcend everyday events and stir up positive emotions like motivation and engagement.

    • Moments of insight spark transformative realizations and create meaningful learning opportunities.

    • Moments of pride surface and celebrate your best self: the "you" who earns recognition for hard work, crushes goals, and acts with courage when it's needed.

    • Moments of connection deepen your ties to the people around you and invite vulnerability.”

  • Lower pace layers must go slower.

    • They are higher leverage but must move slower.

    • If you think you’re at a higher pace layer but are at a lower one, you’ll have a bad time.

    • Most innovation happens at the higher pace layers and can thus move quickly.

      • They can go fast by innovating in a shallow way.

    • Some of the highest leverage innovation, the ones that can get the whole industry off of a hill they’re stuck on, requires working at lower pace layers.

  • Research treats knowledge as the end.

    • If you find a better way to do a thing, you drop the old way and adopt the new one.

    • Instead of saying “good enough, this works.”

    • If the way it’s built is an end in and of itself, then a thing that works is not good enough if it’s not elegant.

    • In the real world, all that matters is that it works.

  • Do you cut scope to fit the timeline, or change the timeline to fit the scope?

    • If you have a runway (you haven't achieved liftoff), you have to do the former.

  • When there’s a buzzer that will sound, you need to maximize the points you collect before time is up.

    • This happens for example in test scenarios.

    • Get as many points on the board as you can before the buzzer sounds.

    • Most people default to going deep on each problem, keeping it loaded up in memory so they can effectively make progress on it.

    • But this is a bad strategy in most test taking scenarios.

    • Some problems will be much harder than others but still give you the same number of points.

      • You could waste most of the time on a hard question, leaving easier points uncollected.

    • You should focus on the points that are easy for you to get.

      • Either because the question is easier in general.

      • Or because it leans into one of your differential advantages.

    •  So go breadth first so you never get stuck without points.

      • You have to bottom out a line of analysis to get the points, so depth takes longer.

      • Depth is only worth it if it gets you more points.

  • Things that take a long time are impossible to predict.

    • You’ll mispredict how long it will take, and you won’t get feedback until the end.

      • Your estimates will get inaccurate at a compounding rate.

    • Slice it into small pieces so you can get feedback and learn quicker.

  • Growth mode and decline mode require different approaches.

    • Peace vs war.

    • It’s not about resources on hand, it’s about the slope of the traction.

      • Runway length and momentum are distinct.

    • Growth: spread out, own it all.

    • Decline: double down on what’s working, focus.

    • If you aren’t in growth mode, you’re in decline mode and need to act like it.

    • You’ll stay in growth / momentum mode longer than you should, because you'll take it for granted.

      • Or look at your resources, not your slope of traction.

    • That puts you in a Wile E. Coyote moment.

  • Scale doesn’t have to be inhuman.

    • If it's backed up with philosophy and humanism.

    • It used to be hard to do it at scale while remaining human.

    • LLMs allow qualitative nuance at quantitative scale.

    • We just have to choose to apply them in a humanity-affirming way.

  • We’re in the climax forest phase of the tech ecosystem. 

    • Late stage.

    • A small number of massive trees shading everywhere, preventing anything else from growing.

    • What breaks the late-stage?

    • Disturbance at multiple scales: 

      • Individual trees dying and creating light gaps,

      • storms taking down patches,

      • diseases targeting specific dominants,

      • occasionally catastrophic fire. 

    • Forests are constantly churning, not waiting for an apocalypse.

  • In an emergent paradigm, it's all about setting the laws of physics properly so good things emerge.

    • A very different skill from building things directly.

  • Economies are supposed to be about circulation of capital, not accumulation.

    • It's the movement of capital where work happens.

  • An AI slop swarm balances out to nothing: noise.

    • Because each agent pulls in a random direction.

    • But each individual agent, on its own, can get going in a coherent direction.

  • I don’t think AI is overhyped, because "overhyped" is about the future trajectory of development.

    • Even if it didn’t continue improving from here, just the diffusion of what we have now will create extraordinary value for decades.

    • Of course, the debt-fueled capital outlays for the underlying production of data centers is likely a bubble.

  • A probing question for entrepreneurs: “Why does your company deserve to exist?”

    • Most answers are very specific.

    • A good general answer: “Because if I don’t do it no one else will.”

  • A rule of thumb in the military: “Take the first punch so you have the moral weight on your side.”

    • But only if the damage won't be existential.

    • A scenario: an AI says "country X will attack you tomorrow and deal existential damage."

    • The analysis is too complex for a human to check it in that much time.

    • Do you take a preemptive move or not?

  • Baratunde Thurston has a frame on “how to citizen.”

    • Citizen not as a noun, as something you are.

    • Rather citizen as a verb, as something you do.

    • Rights come with responsibilities, like participating as a citizen.

    • A never-ending reaffirmation of a commitment to the collective.

  • Society needs more people holding one another accountable.

    • Holding everyone to their principles.

    • Not just “number go up” but that you did what you’re proud to do.

    • The way your mom would tell you if you’re not behaving well.

  • When the other people are anonymous to you is when you don’t bother living aligned with your character.

    • The modern world is so fast moving that we rarely see the same people again.

  • When you're authentically engaged you learn orders of magnitude better.

  • Film made us a viewer not a participant.

    • Until cameras we were in the physical space with the objects we were viewing.

    • We could choose to move our perspective.

    • Film holds you hostage.

  • Language is highly inefficient for describing 3D spatial relations.

    • A picture is worth a thousand words.

  • Some notes from a talk by Aza Raskin.

    • Aza co-founded the Center for Humane Technology and is now working on the Earth Species Project to use AI to understand animals’ communication.

    • “The way we treat animals is the way AI will treat us.”

    • He talked about a dolphin study where dolphins were trained to do something novel on a cue.

      • Keeping track of what they had previously done so they didn’t duplicate an action required significant intelligence.

      • They then asked the dolphins to do something novel as a pair.

      • The dolphins would communicate back and forth and then simultaneously do the same novel action, demonstrating extremely rich communication.

    • “It’s not when we speak that we change, but when we listen.”

      • That’s why it’s not about “communicating with animals” but “understanding animals” that is his goal.

  • Intelligence at speed is the modern demand.

    • To do that you need to slow down time.

    • The way to slow down time is anticipatory awareness. 

    • Pocket presence is what quarterbacks have to read a field they can’t see.

    • Impact over action is what actually matters.

    • This and the next six bits are inspired by a fireside chat with Brené Brown.

  • The job of a leader is to create time where it doesn’t exist.

    • Unproductive urgency is bad.

    • It’s productive urgency we need.

    • Sometimes to be productive we have to take a step back.

  • In times of disruption ground truthing is even more important.

    • When you don’t have a permeable membrane, you’re a self-referencing system.

      • You’re not ground truthed.

    • “Are we good?”

    • “Yeah we’re great, let’s keep going!”

  • Americans reduce emotions to three: happy, sad, pissed off.

    • True empathy requires granularity of understanding and recognizing emotions.

  • In person you see almost entirely people who are earnestly trying to do the right next thing.

    • It’s only in media that you see the opposite.

  • Why are business leaders saying empathy is less important now?

    • Because it’s convenient to stop caring.

    • To not think about the indirect impacts of your actions.

    • “Make number go up, don’t think about anything else” is what business leaders tell themselves is boldness.

    • It’s cowardice.

  • Being courageous is not about not being afraid; it’s about knowing what your armor is to protect yourself.

    • Your armor is how you turn into an asshole when you feel threatened.

    • Your reports all know what your armor is, but they won’t tell you what it is.

    • But you know your own armor.

    • If you don’t know your own armor type then you shouldn’t be a leader.

  • Proprioception is mostly external-facing awareness.

    • Interoception is internal-facing.

    • “Are we healthy?”

    • “Are we metabolizing our inputs correctly?”

  • A techlash perspective: “do the benefits of this technology outweigh the societal harm that will come from concentrating even more wealth and power in the hands of these jerks?”

    • This perspective has been earned by the tech industry in the last decade.

    • It's a harm the industry might never come back from.

  • Fascinating perspectives from Fawn Weaver:

    • It’s not a fear of failure we have, it’s a fear of public embarrassment.

      • No one cares about failure if no one else sees it.

    • A mindset inspired by her faith:

      • When someone tells you ‘no,’ it’s not them, it’s God speaking through them.

      • Just like God made the Pharaoh say no to Moses, forcing the exodus.

      • This mindset allows you to accept rejection with grace.

    • There’s a secret power:  the power of love. 

      • Self love.

      • Open to being loved by others.

    • Don’t get mentorship from someone who doesn’t have their personal life together.

      • You have to want to be like the mentor.

      • If they are successful in their career but a mess in their personal life, they aren't a good role model.

  • An exercise to get practice with rejection.

    • Go into a pizza place and ask for a free piece of pizza.

    • Go into a mattress store and ask to jump on the mattress.

    • Situations that obviously everyone will say no to.

    • Helps you see rejection as a natural, inescapable thing, not an embarrassing game over.

  • Seema Reza, the CEO of Mission Belonging.

    • Mission Belonging heals veterans through creative expression and community.

    • “Poetry is an ancient technology that ties individual experience to universal truths”

    • “Shame can’t be processed alone”

    • A key toehold for someone to be able to pull themselves up and improve: “I’m good enough to get better.”

  • Ben Follington: Interfaces are languages.

  • My daughter spontaneously created a simple card game with dynamics like War.

    • A few dynamics of the game popped out to me, which I confirmed with Deep Research.

    • Assuming no-op ties, in a game of War, whoever holds the high card can never lose it, so they never lose.

    • If each card is unique, that means there's only one person with the highest card, and they are guaranteed to win.
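
    • A small simulation (mine; it assumes two players, no-op ties, and the standard winner-takes-both rule, with an illustrative deck rather than the actual game) that checks the claim:

      import random
      from collections import deque

      def play_war(deck, max_rounds=200_000, rng=None):
          """Returns (winner, high_holder); winner is None if the round cap is hit."""
          rng = rng or random.Random()
          cards = deck[:]
          rng.shuffle(cards)
          piles = [deque(cards[::2]), deque(cards[1::2])]
          high_holder = 0 if max(cards) in piles[0] else 1

          for _ in range(max_rounds):
              if not piles[0]:
                  return 1, high_holder
              if not piles[1]:
                  return 0, high_holder
              a, b = piles[0].popleft(), piles[1].popleft()
              if a > b:
                  piles[0].extend([a, b])    # winner takes both cards
              elif b > a:
                  piles[1].extend([b, a])
              else:
                  piles[0].append(a)         # no-op tie: each card goes home
                  piles[1].append(b)
          return None, high_holder           # undecided within the round cap

      # Duplicate ranks 0-8 (so ties can happen), plus one unique highest card.
      deck = [r for r in range(9) for _ in range(2)] + [9]
      losses_by_high_holder = 0
      for seed in range(2000):
          winner, high_holder = play_war(deck, rng=random.Random(seed))
          if winner is not None and winner != high_holder:
              losses_by_high_holder += 1

      print("games lost by the high-card holder:", losses_by_high_holder)  # expect 0

    • The holder of the unique highest card can never lose a battle that card is played in, so their pile can never empty; the simulation just makes that concrete.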

  • When there's too much capital, it's the finance class that picks winners, not the best PMF.

    • The investors look for the person who has sufficient PMF, has the skills, and will do whatever it takes to dominate.

    • Those are not normally the nice guys!

  • Metacognition is a self-transcending mindset.

    • In Adult Development Theory, one of the later stages.

    • It requires significant emotional and intellectual maturity.

    • People who have accessed that level will do a better job of interacting with and co-working with LLMs.

  • Increased sousveillance structurally selects for docility and polarization.

    • In college it felt like every class I took included the lens of game theory, evolutionary biology, or the panopticon.

      • The fear was centralized surveillance.

    • But sousveillance turns out to be weirder and stronger.

    • Sousveillance: everyone now has a high-quality camera on them at all times that can also shoot video... and they also have social networks to distribute it instantly to potentially billions of people.

    • If you see something you think is a transgression--of a law, or of your tribe's norms--you can video it and upload it and distribute it.

    • People know this, which means they become more docile, less willing to take actions that might be cast as transgressions.

    • But the ability to sousveil is also used in anger.

    • People cherry pick the transgressions that confirm what their tribe already believes, producing a never ending stream of confirming evidence, no matter the ground truth.

    • In a cacophony of information, you can choose which subset to pay attention to.

    • Our brains make the implicit assumption that what we see (a subset) is a random / representative subset.

    • But when it's been curated by humans with a bias, it absorbs the bias of the curators.

    • For swarm sifting sorts of social, that bias can be massive even if each individual action doesn't feel that biased.

  • Barbara Walter, author of How Civil Wars Start: And How to Stop Them:

    • A CIA study showed the two most predictive factors of a society having significant political violence in the short term:

      • 1) A slide from full democracy to partial democracy (anocracy).

      • 2) Identity-based politics

    • Anocracy, the middle stage between democracy and autocracy, is violent and unstable.

    • For countries that slide from democracy to anocracy:

      • 35% linger in anocracy indefinitely.

      • 45% slide to autocracy.

      • 20% u-turn back to democracy.

    • The return to democracy has to happen soon, within 2 to 5 years.

    • One successful approach used in Brazil to u-turn was business leaders banding together.

      • Business can be very powerful when you solve the collective action problem.

      • Before the 2022 election that could have cemented Bolsonaro permanently, they ran a public service campaign not denouncing him but talking about the value of democracy for prosperity.

  • Imagine you’re a CEO in a retributive proto-authoritarian regime.

    • Do you raise your hand and say “no?”

      • No, you kiss the ring.

      • It’s all downside to raise your hand, all upside to comply.

    • A collective action problem, a prisoner’s dilemma.

    • One solution: to change the payoff matrix.

    • Make it so complying is also not cost-free.

    • Make it clear that when the rule of law returns those complicit will be punished according to the law.

    • Add constraints, e.g. by states, that punish compliance and make it not zero-downside.

  • Unpopular autocrats have to speed through corruption of the system.

    • That makes it more likely the populace notices.

    • It’s a Hail Mary.

    • Autocracy is possible with a fast, too-late-to-resist shock-and-awe or a Hungary-style slow boiling of the frog.

    • But the middle is fast enough for the populace to notice and slow enough for them to be able to do something about it.

    • Humor that calls out the leader is the way to push back.

    • Remember, a would-be authoritarian wants a violent confrontation.

      • They can use that to justify further lockdowns.

      • Don’t give it to them.

      • Strict non-violence helps prevent a slide.

  • The “reality-based community” is the Saruman worldview, in geopolitics and tech.

    • The doers create the reality, and the thinkers have no choice but to follow along.

  • We’re in an interregnum period.

    • Between two paradigms.

    • The old world is dying, and the new world struggles to be born.

    • Now is the time of monsters.

  • Jason Halbert, a former interrogator, has advice on conducting high-quality interviews of candidates.

    • Audacious acceptance allows people to feel unlocked.

    • Don’t force people to lie or present the face they know you want to see.

    • Give them the permission to “sing their truth.”

    • Tell them to tell you the thing they are the best in the world at, and where it comes from.

    • Also give them amnesty for their deep secret.

      • “There’s a deep secret, a thing you’re holding onto that could derail this interview. Hiding it now doesn’t do any good for either of us, because it will come out, maybe a week from now, maybe a year from now, and it will feel like betrayal. So tell me it now, so we can name it, decide if it disqualifies, and move on.”

    • When someone shows you their scars, and you don’t run away, it builds trust.

    • In many cases, it’s not actually disqualifying, but having it on the table establishes trust.

    • Authentic, audacious acceptance.

  • Tech industry conferences often trot out the most recent lottery winner as though they’re some wise oracle.

  • A downside capping maneuver: make the writing fun to read so even if readers don't agree with the conclusion it was still not a waste of time.

  • “Action without reflection” is sometimes positively framed as “fearlessness.”

  • A rule for good conversations: “conversations, not pontifications.”

    • Conversations should be a discussion, not a lecture.

  • A model of leadership: the inverted pyramid.

    • Instead of the leader being at the top of the org, they’re at the bottom.

    • Their job is to support their team to be as high-performing as they can be.

    • Servant leadership.

  • A tip from Sesame Street for kids feeling big feelings to reset: Be a Tree.

    • Imagine your feet being rooted into the ground, your body stretching up strong like a trunk, your arms stretching out like branches, soaking in the sunlight.

    • The act of imagining a wholesome, nourishing thing breaks the loop of negative emotion to allow taking a breath.

  • You know it was a good movie when you wake up the next day and still want to talk about it.

  • Prepare the child for the road, not the road for the child.

  • An aggravated kid isn't giving you a hard time, he's having a hard time.

  • "The opposite of love is not hate, it's indifference."

  • When the situation is easy, it's trivial to live aligned with your principles.

    • The only question that matters is: how do you live when the situation is hard?

  • An idea from Epictetus: Blame no one, including yourself.

    • Blame is a toxic circuit that compounds.

      • Similar dynamics to lying.

    • Simply don't ever start.


