I’ll skip Bits and Bobs next week due to the holiday. They’ll return on Monday, January 6th.
Last week I framed LLMs as dowsing rods.
The more I think about it, the more I like that frame.
A dowsing rod is a fuzzy, imprecise kind of ‘magic’ that you should hold lightly.
It’s also principally animated by the intuition of the operator.
Better intuition leads to better results.
It’s an object that distills and focuses that intuition into a convenient interactive package.
For the LLM this “distilled intuition” works at two levels.
1) The intuition of the questioner, which drives the LLM to useful insights.
That is, the most important thing to get good answers with LLMs is to ask good questions.
2) The LLM itself, which distills the intuition of all of society into a cultural technology you can talk with.
The hive mind with a voice.
LLMs make magic a commodity feature.
LLMs are magic pixie dust you can sprinkle on just about anything to make it magic.
The question now is finding the richest substrate that unlocks the most value from the magic.
Before, it took a lot of proprietary effort to make your thing magic.
Now it requires just cheaply applying commodity magic.
It’s magic... but the same magic anyone else could have applied, too.
If everyone’s sprinkling the same pixie dust, their magic has no differentiation.
Who will use the magic pixie dust to kick off a differentiated, self-catalyzing quality loop, where the pixie dust doesn’t give the magic result directly but enables a quality-increasing process that feeds on itself?
I’ve been calling LLMs “magical duct tape” for a couple of years, but now I realize that’s wrong.
LLMs aren’t the magical duct tape themselves; they’re just the magic.
The duct tape is the substrate you sprinkle the magical LLM pixie dust on.
Today, people aren’t sprinkling the magic on general-purpose substrates.
Well, with some exciting exceptions like TLDraw’s Computer.
The question now is to find the most powerful open-ended complement to LLMs, the best duct tape to make magic.
The activated energy of LLMs is tied to the substrate they’re applied to.
Imagine LLMs as producing little seeds of potential energy.
Today most companies are spraying those seeds across a barren concrete surface.
Very few seeds are taking root, although the magic is potent enough that even the infrequent, meager sprouts are impressive.
Who will figure out the right fertilizer to spread the seeds in, so all of that potential can grow into a rich forest ecosystem of activated energy?
Fertilizer is powerful, but if you’re not careful, you can get stuck in a muddy pit.
The center of your universe should be your data.
The goal is to convert as much of your data’s potential energy as possible into kinetic energy that benefits just you.
Your personal data store (PDS) and the LLM you use should be distinct entities, controlled by different parties.
Ideally you should be the one with control over your PDS.
The power, the center of mass, should be in your PDS, not your LLM.
Luckily with all of the amazing competition and progress, it looks like there won’t be a single model that is orders of magnitude better than all of the others.
The more good options there are, the less power any one option has over the ecosystem.
What you want is your own personal data koi pond.
A data lake is industrial scale, overwhelming.
You want something cozy, human-scale, calming, and fully owned by you.
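To make that center of mass concrete, here’s a minimal sketch of the shape I mean, in code; everything here (PersonalDataStore, AskFn) is hypothetical, not any real PDS or model API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Any LLM provider, reduced to a plain function: prompt in, answer out.
AskFn = Callable[[str], str]

@dataclass
class PersonalDataStore:
    notes: list[str] = field(default_factory=list)

    def add(self, note: str) -> None:
        self.notes.append(note)

    def ask(self, question: str, llm: AskFn) -> str:
        # The PDS decides what context to reveal; the model is just a visitor.
        context = "\n".join(self.notes[-20:])  # naive recency-based selection
        return llm(f"Context:\n{context}\n\nQuestion: {question}")

# Swapping models is a one-argument change; the data never moves:
#   pds.ask("What did I decide about X?", llm=call_model_a)
#   pds.ask("What did I decide about X?", llm=call_model_b)
```

The point of the sketch: the model is a swappable commodity argument, while the data, and the logic deciding what to share, stays with you.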
LLMs are a superintelligent rubber duck.
“Rubber ducking” is the phenomenon where, when faced with a programming bug, explaining it to another person (who just nods along, like a rubber duck) leads you to solve the problem yourself.
To explain it at a level someone else can understand, you have to explain it at a level where the problem pops out to you, too.
You were missing the problem before because your mind was skipping over an important detail with a hand wave; explaining it to someone else forced you to engage with that detail and thus discover the problem.
LLMs are a great conversation partner for you to figure out the answer yourself.
Some AI video output is mind-bendingly bizarre.
For example, this model shootout compares how different models handle the same query about cutting a steak.
The AI video output looks totally reasonable on any given frame, but as the video plays and the model has to make sense of potentially ambiguous or weird details, it sometimes resolves them with impossible, unrealistic solutions.
An example I experienced this week: a video of a Christmas village from a bird’s-eye view, with bokeh around points of Christmas lights down below.
So far, so good.
Now, the camera dollies forward through the sky, towards the village.
The model didn’t realize that the big balls were bokeh that should stick to the physical locations they emanate from.
Instead, it interpreted them as giant floating light emitting orbs over the village.
Bizarre!
Watching the model make weird decisions about the world in the video gives a very disconcerting vibe.
It’s as if the model is desperately trying to make sense of the world depicted in the frame, and sometimes failing.
This, by the way, is how human minds work too.
Our minds are constantly trying to predict what they’ll experience next, by building up an implicit model of the world.
Sometimes our brain guesses wrong and then later more signal comes in that requires our brain to snap to a different mental model.
Various optical illusions trigger this reliably.
When it happens, there’s a kind of whooshing vertigo feeling as the whole world reorients around you… but nothing visually changes.
Kind of like the dolly zoom camera move Jaws made famous.
A “wait what is even happening” kind of disconcerting effect.
We’re trying to make sense of an actual physical reality that has certain constraints, so the visual field doesn’t change in that moment; only our interpretation of it does.
The AI is trying to simulate a coherent reality, so when it makes a bad implicit world model choice, it leads directly to odd, unrealistic visual artifacts.
For humans, we have tons of experience with the real world, and also the physical world is primary and our perception of it is secondary.
For AI video models, visual perception is primary and the world model is secondary.
AI video models also have much less ground truth experience in the real world than humans do.
Watching the model make a weird interpretation that goes against your expectations gives that same disconcerting world-model-swapping feeling.
Imagine being in the biggest library in the world, a Borgesian infinite library, and not knowing the Dewey Decimal System to find anything.
The problem in an infinite library is not whether the information exists, it’s how to retrieve it.
When extracting information from LLMs, we’re like cavemen poking them in the dark.
LLMs encode vastly more information than we know how to retrieve.
We’re in the very early stages of figuring out how to wring out all of the information they encode.
Getting great results out of LLMs is still entirely the domain of folk knowledge, with people like Ethan and Lilach Mollick the undisputed champs.
For example, having LLMs hold conversations with themselves to distill and dive deeper into the most promising options can give better results.
You can look at the approaches that scale test-time compute (e.g. the approach that o1 and others use to get higher quality reasoning) as a savvy technique to wring more baseline knowledge out of a system.
LLMs never get bored, and never run out of ideas; if you give them space, they will spew out all kinds of ideas.
Most of them will be crap, but some subset will be good.
If you give them the space to spew, and have some way of sifting through what they produce, you could find high quality results.
Scaling test-time compute allows the LLM to unspool much more approximate knowledge in its own “internal monologue” and then select and synthesize the subset that is most promising.
In some domains, like math proofs, you can use formal systems like Lean to cut through all of the noise and zero in on the formally plausible answers.
In other domains, you can train a reward model that learns which kinds of intermediate thoughts are most useful.
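As a toy sketch of the simplest member of this family, here’s best-of-N sampling with a pluggable scorer (my illustration; the actual methods behind o1 are unpublished):

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for sampling an LLM at temperature > 0.
    return f"candidate {random.random():.3f} for {prompt!r}"

def score(candidate: str) -> float:
    # Stand-in for a verifier (like Lean) or a learned reward model.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Spend more test-time compute by sampling more candidates,
    # then keep whichever one the scorer likes best.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("prove the lemma", n=16))
```

The quality of the scorer is what separates the domains: a formal checker gives you certainty, a reward model gives you a useful but fallible bias.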
Computing inside AI frames our current ways of interacting with LLMs as akin to interacting with computers before they had a GUI.
What other techniques will we develop to extract orders of magnitude more insight out of these models?
For technology to be deployed it has to not just be technically feasible but also make business sense.
You need both!
We could make supersonic passenger flights if we wanted to.
We have the tech, but we don’t have the demand.
Looking at the chart in this o3 analysis, there seem to be clear logarithmic quality curves.
That implies a ceiling on the quality/cost curve.
You can get a linearly better result… but it’s going to cost you non-linearly more.
Many possible uses won’t be viable at that marginal cost of quality.
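A toy worked example, assuming quality grows roughly as a * log(compute), which is what those curves suggest:

```python
import math

# If quality = a * log(compute), then compute = e^(quality / a):
# every additional point of quality multiplies the compute bill by e^(1/a).
a = 2.0

def compute_needed(quality: float) -> float:
    return math.exp(quality / a)

for q in range(1, 5):
    print(f"quality {q}: {compute_needed(q):.1f}x compute")
# Prints 1.6x, 2.7x, 4.5x, 7.4x: linear gains in quality,
# exponential growth in cost.
```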
An interesting use case for LLMs: on-demand cozy schlock novels.
For example, fan fiction or formulaic romance novels.
These novels already aren’t great literature, they’re formulaic and basic.
What character growth happens is not novel and interesting but formulaic and predictable.
These schlock novels are read mostly because they are comfortable.
For example, reading cozy gay romance schlock novels is the way I turn my brain off and help me get ready for sleep.
But if you could come up with a paragraph describing what you wanted the book to be about, and could get an on-demand custom schlock novel produced, that would be fun and empowering.
It shouldn't be that hard, if you aren’t aiming for high art but cozy schlock.
Last week I spent a few nights trying to get Claude to write the short story concept I sketched out as an experiment.
I made much more progress than I thought I would.
My strategy was to iterate with it to pick a story synopsis I liked, and then iterate on a number of options for story outlines, and then finally have it generate pages using other stories of mine as a style guide.
In the end I wasn’t able to wrestle the model to the ground to make enough details consistent; every so often at a given stage (e.g. when converting the story outline to actual pages) it would get a little bit off and would need to be re-steered.
But I imagine that even very lightweight scaffolding, allowing a tree of prompts with the ability to regenerate a few options for each node and pin the ones I like, would get me surprisingly far.
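Here’s roughly the shape of that scaffolding, as a hypothetical sketch (StageNode and friends are my invention, not a real tool):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class StageNode:
    """One stage in the tree: synopsis, outline, or a page."""
    prompt: str
    options: list[str] = field(default_factory=list)
    pinned: Optional[int] = None  # index of the option I've locked in

    def regenerate(self, generate: Callable[[str], str], k: int = 3) -> None:
        # Spin the roulette wheel again, but only if nothing is pinned.
        if self.pinned is None:
            self.options = [generate(self.prompt) for _ in range(k)]

    def pin(self, index: int) -> None:
        self.pinned = index

    @property
    def chosen(self) -> Optional[str]:
        return self.options[self.pinned] if self.pinned is not None else None

# Each stage seeds the next, so pinned choices stay consistent downstream:
#   outline = StageNode(prompt=f"Outline this synopsis: {synopsis.chosen}")
```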
Collapsing intuition to formal rules is an expensive, combinatorial process.
The intuition is squishy and fluid, but the rules are hard.
To capture one unit of squishness requires an order of magnitude more hard rules.
This combinatorial explosion is what A Small Matter of Programming ran into.
Before, the only way to do squishy things was to have a human in the loop, but humans are expensive and get bored.
Now we have LLMs to do some of the squishy, high-context things that can float around the problem domain.
But that means that once you’ve iterated to find something you like, you want to “pin it down” before diving into the details.
Pinning down the parts you like into formal rules keeps the output from being fully free-floating, and makes it more predictable.
It should be possible to have a continuous gradient from nothing pinned down to everything pinned down, where the user can decide when it makes sense to dive into the details to pin them down.
Plus, the LLM is much better at coming up with proposed formal rules that capture the intuition, which you can then simply react to.
Instead of having to draft the rules, you can see what the LLM generated, pin the ones that were good, and spin the LLM roulette wheel again on the ones you didn’t like.
I was debating with someone if the vertical SaaS business model will persist in the world of AI.
I think it will, but that the new businesses will fight over ever-smaller niches.
The vertical SaaS business model is so powerful because it provides an opinionated, integrated operating system for a specific kind of business: a system of record that everything at the customer business revolves around.
The opinion is part of the value; it can encode best practices discovered from all of the other customers in the vertical, guiding a given customer business to reasonable, effective defaults.
The gravity of that operating system is extraordinarily powerful, and extremely valuable for the customer business, which makes the products very sticky.
None of that power goes away in a world of AI.
However, two things lead to ever smaller niches:
1) The maturity of the playbook in general; all of the good niches already have well-established players.
2) AI makes it faster to write straightforward software.
Vertical SaaS is very straightforward software.
This means that it’s cheaper for new entrants, which will drive competition further into niches than would make sense if software were still as expensive to write.
A number of smart people are focused on the agent frame for using AI.
Some are focused on making a small number of powerful agents in particular domains.
Others are trying to help orchestrate swarms of dumb agents.
Agents are defined by doing something, which might have significant side effects, perhaps with serious downside.
In the “orchestrating a swarm” version, the user becomes like Mickey in The Sorcerer’s Apprentice, trying to direct an unruly swarm.
Powerful, but also overwhelming and potentially dangerous.
The point of a browser isn't any particular website.
Imagine if you didn’t know what the web was and someone wanted to demo it to you.
They show you a random web page, and it’s some crappy page by a high schooler about their Beanie Babies.
"...Oh, um, I don't think I need that, thanks!"
But that would be missing the whole point!
I do not believe the “good, fast, cheap: choose two” maxim is primarily devious misinformation spread by the slow.
Or rather, sometimes it is spread by people making excuses in their particular domain, but the tradeoff is real: a fundamental, inescapable phenomenon.
Some slow people do over-apply the maxim to their domain, but “people are slow” does not explain away all, or even most, instances of the maxim.
The maxim only applies if you are on the efficient frontier; that’s what actually activates the tradeoff.
It is very, very easy to not be anywhere near the efficient frontier.
Claiming the maxim is a convenient hand-wave for people who have not rigorously ground-truthed how close they are to the efficient frontier.
(Although you could frame the additional work to prove you are at the efficient frontier as consuming effort in the time dimension, thus fitting into the maxim by choosing the good/cheap pair.)
Another source of confusion: you can “solve” the tradeoff by moving in a fourth dimension.
If you are in a position of unquestioned power in a given domain, you can simply change the requirements.
The changed or loosened requirements might permit a solution that previously was not permitted.
(You could frame this as choosing a point closer to the fast/cheap corner and relaxing ‘good’, staying within the fundamental tradeoff.)
If you were in a position of unquestioned power to change the requirements, you might not even realize you were doing it.
“Why doesn’t everyone simply boldly change the requirements like I do?”
Often there’s precisely one person in a context who is actually permitted to do that.
Today when you hear “designer” you think about someone who can churn out high-quality mocks and redlines.
But that’s a small sliver of what “design” means.
Design is not how it looks, it’s how it works.
Design is holistic, multi-dimensional synthesis, an ability to see the whole in a way no one else can see.
A true design leader is transcendent.
They’ve become more rare in our mature, increasingly crank-turning industry, but they still do exist, and they are incredibly important.
When a true design leader tackles an important problem, the world tilts on its axis.
A few things I liked reading this week.
Amelia Wattenberger’s LLM fish eyes.
It does track to me that LLMs will be an ingredient that allows new kinds of UX that weren’t possible before.
The ability to generate high-quality summaries of prose at different layers of distillation for ~free seems like a big unlock!
Anthropic’s practical guide to agents is excellent.
Grounded, clarifying, direct, insightful.
Scaling Test Time Compute from HuggingFace.
Great overview of the techniques underlying models like o1 and how effective they are in practice.
Ryo Lu on How to Make Something Great.
Beautiful, insightful, and generative.
"We can call this the "Rice Knuckle Rule" Rule: what people are actually doing is following their experience and their complex personal judgement, while claiming to be following a vivid rule-of-knuckle everyone else does."
Michael Lapadula, an incredibly insightful engineer, has a new external blog.
His was my favorite internal blog at my last job, and I’m so excited that he’ll be publishing similar thoughts externally now!
If you're selling dollar bills for 90 cents, you'll think people love you.
Gather.town is the most effective metaverse I know of: cozy and lofi.
We use it at our small, seed-stage, mostly-remote startup and love it.
It is unreasonably, surprisingly effective in that environment.
Gather.town is unpresumptuous; its lofi look is cute and disarming while still being sufficiently rich.
For example, we have a set of couches in the middle of the virtual office–if two people are chatting there, it’s obviously OK for someone else to sidle up to the conversation.
We also have a small, two person, cozy coffee table at the back of the office. It’s obviously not OK to sidle up to that conversation unless you’re invited in, because it’s small and off to the side.
This rich social nuance tracks immediately and obviously from real world social interactions, with very simple graphics.
It’s striking how consistent and rich a cultural context it creates for everyone.
It’s also cozy. You know all of the people in the office.
Even when there are visitors, you still know most of the people.
You don’t need formal policies and rules to make sure everyone gets along; the situated social context of that group of people, with a rich-enough substrate to communicate social intent in a given situation, is sufficient.
Contrast that with Meta’s bizarre hellscape of the anonymous metaverse, which feels alienating and impersonal.
Ryan George has a great recent video on how bizarre it feels.
It seems to me that remoteness inherently makes deep social connection much harder; if a product has to rely entirely on interactions in the virtual world and can’t rely on pre-existing social relationships, it will always feel alien.
Many people think you can only do your best professional work when you're being serious.
I think you can only do your best professional work when you're being playful.
Being able to be playful in serious situations is a privilege.
It is much easier if you feel that you’ve been successful, if you have the self-confidence that comes from knowing that you’re good at what you do, and that other people know that you’re good at what you do.
An asymmetric edge: a thing that's important that feels like work for others but feels like play for you.
It's very rare for someone to be truly exceptional at a thing they don't enjoy doing.
If you wouldn't do it for $1, then maybe you shouldn't do it at all.
The most valuable things are the things we find intrinsically rewarding.
These are the things we will enjoy and grow into, where we will be in our flow state.
Money is about extrinsic reward.
If you would only do it for the extrinsic reward, then it might not be worth it.
Ideally money helps you do things you would already do for free, rather than making you do something you wouldn’t do otherwise.
Of course, this tradeoff is easier when you are more financially independent; you have to put food on the table after all.
It's distressing when Gmail's autocomplete accurately predicts precisely what you were going to say.
"Am I really that predictable?"
"... Yes?"
Sometimes people come up with conspiracy theories about it listening in the background to explain why things like it are so distressingly predictive.
We think it's listening, just because it's so existentially terrifying if it's not.
Which one is more scary: a massive surveillance company listening in on every facet of our lives, or that most of our supposed individuality can be perfectly captured by a few Markov chains?
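To see how little machinery the scary second option needs, here’s a bigram Markov chain next-word predictor (a toy, certainly not what Gmail actually runs):

```python
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    # Record, for each word, every word that has followed it.
    words = text.split()
    model: dict[str, list[str]] = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def predict(model: dict[str, list[str]], word: str) -> str:
    # Frequent continuations are proportionally more likely to be picked.
    return random.choice(model[word]) if word in model else "?"

sent_mail = "thanks so much for the note let me know if that works for you"
model = train(sent_mail)
print(predict(model, "let"))  # "me", every time, for this corpus
```

Train something this crude on your own sent mail and see how often it guesses right.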
When you wrap an outer feedback loop around an inner one, you help the inner one get tighter, better contextualized, more motivated.
It runs the inner loop hotter, with more learning per cycle.
The inner feedback loop has to be strong enough to support that weight of the outer loop without collapsing.
Coordination and collaboration are hard for different reasons.
Coordination requires ridiculous amounts of communication and paperwork.
Collaboration requires hard work to create a high-trust environment for diverse perspectives to come together into something wildly better than any individual could have done themselves.
Coordination is not a creative act.
Collaboration is a creative act.
When you create together, the creative tension helps build trust.
How can you kick a coordination challenge into a collaboration one?
By the hard work of removing the emotional, zero-sum-y fear–which is much easier said than done!
Creativity and conflict are two sides of the same coin.
For a thing to be creative it has to be a thing that stands out from the consensus, the default.
That's what it means to be creative!
A thing that is different from the status quo; that is in implicit conflict with it.
Sometimes you have a story problem and sometimes you have a telling problem.
If you have a good story but you aren't telling it well, that's easy to fix.
But if the story isn't good there's nothing that can be done to fix it.
Is the problem the superficial qualities, or the fundamental ones?
A trick for charismatic names for concepts.
Make it so it’s intriguing but unclear.
Draw them in with a “huh?!” that is then followed up with an “aha!”
Bonus points if it’s subversive.
Then, once they understand, it clicks, and every time they see it they’ll know what it means.
People who already know it will feel like they’re part of the same club.
The “answer” could be a pop culture reference everyone already knows (like frog DNA from Jurassic Park) or a simple evocative metaphor (doorbell in the jungle).
Sometimes there’s a nut that you know you’ll have to crack.
Perhaps there is something inescapable in your product domain that means at some point if you don’t crack that particular technical or design nut you won’t be able to make progress.
One option is to go straight up to the nut and try to crack it now.
But that means that now everything is blocked on cracking that nut, making the process high pressure, and you won’t yet have built up strength.
There’s a good chance you spend all of your effort trying to crack it and never succeed.
Are there ways to make progress and build momentum, gathering strength so you can crack the nut on your own terms and timing?
Many problems have irreducible complexity.
If you don’t actually grapple with that complexity, you don’t make forward progress.
You might make the illusion of progress by pushing peas around on the plate.
If you sweep an unavoidable scary thing under the rug, it’s still there!
But now it’s hidden, and you might forget it’s there, which is even more dangerous.
The job of an operator is to create clarity and momentum towards something that the team believes could be great.
Building together on a thing that everyone believes in is magnetic.
Momentum solves all known problems.
If it's not converging, you need to create a Schelling point that everyone can agree is reasonable to converge on.
Being a good systems thinker makes you a worse operator.
You can understand ambiguity abstractly and still get totally frozen by it in practice.
Operators who are too smart can't operate.
You need to be able to turn part of your brain off (the doubts, the "but what about...") and do a thing that others can believe in.
If your brain is constantly pointing out the indirect effects of each decision you'll be frozen and can't move.
You have to be able to move, even though you know it's not perfect.
You want speed but also coherence.
If it's not converging then speed destroys.
If it's converging then speed builds.
The difference between the two is if you’ve already found “grip”, the toehold of momentum.
Startups happen not because someone is smart but because they're courageous.
And secondarily they’d better be smart or it probably won't work!
Someone who is very smart but not courageous wouldn’t do it, while someone who is courageous but not smart definitely might.
Two different use cases for reviewing notes: memorization and generativity.
Things like spaced repetition are about memorizing.
This is fundamentally a convergent process.
But you can also review notes to see what novel insights they spark.
For example, comparing two random notes in juxtaposition and seeing if they spark anything.
This is a divergent, creative process.
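A minimal sketch of that divergent mode, assuming your notes live in a plain list (hypothetical, not any particular notes tool):

```python
import random

def juxtapose(notes: list[str]) -> tuple[str, str]:
    # Pull two random notes and put them side by side.
    a, b = random.sample(notes, 2)
    return a, b

notes = [
    "Momentum solves all known problems.",
    "You love your own mess more than anyone else possibly could.",
    "LLMs are a superintelligent rubber duck.",
]
a, b = juxtapose(notes)
print(f"What connects these?\n  1) {a}\n  2) {b}")
```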
Newbies often would rather just take the expert’s opinion.
The expert might say, “you can do any of these five options!”
But the newbie says, “I don’t even know what the tradeoffs are, or how to even make this decision, just tell me your recommendation and I’ll do that.”
Making good, opinionated, balanced calls is hard and requires deep expertise.
It requires the calibrated curatorial ability to cut with confidence and to understand the implications of the decision, to not do it naively.
To cut you have to understand the tradeoffs that are only obvious with experience.
These cuts then make it wildly easier for beginners; instead of time lost swirling among options, there isn’t even a decision to make.
This is one of the reasons I love Golang as a language.
You love your own mess more than anyone else possibly could.
It’s very important to not over-fixate on your toehold.
Your toehold is the thing that gives you “grip” and allows you to create momentum to rally around.
If you optimize too much for that niche you'll inadvertently create something that only works for that niche and increasingly doesn’t work outside of it.
Getting to a toehold (bootstrapping a feedback loop) is everything, and yet make sure that any specific toehold you find doesn't become your everything.
Imagine the team needs to make it through a scary jungle.
The team might be jittery or nervous about the dangerous trek.
One approach is to make a detailed map that lays out a path through the jungle.
But a plan is a liability; it could turn out to be wrong.
In a dark jungle, there will definitely be unknowns.
If anyone on the team finds any detail that’s clearly wrong, it makes them distrust the whole map.
“We're going into the jungle with a map that is clearly wrong! We'll die!"
Instead the better move is to lean into a clarifying and inspiring high-level vision.
"Yes, getting through the jungle will be hard, but we are wily and adaptable, and on the other side are mountains of gold. We can do this!"
It's easier to align on abstract things, and abstract things are more resilient to a complex/surprising environment, less likely to be revealed to be wrong.
True psychological safety creates the space for unsafe thinking.
Unsafe thinking means looking at disconfirming evidence, actually digging into disagreements, and considering other options.
Psychological safety, when done properly, can create the space for much better outcomes, because it allows rigor and ground-truthing that improves the quality of the ideas.
Superficial psychological safety is about being conciliatory, never challenging anyone on their beliefs, just agreeing all of the time.
This leads to brittle teams doing poor quality work.
To get strong teams doing great work requires true, earned psychological safety.
It’s very hard to do the true one!
For example, it’s easy to fall into a trap of either never challenging anyone’s thinking or, on the other side, intellectual bullying from powerful people who think they’re helping people be more rigorous when actually they’re just pushing others into a defensive crouch.
Building trust between people who are disagreeing and don't currently have trust is hard.
But it’s significantly easier if you find common ground and then build up incrementally from that.
When there’s no common ground, people are in the “they’re on the other team” mindset.
When there’s common ground, at least on that small bit of overlap, there’s “we’re on the same team”.
Find the inches of common ground first, celebrate those and then build up to the parts where you truly disagree.
Now you’ll be on stronger ground to collaboratively debate them as one team, as opposed to two battling teams.
When you're in your flow state individually you're 10x more productive.
When you're in your flow state together as a group, the group is 100x more productive.
To be in your flow state, you have to be in your highest and best use: working on things that are important and that you have the skills for but that push you to the limit–but not beyond.
At a big company, "making a good decision" is often treated as the output.
At the beginning of a company, it's entirely about output: getting grip.
Grip: product-market fit (PMF), toehold, momentum, etc.
Who cares if you made a good decision; it's all about "did you create grip or not?"
Sometimes a “bad” decision that gives you grip quickly is better than a “good” one that does not.
Last week I riffed on how success creates conservatism.
My friend Anthea Roberts shared some interesting observations.
Apparently Howard Gardner did research on common factors that serial innovators have.
Their key distinguishing factor is that they deliberately go to the edges of the system and away from the center.
If their thing becomes successful, instead of staying at the center of what they created and becoming more conservative, they push themselves to the edge to find a new thing to create.
The two unteachable skills for momentum in ambiguity: an ability to see nuance and an optimistic curiosity.
Momentum through ambiguity allows you to differentiate from others.
It is hard to do.
If you can do it and your competitors don’t, you can find the truly great, differentiated ideas.
An ability to see nuance means that you know the world is not single dimensional and black and white, but multidimensional and shades of gray.
If you find something surprising, that implies there is a dimension or gradation you don’t yet sense.
But realizing there is something you don’t yet see is the first step to seeing it.
Optimistic curiosity is that when you find disconfirming evidence, instead of interpreting it as a threat, you see it as an opportunity, a way to gain more momentum through increased clarity.
Surprise should feel like potential momentum, not a setback or an excuse to become more cynical or disappointed.
Optimistic curiosity is what lets you go faster when you find disconfirming evidence.
The two skills together: understand and simplify.
You need to be able to sense the wisdom in order to absorb it, and you need to want to absorb it.
With neither skill, you will get stuck immediately in the mud of the ambiguity.
With only the optimistic curiosity, you will get momentum without absorbing the nuance, and you will speed yourself off a cliff.
With only the ability to see nuance, the more nuance you sense, the slower you will go, getting stuck in the jungle.
These two skills are unteachable. You either have them or you don’t.
If you don’t have them, the only way to gain them is to be reborn with them.
Rebirth must come via a crisis.
The rebirth must feel first like death.
Even if you know the path, you must walk it yourself.
Even if you have loving guides with you who want to help you with it, you will push them away before you are ready.
The crisis must be alone, even if you are surrounded by love.
This is because you must face yourself as you actually are.
All of the contradictions and self sabotage and imperfection.
Before the crisis you think you are becoming a more perfect version of you.
After the crisis you realize perfection is impossible, and that fact is beautiful.
If a given type of toxic situation has recurred across many different contexts you’ve been in, the common denominator is you!
Perhaps you are causing it in some indirect and non-obvious way?
It is only through real self reflection that growth can happen.
People who are very smart can delay their emotional growth.
They can create rational-sounding protective emotional armor that prevents them from having to grow emotionally or confront hard truths.
When they’re doing that, they’re impossible to help.
They’re protected enough to not have the full on crisis, and also don’t know they need help to navigate it.
If a crisis that might lead to self growth is happening in an organizational context, it’s hard to trust it.
“Is this just the machine trying to manipulate me to get a better result for the business, in a way that might harm me?”
Often the crisis occurs in a work context, but the rebirth that comes after it can be authentic and personal.
When you have your crisis and rebirth, you are extremely vulnerable; it’s important to know you are in a safe space.
You have to actively remember to look for disconfirming evidence.
You don’t have to remember to look for confirming evidence.
We do it naturally because it feels good to have your beliefs confirmed.
But disconfirming evidence is how you get stronger.
That’s why it’s especially important to proactively look for disconfirming evidence.
You at your worst and you at your best are right next to each other.
You’ll think you’re still in your best zone and actually you’ve shaded into your hidden worst case.
The part where you're not creating value but you think you are.
Realizing that you are not creating value, or that you’re destroying value, will be even harder.
Because it will be intertwined with your ego.
“No, no, I can’t be doing a bad job here because a defining characteristic of what makes me good is this skill.”
It doesn’t matter if you’re right, it matters if the thing gets done.
If people think there are two camps, then it will become true.
A self-catalyzing belief, and very easy to fall into.
When there’s a boundary between two teams, that boundary will be strengthened and accentuated, even if it started off meaningless.
The most iconic example of this is the famous Robbers Cave summer camp experiment, which randomly assigned boys into two camps; within weeks it escalated to warfare.
"We're all on one team!" is important to repeat often to counteract this.
If you learn to survive on your own in the woods you’ll learn tactics that in civilized society might be directly counter productive.
Or you’ll learn tactics that aren’t actually as effective as you think, because no one was there to guide you any better.
If you feel like you haven't had the success you deserve (e.g. relative to your peers) you get a chip on your shoulder, and you're less open to learning or growing.
You defensively crouch on things that aren't working for you, and that you don't even realize aren't working.
"I'm doing the same thing as them but what I'm doing isn't working for me, and it's working for them. That just shows how unfair it all is!"
Perhaps you’re only doing something that is superficially the same but is missing some fundamental component?
When you feel like you've been successful you'll be able to grow and look at yourself critically, to be more playful, open to change.
Remember "thinking you're successful" is not some external objective fact, it's a mindset.
A conscientious person can get in a toxic martyr loop.
(Something I discovered about myself, with great effort, over years of couples counseling.)
The martyr sees a thing the group needs, and they jump in to do it... but resent having to do it.
Sometimes they do work that no one asked for, and also don't do a particularly good job at it, and all it manufactures is resentment.
A savior needs a thing to save.
Which means sometimes they'll break things, or allow things to break, so they have things to save.
If you're looking for villains, you will find them.
Even if they weren't there.
You'll make a villain.
If you keep on finding villains everywhere you look, then you will never grow, because there’s always a convenient external excuse.
If you back someone into a corner and chain them up, you create a monster.
They will lash out in dangerous and grotesque ways, not because they are fundamentally a monster, but because you put them in a situation where anyone would be a monster.
Assuming bad faith is a bad faith action.
By assuming bad faith in the other you are being bad-faith in the first place.
Once one party assumes bad faith, it is a spiral that cannot be pulled up from without significant, heart-wrenching, vulnerable facilitated discussions… or everything exploding.
Bad-faith actions are toxic; they start a spiral, a bad-faith cascade where each party takes actions in response to the other’s bad faith, which are in turn interpreted as even worse bad faith.
So don’t take the first step and start the bad faith cascade.
One way this can start: someone plays back what the other person said, but willfully creates a caricatured, unreasonable straw man that presumes bad-faith motives, and presents it as objectively true.
Fight the enemy, not the terrain.
If you try to protect your collaborators too much, you might infantilize them.
You insulate them from real constraints and realities, and make them more brittle to things that might need to change.
You also might think they have a stricter constraint than they actually do, because you don't know what they think about the real situation, but instead know what they think about a simplified one.
So you conclude they are more brittle than they are.
If you think someone is brittle, you will make them brittle.
You'll isolate them from things that could help them become stronger, and make it more catastrophic when they do interact with something outside what they were expecting.
As long as both people in the relationship want it to work, it can work.
If either decides they don’t want it to work, a bit flips.
It goes from possible to converge to impossible to converge in a moment.
Parents of preschoolers have all seen it: kids follow their teachers’ instructions way better than their parents’.
What’s going on?
I think part of it has to do with the kids seeing all of the other kids following along.
If the teacher issues an instruction, and 90% of their peers all follow along, it puts more social pressure on the individual to go along.
Everyone else is doing it, so it can’t be that unreasonable.
They don’t want to be the odd one out.
This is even more effective with mixed-age classrooms: the more mature older kids are a kind of seed crystal of rule following for the rest of the class to follow.
It seems like a metastable equilibrium, though… You could imagine on day one if most students don’t follow the teacher’s instructions, it could very quickly switch into another stable equilibrium of chaos where kids learn to not follow the teacher’s instructions because no one else does.
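A toy conformity simulation (my construction, not from any study) shows the two equilibria: each kid follows along if enough peers already are, with a little noise.

```python
import random

def simulate(n_kids: int, initial_followers: int, threshold: float = 0.5,
             steps: int = 10) -> float:
    following = [i < initial_followers for i in range(n_kids)]
    for _ in range(steps):
        share = sum(following) / n_kids
        # Each kid conforms to the majority behavior; 5% flip at random.
        following = [share >= threshold if random.random() > 0.05
                     else share < threshold
                     for _ in range(n_kids)]
    return sum(following) / n_kids

random.seed(0)
print(simulate(20, initial_followers=18))  # ~0.95: order locks in
print(simulate(20, initial_followers=4))   # ~0.05: chaos locks in
```

Start above the tipping point and order persists; start below it and chaos does.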
If I were a preschool teacher I’d be an anxious mess that very first day of class!