DD false modesty example in recent YT vid (Analyzing Lies example)


Max Kaye

Aug 14, 2020, 12:42:02 AM
to Fallible Ideas List
DD said this in a video he published ~recently (December 2019). In it, Lulie is asking him about the 'Fun Criterion'.
She does a bit of interviewing through the rest of the video, but it's mostly DD talking.

The part I want to focus on is ~15s to 21s. The quoted dialog starts at 9s. There's a quick 1s cut to Lulie after DD
says that sentence. I'm not sure what her facial expression is but the cut and the expression both seemed odd to me.

https://www.youtube.com/watch?v=idvGlr0aT3c&t=9

> Lulie: It seems like you're saying something epistemological instead of just like "this is David's life advice".
> David: Yes, well, uhhhm, far be it from me to give life advice; that would be terrifying.

I think it would be fine if he expressed that he wasn't aiming to give life advice (after all, what you do with an idea
is up to you), but if he wanted to do that he should have done so directly.

But instead he said it would be **terrifying**. I think that's a socially acceptable way to react but it's false b/c it
isn't terrifying to think about DD giving life advice. In fact, of all the people on the planet he's probably a **very
good choice** on the whole. There are not many people who'd be a better choice.

He's implying his life advice would be *bad*, and he's doing that even though he doesn't believe it, and Lulie
presumably doesn't believe it either. I don't believe it (at least generally, maybe his life advice on social dynamics
I'd take with more salt). What is that statement doing there? It is jarring and out of place to me - though most of
what I know of DD is via BoI, so maybe lack of experience on my part is a factor.

It's also condescending to his audience, some of whom he's attracted by having *good ideas*! What else is
life advice meant to be made of?

--
Max
xk.io

I post my FI work/articles/exercise/practice here:
https://xertrov.github.io/fi

I sometimes stream to an FI specific YouTube channel:
https://www.youtube.com/channel/UCscXO9PxvE7M2YZQmJo6LwA

I track my work publicly (somewhat lossy) here:
https://trello.com/b/Rp09E9RI/max-fi-flux-work-other

I also keep some public FI notes in a google drive folder:
https://drive.google.com/drive/folders/1_CXXYqdyV8FE5BWXMGChlhd76vL4gc--

Justin Mallone

Aug 14, 2020, 9:21:38 PM
to Fallible Ideas List

On Aug 14, 2020, at 0:41 AM, Max Kaye <m...@xk.io> wrote:

> DD said this in a video he published ~recently (December 2019). In it,
> Lulie is asking him about the 'Fun Criterion'.
> She does a bit of interviewing through the rest of the video, but it's
> mostly DD talking.
>
> The part I want to focus on is ~15s to 21s. The quoted dialog starts
> at 9s. There's a quick 1s cut to Lulie after DD
> says that sentence. I'm not sure what her facial expression is but the
> cut and the expression both seemed odd to me.

Ya I don't see the point of a cut to what seems like a
non-reaction/neutral face, unless you just assume that Lulie wants to
get her face in there as much as possible.

> https://www.youtube.com/watch?v=idvGlr0aT3c&t=9
>
>> Lulie: It seems like you're saying something epistemological instead
>> of just like "this is David's life advice".
>> David: Yes, well, uhhhm, far be it from me to give life advice; that
>> would be terrifying.
>
> I think it would be fine if he expressed that he wasn't aiming to give
> life advice (after all, what you do with an idea
> is up to you), but if he wanted to do that he should have done so
> directly.
>
> But instead he said it would be **terrifying**. I think that's a
> socially acceptable way to react but it's false b/c it
> isn't terrifying to think about DD giving life advice. In fact, of all
> the people on the planet he's probably a **very
> good choice** on the whole. Like there are not many people to choose
> who are better.

He also actually gave what many would reasonably view as life advice
regarding stuff like parenting ideas for quite a while. See e.g.
https://curi.us/tcs/Articles/index.html
I guess he doesn't want to admit that now though and instead prefers to
feign humility?

> He's implying his life advice would be *bad*,

Well, he's saying stuff that will be taken by many people in that way -
as meaning his advice would actually be bad/terrifying. This implication
was intentional IMHO.

But you could also read it as him saying that people would *find* his
advice to be terrifying, or even as the results of him giving life
advice being terrifying for *him* (due to people's negative reaction).
He omitted *why* it'd be terrifying and *for whom*, which really leaves
the interpretive playing field wide open... Him being vague about those
details doesn't seem like an accident to me.

> and he's doing that even though he doesn't believe it, and Lulie
> presumably doesn't believe it either. I don't believe it (at least
> generally, maybe his life advice on social dynamics
> I'd take with more salt). What is that statement doing there? It is
> jarring and out of place to me - though most of
> what I know of DD is via BoI, so maybe lack of experience on my part
> is a factor.
>
> it's also condescending to his audience; an audience (some of whom)
> he's attracted by having *good ideas*! What else is
> life advice meant to be made of?

Most of DD's fans are gonna be easily turned off by lots of stuff he
could say if he stated his full opinion, and that is particularly true
of stuff that implies that they're living their lives wrong.

-JM

Elliot Temple

Aug 14, 2020, 9:59:43 PM
to FIGG
I think you guys are wrong (and aggressive and combative) and should drop this and do your best to hold no negative opinion about DD over that statement.

Elliot Temple
www.curi.us

Max Kaye

Aug 15, 2020, 12:53:56 PM
to fallibl...@googlegroups.com
I'm looking for:
* particularly whether prioritising the first example is important
* advice or thoughts on what to do when stuck on feedback

when I say "judging feedback" I mean: its meaning, purpose, reasons/explanation, quality, urgency, significance,
reach, etc.

I have two examples, the second of which I think I'm okay with (like not making any major mistakes currently).

Some feedback I can't discuss at the time because I don't know enough. The strategy I've been using recently is to note
it down and think about it in the background, or come back to it once I've learnt something relevant. If there's a
reason to prioritise it then I should consider that, but mostly it doesn't seem like there is. This is, in part, to
avoid becoming overwhelmed and keep to a load I can manage.

I expect this sort of thing to be a general problem people have.


Example 1:

(from thread: `DD false modesty example in recent YT vid (Analyzing Lies example)`)

On Fri, 14 Aug 2020 18:59:39 -0700 Elliot Temple <cu...@curi.us> wrote:

>On Aug 14, 2020, at 6:21 PM, Justin Mallone <jus...@justinmallone.com> wrote:
>
>> On Aug 14, 2020, at 0:41 AM, Max Kaye <m...@xk.io> wrote:
>>
>>> (omitted)
>>
>> (omitted)
>
>I think you guys are wrong (and aggressive and combative) and should drop this and do your best to hold no negative
>opinion about DD over that statement.

This is an example of feedback I don't know how to judge and learn from when I received it. I don't even know how to
prioritise it.

I'm particularly concerned with this feedback b/c it relates to stuff I'm actively learning and I don't see a reason I
should expect to understand the feedback in the near future. The thread it's from was made, in part, b/c I've been doing
some Analyzing Lies stuff. Analyzing Lies is a topic that has a lot of reach but that also makes it potentially
dangerous, like applying it inappropriately is potentially serious (by making mistakes that are permanent or hard to
undo/fix). So I think it's somewhat important to at least figure out the prioritisation question (if it's low, then I
can just avoid a behaviour or conflict; not ideal but safe).

I think I am okay at judging what the feedback *means*, like grammatically. There are 4 "why?"s that come out and I
think I need to be able to answer them all to say I've understood what was meant. If I don't understand the "why"s I
can't detect or criticise an error well (the error could be in the feedback or my ideas; I doubt my ideas more).

The 4 "why"s are around the following:

1. "you guys are wrong"
2. "[you guys are] aggressive and combative" (**note: this is a parenthetical**)
3. "[you guys] should drop this"
4. "[you guys should] do your best to hold no negative opinion about DD over that statement"

I don't know the answer to (1). This is something I think I could learn about by continuing with Analyzing Lies.

I think I have half an answer to (2), but it's incomplete. I don't want to just like 'turn down a sensitivity knob' or
something without knowing why (though suppressing this sort of thing for a little bit is fine; like flagging it to come
back to later).

(3) is safe to do without understanding provided I come back and understand why later. It can relate to the answers
for both (1) and (2).

(4) is easy to do, though figuring out how to judge relevant stuff in future is still a problem. (stuff being both
the things ppl say (similar to (1)) and the things to think as a result (like, what should my reaction be?)) I think I
can develop ideas on this without much urgency provided I err on the side of caution.


Example 2:

(from thread: `low error rate`)

On Mon, 27 Jul 2020 22:58:08 -0700 Elliot Temple <cu...@curi.us> wrote:

>On Jul 27, 2020, at 9:38 PM, Max Kaye <m...@xk.io> wrote:
>
>> Why don't you split things into 'learning' and 'low error rate' categories?
>
>Learning is more efficient with a low error rate.
>
>An error is a failure at a goal.

My situation here is different to the last one but this example is older.

There's a conflict I have between trying to do things with a low error rate and efficient learning. I don't know how to
answer Qs like "what are the times where a higher error rate is okay?", "should you ever do things where you have a
higher error rate if there are lower error rate alternatives?", and more generally "how to learn?" I can give some
answers but they're not (close to) complete. I don't need to prioritise it very much though because I'm not stuck.

I've been gaining a better intuition recently for how beneficial learning certain things is (compared to
alternatives). This is helping to resolve the conflict between what ET's said and the question of "how to
learn?" I don't fully understand everything yet, but I think I'm making progress; getting closer to a point where I can
discuss it more competently.

I feel like my 'think about it in the background' strategy is working okay here, it's not a bottleneck, etc.


(aside: I'm unsure about making this post. maybe because I think it's like somewhat unfinished or the ideas aren't
developed enough. Besides saving it or just putting it on my site, I don't see a decent alternative to posting it,
and b/c the topic might be important it's better to post it than not. Also it's on/over the upper limit of
https://curi.us/1805-optimal-fallible-ideas-post-size-and-style)

Anne B

Aug 16, 2020, 12:05:24 AM
to fallibl...@googlegroups.com
On Aug 15, 2020, at 12:53 PM, Max Kaye <m...@xk.io> wrote:

> Example 2:
>
> (from thread: `low error rate`)
>
> On Mon, 27 Jul 2020 22:58:08 -0700 Elliot Temple <cu...@curi.us> wrote:
>
>> On Jul 27, 2020, at 9:38 PM, Max Kaye <m...@xk.io> wrote:
>>
>>> Why don't you split things into 'learning' and 'low error rate' categories?
>>
>> Learning is more efficient with a low error rate.
>>
>> An error is a failure at a goal.
>
> My situation here is different to the last one but this example is older.
>
> There's a conflict I have between trying to do things with a low error rate and efficient learning. I don't know how to
> answer Qs like "what are the times where a higher error rate is okay?", "should you ever do things where you have a
> higher error rate if there are lower error rate alternatives?", and more generally "how to learn?" I can give some
> answers but they're not (close to) complete. I don't need to prioritise it very much though because I'm not stuck.

I feel more comfortable learning with a low error rate than with a high error rate, so that’s what I currently aim for. But I don’t know if it’s more efficient. I’m not putting much effort into trying to figure it out, just keeping it as an open question for now.

Max Kaye

Aug 16, 2020, 3:24:07 AM
to fallibl...@googlegroups.com
My recent experience has been very different to the rest of my life. Particularly wrt feelings of exponential progress.
I have some thoughts about this in my [thought
backlog](https://xertrov.github.io/fi/thought-backlog/#conflict-learning-and-error-rate).

Some things that occur to me:

* pre-existing knowledge helps, e.g. grammar was maybe easier for me b/c of my knowledge of logic, trees, etc
* exponential progress is due to increasingly many *interacting* areas of knowledge;
* you can't always predict the skills you'll need for a particular task, so a large library is crucial
* when you run into a required skill you don't have you *might* get stuck
* at this point indirection is necessary, which can be frustrating
* also there's the q of whether it's worthwhile
* relevant xkcd: "Is It Worth the Time?" https://xkcd.com/1205/
* convergence of ideas in your library is somewhat unpredictable but basically only gives you bonuses. without
prioritising the breadth of one's library above all else (irrationally so), it's hard to see how one would be hurt by
increasing the breadth of one's library
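The xkcd referenced above frames automation as a break-even calculation: seconds saved per use, times how often you do the task, over some horizon (the comic uses 5 years), versus the one-off cost of automating. A minimal sketch of that arithmetic (the function names and the numbers in the usage note are my own illustration, not anything from this thread):

```python
FIVE_YEARS_DAYS = 5 * 365


def time_saved(seconds_shaved: float, uses_per_day: float,
               horizon_days: float = FIVE_YEARS_DAYS) -> float:
    """Total seconds a shortcut saves over the horizon."""
    return seconds_shaved * uses_per_day * horizon_days


def worth_automating(seconds_shaved: float, uses_per_day: float,
                     automation_cost_seconds: float,
                     horizon_days: float = FIVE_YEARS_DAYS) -> bool:
    """True if the time spent automating is repaid within the horizon."""
    return time_saved(seconds_shaved, uses_per_day, horizon_days) >= automation_cost_seconds
```

E.g. shaving 5 seconds off a task done 5 times a day saves about 12.7 hours over five years, so (on this toy model) automating it is worth up to that much effort, and anything costing more isn't.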

so WRT programming say, if you're learning it for the first time and don't have a lot of helpful pre-existing skills
(e.g. practice w/ mathematical functions) then more stuff needs to be learnt => slower progress.

so in that example there's Qs around acquiring other skills. Let's say you brainstorm 5 skills that are necessary and
you have 3. You can practice the other 2, but that's only a finitely long path: you can't break stuff down indefinitely
to get like near-infinite progress. so there is some limit we can't practically exceed except with like more time
(proportionally). (more time absolutely is a thing you're doing anyway by continuing to learn.)

so the Qs are like "are there relevant things to learn as foundational elements first?" and "at what point does
something become borderline-relevant such that it might or might not be worth the time?"

Taken to an extreme one might conclude that *all* foundational stuff is worth focusing on first before advanced stuff -
and on an infinite time scale this might make some sense (excluding local urgency). But we're not on that time scale so
practical trade-offs will need to be made at least some of the time.

My rough idea at this stage is like: I need to re-adjust the *balance* I had between when to do low-error / foundational
stuff vs higher-error rate / ~aspirational stuff. I thought the aspirational stuff was more valuable than it is,
relatively speaking.

The XKCD comic is particularly about automation, which I think is very related b/c getting skills to the "autopilot"
level is similar to automating a task. Doing so reduces mental load, increases velocity and reduces cycle time, allows
you to build new things using the abstractions created by the automation, etc.


--

## self eval according to https://curi.us/1805-optimal-fallible-ideas-post-size-and-style

Bit long (470 words) but otherwise okay.

Alan Forrester

Aug 16, 2020, 6:49:04 AM
to fallibl...@googlegroups.com
On 15 Aug 2020, at 17:53, Max Kaye <m...@xk.io> wrote:

> I'm looking for:
> * particularly whether prioritising the first example is important
> * advice or thoughts on what to do when stuck on feedback
>
> when I say "judging feedback" I mean: it's meaning, purpose, reasons/explanation, quality, urgency, significance,
> reach, etc.
>
> I have two examples, the second of which I think I'm okay with (like not making any major mistakes currently)
>
> Some feedback I can't discuss at the time because I don't know enough. The strategy I've been using recently is to note
> it down and think about it in the background, or come back to it once I've learnt something relevant. If there's a
> reason to prioritise it then I should consider that, but mostly it doesn't seem like there is. This is, in part, to
> avoid becoming overwhelmed and keep to a load I can manage.
>
> I expect this sort of thing to be a general problem people have.
>
>
> Example 1:
>
> (from thread: `DD false modesty example in recent YT vid (Analyzing Lies example)`)
>
> On Fri, 14 Aug 2020 18:59:39 -0700 Elliot Temple <cu...@curi.us> wrote:
>
>> On Aug 14, 2020, at 6:21 PM, Justin Mallone <jus...@justinmallone.com> wrote:
>>
>>> On Aug 14, 2020, at 0:41 AM, Max Kaye <m...@xk.io> wrote:
>>>
>>>> (omitted)
>>>
>>> (omitted)
>>
>> I think you guys are wrong (and aggressive and combative) and should drop this and do your best to hold no negative
>> opinion about DD over that statement.
>
> This is an example of feedback I don't know how to judge and learn from when I received it. I don't even know how to
> prioritise it.

The comment states what you should do. And do you think that modifying your behaviour to be less aggressive is a low priority?

Alan

Max Kaye

Aug 16, 2020, 3:36:17 PM
to fallibl...@googlegroups.com
the comment states some things, but I have questions. 'drop' and 'hold' are the (modified) actions I should do, and
doing them isn't hard or the issue.

it doesn't help answer:

* when to apply analysing lies stuff and when to share it?
* what things someone does or says are good or not good to analyse? where are the boundaries?
* should I keep doing this *sort* of thing but just, like, gently / not as aggressively?
* etc

for those I need an explanation of why it was wrong. (note: ET's comment doesn't say it *was* wrong to do but I suspect
it would be implied by the underlying explanation)

I have some thoughts on that topic in another email I'll send after this one.

> And do you think that modifying your behaviour to be less aggressive is a low priority?

That depends on whether it's a problem; I don't think it's causing me many issues if it is. but even if I was
aggressive in a way that was causing issues, it's hard to think of why directly trying to be less aggressive would be
the right thing to do. like I imagine there'd be other underlying ideas that were better things to change. a change to
those things would have more reach. if I toned down aggressiveness in general without a good explanation I might avoid
being aggressive or confrontational when it'd be good to do so.

I say "I don't think it's causing me many issues if it is" above, but I'm not confident it's causing me *no* issues
(the DD thread aside). I have a significant memory from ~7 years ago when a friend told me that 'sometimes [I] use [my]
intelligence to bully people', which I hadn't ever considered before and I didn't have a reply to it. I considered
myself someone who'd been on the receiving end more often than not so it didn't occur to me that it could be like a
*habit*. I *occasionally* get similar comments but they're to a lesser degree, now.

So, yes, I do think it's a low priority right now, but I'm not convinced of it. Resolving the uncertainty is a higher
priority b/c it could be a source of errors if "low priority" is the wrong answer.

Max Kaye

Aug 16, 2020, 3:38:05 PM
to fallibl...@googlegroups.com
I've had some thoughts since yesterday:

* one big difference with the analysing lies stuff is that the content is from people who claim to know something,
talking about the subject in a way that's incompatible with that knowledge. like an academic claiming to be an
expert but getting stuff wrong.

* the "DD false modesty example" thing wasn't like that; it's a bit like criticising a philosopher giving a talk
because they made some small talk before starting (in fact it's exactly like that)

* I'm somewhat reticent to mention it b/c of the "should drop it" thing. but it's not clear it's bad to mention it
like this and this email has postmortem-y qualities, so I think it's okay.

* my original post in question is criticised by the idea that one should only apply (public?) analysis to ppl who
make claims publicly in either an irresponsible or purposeful way. like throw-away / small-talk lines aren't a good
or valuable subject. (note: this line of reasoning doesn't help me w/ why it was incorrect analysis, tho, which is
a separate problem)

* ~everyone lies with small-talky-stuff like that (like does things that are socially acceptable/calibrated), so it's
unfair to criticise ppl who aren't trying to claim anything substantial just b/c I'm exposed to their content. plus
that means (maybe) thinking worse of people who I otherwise like w/o thinking worse of ~everyone else, which is
hurting me (for focusing on it) and hurting them (talking about it publicly w/o context of ~everyone else's behaviour)

* this isn't really contradicted if I was outright (or partially) incorrect; that'd only make the consequences worse

* so this is a big lead wrt thinking about why it was like *morally* wrong (if it was)


I don't think I noticed when I posted this thread but in ET's comment there are some implied words in statement 1
that I mentioned ("you guys are wrong"):

1.1 "you guys are wrong [in your analysis]"
1.2 "you guys are wrong [to have posted/done this]"

I only saw 1.1 originally.

the 1.2 reading fits well with statement 2; they go together better than 1.1 and 2 (which are more like 2 separate
points).


Maybe trying to do Analysing Lies stuff is sort of like practicing with a weapon; in that it's capable of good and bad
things but it needs to be wielded with a seriousness and maturity befitting it. I'm reminded of a line from Peterson:
(paraphrasing) 'morality is having a sword and refusing to draw it' (or something like that). like you need to be
capable of doing harm to be moral; analysis like analysing lies can do (great) harm if used improperly.

the implications of that are: don't wield a weapon in public, that's what the practice range is for; and consider
carefully before you do, why you're using it, and what you're using it against (i.e. do the consequences align with a
reasonable goal?)

i don't like the idea of using a weapon as an analogy (because of recent truncheon related discussion) but it suits this
circumstance somewhat -- in large part due to me.

Elliot Temple

Aug 16, 2020, 3:41:06 PM
to fallibl...@googlegroups.com
If I wanted to make a more general comment, I would have. I didn't tell you to e.g. apply or share less text analysis in general. I don’t know why my comment would be expected to help answer stuff it didn’t bring up.


Elliot Temple
www.elliottemple.com

Justin Mallone

Aug 16, 2020, 4:18:48 PM
to 'Kate Sams' via Fallible Ideas
I took Elliot's comment as being very specific to the analysis/comments/opinions of DD that were offered based on the statement in the video clip, and not something with a huge amount of reach in terms of other potential posts I could write.

Elliot has a unique context from which to analyze, interpret, and judge the meaning of DD's statements. So if he says I erred in interpreting such a statement, I'm happy to defer to his expertise and drop the issue.

-JM

Max Kaye

Aug 16, 2020, 5:06:29 PM
to fallibl...@googlegroups.com
On Sun, 16 Aug 2020 16:18:23 -0400
This is a good point.

Max Kaye

Aug 16, 2020, 5:22:00 PM
to fallibl...@googlegroups.com
On Sun, 16 Aug 2020 12:41:03 -0700
I didn't mean to imply I was expecting any of that from your comment. Those were questions that came up trying to think
about prioritisation and whether there was some important error that I should know about or understand. It bothers me
when I don't know if something was a mistake or not, or why.

I'm starting to think I've overthought this and need to step back a bit.