You Won't Believe This!

Kurt Annaheim

Mar 5, 2026, 6:09:44 PM
to google groups, befre...@befreetech.com

LMHR/Keto CTA CONTROVERSY - YOU WON'T BELIEVE THIS! with Feldman & Dr Norwitz

Subject: Transcript (cleaned + email-ready)

Because I think Dave deserves to break some headlines here.

Dave, before you bring the latest headlines, can you give everybody—because not everybody’s familiar—a two-minute summary of how we got here, why this all began in the first place, and then bring us through the lean mass hyper-responder story?

Everybody watching: we’ve got [xx] people already watching this. If you know somebody who’s been low-carb, keto, or carnivore, please shoot them a text message, a DM, or share this on your favorite social media—because when you hear what these two gentlemen are about to tell you, you’re going to be like, “I suspected this… but are you for real?”

Go ahead, Dave.


Yeah. So here’s kind of the setup.

For those people who go on a ketogenic diet—people like myself, like Nick Norwitz—we tend to be leaner and more metabolically healthy. There tends to be an increase in LDL cholesterol, often coupled with an increase in HDL cholesterol and low triglycerides.

(Nick, your keyboard is quite loud—just giving you a heads up. Actually, I’m kidding.)

Anyway: the gist is that we’ve known for a while that we wanted to study this. In 2019, I founded the Citizen Science Foundation, a scientific public charity, and we raised money through a crowdsourced study.

That study recruited 100 participants who would fly to UCLA and, at the Lundquist Institute, get a high-resolution heart scan known as a CT angiogram (CTA). They would get it at day zero, and then a year later they would get a second scan. So we’d have 100 baseline scans and 100 follow-up scans—200 total scans, two for each participant.

That’s important because this is a longitudinal study: multiple time points (in this case, two), so we could track plaque presentation at baseline and progression at follow-up, to see what the differences were.

After collecting baseline data, we could see that baseline plaque levels for our participants already looked pretty good—relatively low. We started publishing even with baseline data, including a matched analysis (cholesterolcode.com/papers). That matched analysis was already interesting to the low-carb community.

Eventually, we got longitudinal data, and the first analysis looked really good. This was the first of four analyses. That first analysis is semi-quantitative—kind of the “horseshoes and hand grenades” version—but it was looking fairly good, and that’s what we wrapped the movie with at the time.

Then we got a second analysis through an AI company called Clearly. This was the first quantitative analysis—computer-based, higher-resolution plaque quantification. The key point: once we had two analyses, they broadly agreed on two major findings:

  1. Baseline plaque was very predictive of future plaque progression. The more plaque you had in the baseline scan, the more likely you had more plaque at follow-up.

  2. There was no association between ApoB or LDL levels and future plaque progression.

Both datasets agreed on that, and it became the basis of the April 7 paper from last year.
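
For readers who want to see what testing that kind of association looks like in practice, here is a minimal sketch in Python. The numbers are hypothetical placeholders, not study data, and this is not the analysis code the team used:

    from scipy.stats import spearmanr

    # Hypothetical placeholder values, NOT study data.
    apob_baseline = [95, 160, 210, 310, 145, 250, 180, 120]      # mg/dL
    plaque_change = [4.0, -1.5, 0.7, 2.1, 6.3, -0.4, 1.2, 0.9]   # mm^3

    # Rank-based correlation between baseline ApoB and plaque change.
    rho, p_value = spearmanr(apob_baseline, plaque_change)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

    # "No association" looks like rho near zero with a non-significant p-value.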

However, there was one place where the two datasets weren’t in as strong agreement as you’d expect compared with other studies: the Clearly dataset seemed to show a higher magnitude of plaque and greater ubiquity. There was also a regressor in the original dataset, but by the final analysis it turned out that regressor wasn’t a true regressor—there was a lab accounting error—so in truth there were no regressors in the Clearly dataset. By “regressors” I mean participants who had less plaque on the follow-up scan than on the baseline scan.

That’s the setup—and it’s important—because after the April 7 publication, I was particularly interested in seeing the raw anonymized data. Contractually, once we published, Lundquist was required to deliver that dataset to the Citizen Science Foundation. I went into analysis and quickly started seeing patterns you don’t typically see in these other studies. I brought that to the attention of Lundquist and Clearly.

At that point, as Nick knows, I had to go quiet with the rest of the team because the Citizen Science Foundation, Clearly, and some major donors had to have a lot of closed-door (virtual) meetings.

And here’s the thing, Ken: I genuinely thought this was going to take a short amount of time. I thought we’d come to an understanding on the right way forward for transparency, and the biggest, most important thing was that we needed a fully blinded analysis through the Clearly platform.

Because there were these patterns that weren’t easy to explain, I suspected the results might be different if there was a fully blinded analysis—which is what’s properly required for longitudinal studies. As a lead researcher, you’d absolutely expect and want all data blinded.

Correct. Always. It’s a scientific standard that should be expected.

Later, it was found there was something with the delivery of the data to Clearly: what they received were not fully blinded scans. In the metadata, the dates were exposed. I’m not saying that I know this was acted upon—just that it wasn’t fully blinded.

So behind the scenes we asked: can you run a quality control check and run the scans again? We weren’t asking for changes beyond the fact that it should be a blinded reanalysis. If the measurements are stable, it should be pretty close to the original dataset.

Correct.

Short answer: their answer was no.

To be fair to them, their position would be that it’s an irregular request, and I took them at their word at the time.

Can I interject?

Yeah, go ahead.

It is not an irregular request. And I want to stand up for my friend on his behalf: throughout this process, bear in mind who Dave is. He’s coming from outside academia, and there have been times where people have taken advantage of his naiveté. That’s exactly what someone who knows the inside baseball would say to someone like Dave Feldman: “That’s an irregular request—we never do such a thing.”

It’s irregular for them to have unblinded scans. That’s why it’s irregular.

Also: you weren’t asking for charity. You were going to pay them, correct?

Yes. We published in April, and within days—maybe a week or two—we realized we not only wanted a blinded analysis, but if money was a consideration, we volunteered to pay for it fully. We weren’t asking them to do it for free.

Yes.

And there’s something else I’ve never gotten to talk about publicly: I feel like I’ve been living in a real-life parable of the emperor’s new clothes. As an engineer, when you’re seeing only positive values and people say, “Well, this was an untreated population with sky-high LDL, so it makes sense they would all have plaque progression,” I’m saying: you can believe that, but if you’re talking about low levels of plaque—low levels of anything where resolution isn’t that tight—you expect wobble. You expect bidirectional scatter.

It’s like trying to measure marbles on a bathroom scale versus bowling balls: you’d have a tougher time with marbles.

And that’s all I was saying. I felt it was a communication problem to explain that with many participants having low levels of plaque, you should expect some bidirectional scatter below the noise floor.
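
To make the marbles-versus-bowling-balls point concrete, here is a tiny simulation in Python. All the numbers are made up for illustration; the point is only that small true changes plus scan noise produce apparent regressors:

    import random

    random.seed(0)

    N = 100
    TRUE_CHANGE = 0.5   # small true plaque increase (arbitrary units)
    NOISE_SD = 2.0      # per-scan measurement noise (arbitrary units)

    apparent_regressors = 0
    for _ in range(N):
        baseline = 10 + random.gauss(0, NOISE_SD)                # measured baseline
        followup = 10 + TRUE_CHANGE + random.gauss(0, NOISE_SD)  # measured follow-up
        if followup - baseline < 0:
            apparent_regressors += 1

    print(f"{apparent_regressors}/{N} look like regressors from noise alone")
    # With noise this size, a sizable fraction land below zero, so seeing
    # zero regressors out of 100 would itself be surprising.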

That’s one of the things when I first read the study—because I’ve interacted with thousands upon thousands of people eating real keto and real carnivore, and I’ve seen plaque regression hundreds of times. So when results came out and there was only one person who had regression out of 100, I was like: that’s not what I’m seeing in reality. But this is a controlled blinded study, so it must be right.

Nick’s going to pop a spring if we don’t let him jump in. Go Nick.

Sorry—I’m hoping I don’t repeat anything because I heard only the end of what Dave said. Tech issues. Apologies.

Before we get into what happened next, I want to emphasize: people are confused about why we didn’t check this before the paper came out. Ironically, because Dave and I were affiliated with the study via the funding group, there were blocks in place to protect integrity—which makes sense. Appropriate.

Yes.

Let me make this abundantly clear: what would it look like if the engineer who spearheaded the study said, “Wait a sec—I’m not sure I agree with you. I’m not sure I have high confidence in this dataset,” and obstructed a study because he didn’t like how the data looked?

That’s why you want things blinded, and why it makes sense we needed to get to the final paper.

And the two major findings are stable—and still are stable: plaque predicts plaque, and ApoB does not predict it. That was still publishable.

Yes. Exactly.

What I’ll add is: you and I didn’t have access to certain bits of data to protect integrity, but it also meant we couldn’t check certain things. When it came time to write and publish the paper, I can attest: before it was published, Dave was like, “There are a couple oddities here,” but as the funding body we were also in a situation where it would have been inappropriate for us to push a narrative.

So in the first paper—and there were always going to be more—we emphasized the key novel finding: LDL exposure and ApoB did not predict plaque progression. That is huge and novel. It has been consistent across every single analysis: the original Clearly, plus HeartFlow, plus QAngio. That’s the most robust finding.

It occurs despite the largest LDL spread of any prospective study ever published, which people keep trying to sweep under the rug with what appears to be this Clearly debacle, which we now have more information on.

Now, everybody watching: please hit the thumbs up or heart, subscribe, and tell me in the comments where you’re watching from. We need as many people to hear this as possible. Tell me your city, state, country.

So this was supposed to be blinded. It turns out it was not blinded. It was blinded to you guys, but not blinded to Clearly—and then some others who may have been involved.

Take us from there, Nick. What’s the next “OMG, wait, what” moment?

I’ll pass it to Dave for what happened next with the properly blinded analysis with HeartFlow and QAngio. Dave, take us through that.


Yeah. Back to the timeline: once it became obvious we likely weren’t going to get a fully blinded reanalysis with Clearly, we connected with what could be argued is the leader in AI CT angiography: HeartFlow.

Full disclosure: I hadn’t been acquainted with HeartFlow before, because I had only encountered the technologies that teams were connecting me with. After we connected with HeartFlow, it was a very different experience. They were extremely interested in our study, and I said out of the gate: “I want to be sure this is properly blinded.” Their response was: “That’s exactly how we want it to be.”

We worked out what’s known as operational blinding: the data they get makes it impossible for them to know scan order.
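
For the curious, one plausible way to implement that kind of operational blinding is sketched below in Python using the pydicom library. This is an illustration of the general idea, not the actual pipeline HeartFlow or the study used, and the specific tags handled here are assumptions:

    import uuid
    import pydicom

    # DICOM date tags that could reveal scan order to the reader (assumed list).
    DATE_TAGS = ["StudyDate", "SeriesDate", "AcquisitionDate", "ContentDate"]

    def blind_scan(in_path: str, out_path: str) -> str:
        """Strip dates and identity, assign a random code with no order meaning."""
        ds = pydicom.dcmread(in_path)
        for tag in DATE_TAGS:
            if hasattr(ds, tag):
                setattr(ds, tag, "")          # remove temporal ordering cues
        ds.PatientName = "ANON"               # remove identity
        ds.PatientID = uuid.uuid4().hex[:8]   # random code, unrelated to scan order
        ds.save_as(out_path)
        return ds.PatientID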

HeartFlow completed their analysis ahead of the fourth and final analysis, which was also the third and final quantitative analysis and our pre-registered endpoint: QAngio.

This mattered because without HeartFlow, there would have been only one more analysis: QAngio. I didn’t want just two quantitative analyses at the end. I wanted at least two more, to see how much they agreed.

Spoiler alert: the fully operationally blinded HeartFlow data came back showing the patterns I was concerned about in Clearly.

For example: if you divide progressors into thirds, those with the lowest baseline plaque had the largest percent increase. Lower baseline plaque → larger percentage increase on follow-up. These are things you can only see after getting your hands on the raw data.
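
As a purely arithmetic illustration (hypothetical numbers, and not offered as an explanation of the study data), here is why a fixed absolute increase shows up as a much bigger percent change in low-baseline participants:

    # Hypothetical baseline plaque volumes in mm^3.
    baselines = [2, 5, 10, 40, 80, 160]
    DELTA = 4   # the same absolute increase applied to everyone, mm^3

    for b in baselines:
        pct = 100 * DELTA / b
        print(f"baseline {b:>4} mm^3 -> +{DELTA} mm^3 = +{pct:.1f}%")

    # baseline 2 mm^3 reads as +200% while baseline 160 mm^3 reads as +2.5%,
    # even though the absolute change is identical.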

HeartFlow gets done, and we learned our lesson from the April 7 rollout. I had to ask Nick—who is hyper-transparent—to be quiet. He literally gave us an emoji to send him when I needed him to be quiet.

Even though HeartFlow was in hand—and Budoff’s talk on it looked good (it agreed with the semi-quantitative analysis and showed overall cohort plaque progression was very low)—we waited for QAngio.

QAngio finally comes in, and now we have four analyses. Three of them agree with each other: low overall plaque progression cohort-wide. There are some more rapid progressors; it’s heterogeneous. But importantly: every analysis save Clearly had regressors. Two of the quantitative analyses had regressors in the double digits.

And for people watching: when he says regressors, he means plaque score went down—less plaque—even though LDL and ApoB were sky-high.

Let me summarize where we are:

• Across all analyses: LDL and ApoB do not predict plaque progression.
• Clearly (which we thought was blinded) turns out not to have been blinded, and shows a higher magnitude of change, with oddities.
• Multiple other independent analyses—including the pre-specified methodology, a semi-quantitative analysis, and a fully blinded AI analysis—largely agree with each other.
• Clearly is the oddball, and they won’t check their work with a blinded reanalysis.

And I’m going to speak carefully, because there are things I can’t say, but I can present what’s publicly available: there are conflicts of interest that were not previously disclosed. If you go to the April 7 paper’s conflict of interest statement, Dave disclosed his conflicts. I disclosed mine. There was someone on the C-suite of Clearly who didn’t mention they were on the C-suite of Clearly. I’ll leave it at that.

Who recommended you use Clearly?

I’m pretty sure I can’t comment on that for legal reasons.

So for those watching: that means Dave is afraid he’ll get sued if he says too much.

The way I understand it: HeartFlow and QAngio showed regression and stability versus Clearly showing effectively 0%. Is that right?

I need to correct you: QAngio had 15 cases out of 99 that regressed, and HeartFlow had 33 out of 95. We haven’t gotten to the new news yet—that’s where the 50% comes from.

So what happened between when QAngio came in and the months since?

There was more push to say: we have enough information to strongly consider doing the quality control check. There was also an internal debate: do we publish HeartFlow and QAngio without addressing the original April 7 paper, or does it need to be part of the conversation? We made the most honest decision: it all needs to be part of the conversation.

And I’ll add one thing: Dave, Adrian, and I are in complete consensus about what we want to do.

The hardest part for me is wearing the hat of president of the Citizen Science Foundation and having official duties. To accomplish the legal and formal mechanisms, I’ve had to leave others outside the room for months.

Meanwhile, another corner: some participants requested their DICOM scans (their right), took them to their cardiologists for reassessment, and a number submitted them back to Clearly.

Even though the original dataset wasn’t fully blinded, I believe there were no names. So in a roundabout way, these participants were getting an individual-level blinded reanalysis of their scans—because Clearly wouldn’t know they had previously analyzed those scans. It would look like fresh scans coming in.

If different data came back, we’d be very interested. Presumably it should be close—maybe within 5% or 10%. Test-retest is a real thing in science, especially with devices.
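
A basic test-retest check is simple enough to sketch. The reads below are hypothetical, not the study’s data; the idea is just that the same scan read twice should land within some tolerance:

    # Hypothetical paired reads of the SAME scans, plaque volume in mm^3.
    read_1 = [12.0, 45.0, 8.0, 66.0]   # original read
    read_2 = [12.5, 43.8, 8.3, 64.1]   # repeat read

    TOLERANCE = 0.10   # expect repeat reads within roughly 10%

    for i, (a, b) in enumerate(zip(read_1, read_2), start=1):
        rel_diff = abs(a - b) / a
        status = "OK" if rel_diff <= TOLERANCE else "DISCREPANT"
        print(f"scan {i}: {a} vs {b} -> {100 * rel_diff:.1f}% [{status}]")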

Nick, you look like you want to say something.

No—I’m waiting. We’re 35 minutes in and you haven’t dropped the bomb yet.

Yes. So these scans: the data coming back from Clearly was being shared back to me. I wanted everything above board—agreements in place, legal and formal, arms-length on what I could know and when.

As more data accumulated, we had reason to determine: this is the time we needed to share that data. Nick knows why, but Nick and Adrian did not know what I just explained was happening—they were unaware participants were doing this outside the study.

At the point we determined why we needed to release the data, I put it together, worked with legal and the internal team, and produced a video. Nick has known for 21 hours.

Did you load the slides?

Yes. Can you see this?

Yes.

What you’re looking at is a collection of red bars and blue bars. The zero line is no change. Bars above it are increases from baseline to follow-up; bars below it are decreases.

All participants in the original Clearly analysis: none are regressors. They’re all progressors. That’s the red bars.

Now the left side shows increases; on the right you see blue bars below zero. Those blue bars are less plaque in follow-up per individual submissions to Clearly than in the original study-provided Clearly dataset.

So we have half progressors versus half regressors.

Median: study-provided Clearly data for these eight participants shows +20.6 mm³ (31% increase). The individual submissions median is +0.7 mm³ (2% increase), which would be quite low.

And apples-to-apples: the magnitude of difference is astronomical. Participant 8 went from +32 to −48. These aren’t small differences.

Even steelmanning Clearly: it’s possible something happened with the dataset delivered for the study and the modality may be accurate—but that only further supports the need for a blinded reanalysis. That’s why we asked for it.

What is unequivocally true: we have the same scans read twice with a gargantuan discrepancy. I don’t think a reasonable person can take the initial Clearly results and have confidence in them given that:

• Multiple analyses agree with each other and disagree with Clearly
• Clearly is the unblinded analysis
• Clearly refused to repeat
• A repeat via their own platform doesn’t agree with their own prior result, and agrees with the other analyses

That’s fishy.

Now the mean: study-provided Clearly for these eight participants is +20.9 mm³ (42% increase). The mean from individual submissions dips below zero (net lower plaque). Yes, one participant may be more anomalous, but the mean and median are near zero.

Scientifically, people can say: small sample, we don’t know how it represents the whole. That’s fair. But it doesn’t address the question at hand: the reliability of the initial scan read.

To have 100 with basically no regression, then eight at random show 50% regression—the probability of that happening by chance is extremely low.
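
That intuition can be put to a back-of-envelope check. Treating the eight resubmissions as a random draw (a simplification) and assuming the original read’s roughly 1-in-100 regressor rate were correct, the chance of seeing at least four regressors among eight comes out vanishingly small:

    from math import comb

    p = 0.01     # assumed true regressor rate implied by the original read
    n, k = 8, 4  # eight re-read scans, at least four regressors (~50%)

    # Binomial tail probability: P(X >= k) for X ~ Binomial(n, p).
    prob = sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))
    print(f"P(at least {k} of {n} regress | p = {p}) = {prob:.1e}")
    # Roughly 7e-7 under these assumptions.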

And to those saying “N=8 isn’t peer-reviewed”: you don’t peer-review CT scans; these are AI reads. And we’re not building a case on eight scans alone—we’re building a case on eight scans that align with three independent analyses including blinded ones, and conflict with the original unblinded Clearly dataset that they refuse to check.

Pie charts: Clearly study-provided results show virtually all progressors and no regressors. Individual submissions show a mix closer to the HeartFlow blinded results—roughly half progressors and half regressors/no change—what we’d expect when baseline plaque is low and there’s bidirectional scatter near the noise floor.

People are calling for retraction. It’s not the fault of anyone on this live, but the discrepancy is obvious.

We are in contact with the journal. We’ll have news on that soon.

The science isn’t settled. It’s still not settled.

What should viewers take from the fact that Clearly analysis was not blinded, and the blinded analyses showed distinct results?

We have four datasets analyzing the same 200 scans. The scans are the ultimate source of truth—all 200 scans are what every analysis is looking at, like lenses pointed at the same scene. You wouldn’t expect perfect agreement—some variability is normal—but one stands out by several-fold versus the others.

We can also be intellectually honest: it’s possible at five-year scans some correlation might emerge with LDL/ApoB. That’s why we want that data and are working toward it. It could end up being six-year scans depending on resolution of all this.

Nick—final thoughts?

A challenge: it’s clear many people claim they want scientific truth, but when data don’t align with their worldview, they do mental gymnastics. This has been nowhere more true than the keto CTA project.

This is a litmus test to see who can swallow their ego and come to the only reasonable conclusion an intellectual can come to. Let’s see what happens.

I’m in the mood that I will happily run through a series of brick walls. Anybody who wants to challenge us, I welcome it.

I’ll name one: Thomas Dayspring. I think he’s been disingenuous and immature in his treatment of me and Dave Feldman, and he’s ignorant on this topic. If he wants to walk the walk, I will pay for his flight to COS(I) and debate him live in two weeks.

I love this. If Dr. Dayspring wants an all-expenses-paid trip—airline and hotel—Nick will pay. People get on X behind avatars and say things they can’t defend. Show up and have an intellectual engagement.

And I’ll sweeten the pot: I’ll throw in $100 in chips at the casino of his choice if Dr. Dayspring will come to Vegas for COS(I) and debate Nick Norwitz on stage.

Dave—final thoughts?

We’re all going to be in Las Vegas—charity fundraising for the next study, which we’re close to fully funding. We’ve had a lean mass hyper-responder panel every COS(I); this will be the third and final one.

If Dr. Dayspring can’t make it, another prominent lipidologist or cardiologist could come. We need a wide spectrum of opinions. One thing we need to draw attention to: there are people interested in saying things on social media and then exiting the room. Nick and I are proactive about engagement. We want conversations with critics, provided it’s productive.

And yes: I’m happy to be proven wrong. All three of us like it when we’re proven wrong because then we learn something new. But making fun of people on Twitter or denigrating a novel hypothesis—that’s not science.

Guys, thank you so much.

Thank you, Dave Feldman, for being persistent and stoic in pursuit of this hypothesis. Dr. Norwitz, thank you for your self-composure today.

Everybody watching: watch this space. This won’t be boring over the next few months and years. I’ll put links to studies in the show notes. Follow Dave and Nick on X.


