Does It Make Sense To Put Data Centers In Space?


John Clark

Dec 5, 2025, 5:47:45 PM
to ExI Chat, extro...@googlegroups.com, 'Brent Meeker' via Everything List, Power Satellite Economics

John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Dec 5, 2025, 11:45:18 PM
to everyth...@googlegroups.com
These data centers have been sucking up and processing data accumulated over 70 years or more and condensing it into neural nets.  Isn't there some point of diminishing returns in this process?

Brent

John Clark

Dec 6, 2025, 6:22:08 AM
to everyth...@googlegroups.com
On Fri, Dec 5, 2025 at 11:45 PM Brent Meeker <meeke...@gmail.com> wrote:

These data centers have been sucking up and processing data accumulated over 70 years or more and condensing it into neural nets.  Isn't there some point of diminishing returns in this process?

Companies are making a multi-trillion-dollar bet that there is no point of diminishing returns, and I think that's probably a pretty good bet. However, I don't think I'd invest in the space-based data center idea; but who knows, maybe they can find a way to make it practical.



Brent Meeker

Dec 6, 2025, 7:18:48 PM
to everyth...@googlegroups.com


On 12/6/2025 3:21 AM, John Clark wrote:
On Fri, Dec 5, 2025 at 11:45 PM Brent Meeker <meeke...@gmail.com> wrote:

These data centers have been sucking up and processing data accumulated over 70 years or more and condensing it into neural nets.  Isn't there some point of diminishing returns in this process?

Companies are making a multi-trillion-dollar bet that there is no point of diminishing returns, and I think that's probably a pretty good bet,

Why?  Do you think there's a lot more to be sucked up?  Or do you think there's new information being generated as fast as they're sucking it up?  Or what?

Brent

John Clark

Dec 7, 2025, 7:53:14 AM
to everyth...@googlegroups.com
On Sat, Dec 6, 2025 at 7:18 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> These data centers have been sucking up and processing data accumulated over 70 years or more and condensing it into neural nets.  Isn't there some point of diminishing returns in this process?
 
>>> Companies are making a multi-trillion-dollar bet that there is no point of diminishing returns, and I think that's probably a pretty good bet,

> Why?  Do you think there's a lot more to be sucked up? 

No, but I think there are a lot more ways to think about the facts that we already know, and, even more important, I think there are a lot more ways to think about thinking and to figure out ways of learning faster.

People have been saying for at least the last two years that synthetic data doesn't work and that we're running out of real data, so AI improvement is about to hit a ceiling; but that hasn't happened, because high-quality synthetic data can work if used correctly. For example, in the process called "AI distillation" a very large AI model supplies synthetic data to a much smaller AI model, asks it a few billion questions about that data, and tells it when it has given a correct answer and when it has not. After a month or two the small model becomes much more efficient and is nearly as capable as the far larger one, sometimes even more so; it manages this not by thinking more but by thinking smarter. After that the small model is scaled up and given access to much more computing hardware, and then the process is repeated: it starts teaching an even smaller model.
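The core loop is easy to sketch. Here is a deliberately tiny toy version of the distillation idea in NumPy: a fixed "teacher" labels synthetic inputs with soft probability distributions, and a much smaller "student" is trained by gradient descent to reproduce those answers. All the specifics (linear models, sizes, learning rate) are illustrative assumptions, not any lab's actual pipeline.

```python
import numpy as np

# Toy distillation sketch. A fixed "teacher" labels synthetic inputs,
# and a smaller "student" learns to match the teacher's soft answers.
# Every detail here is illustrative, not a real training recipe.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(target, pred):
    # Mean cross-entropy between target and predicted distributions.
    return -np.mean(np.sum(target * np.log(pred + 1e-12), axis=1))

D_IN, D_OUT = 16, 4
W_teacher = rng.normal(size=(D_IN, D_OUT))   # stands in for a big trained model

X = rng.normal(size=(512, D_IN))             # "synthetic data" the teacher labels
teacher_probs = softmax(X @ W_teacher)       # the teacher's soft answers

# Student: a rank-2 factored model, i.e. far fewer parameters than the teacher.
W_a = rng.normal(size=(D_IN, 2)) * 0.1
W_b = rng.normal(size=(2, D_OUT)) * 0.1

def student_probs():
    return softmax(X @ W_a @ W_b)

loss_before = cross_entropy(teacher_probs, student_probs())

lr = 0.5
for _ in range(3000):
    P = student_probs()
    G = (P - teacher_probs) / len(X)   # grad of cross-entropy w.r.t. logits
    H = X @ W_a                        # student's hidden activations
    grad_b = H.T @ G
    grad_a = X.T @ (G @ W_b.T)
    W_a -= lr * grad_a
    W_b -= lr * grad_b

loss_after = cross_entropy(teacher_probs, student_probs())
print(f"distillation loss: {loss_before:.3f} -> {loss_after:.3f}")
```

A real run would use neural networks, temperature-scaled logits, and vastly more data, but the loop has the same shape: the teacher's soft outputs are the student's training targets.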

That's why, far from hitting a ceiling, almost every week we learn about another significant improvement in AI technology, of a magnitude that before 2020 occurred only once or twice a decade. It's so common now that most people have gotten used to it, and such announcements bore them: ho-hum, just another revolutionary development in AI. I'm reminded of the boiling-frog parable, in which a frog placed in cold water that is gradually warmed to a boil doesn't jump out, because it never notices the gradual increase in temperature.



Brent Meeker

Dec 7, 2025, 3:20:58 PM
to everyth...@googlegroups.com


On 12/7/2025 4:52 AM, John Clark wrote:
On Sat, Dec 6, 2025 at 7:18 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> These data centers have been sucking up and processing data accumulated over 70 years or more and condensing it into neural nets.  Isn't there some point of diminishing returns in this process?
 
>>> Companies are making a multi-trillion-dollar bet that there is no point of diminishing returns, and I think that's probably a pretty good bet,

> Why?  Do you think there's a lot more to be sucked up? 

No, but I think there are a lot more ways to think about the facts that we already know, and, even more important, I think there are a lot more ways to think about thinking and to figure out ways of learning faster.

People have been saying for at least the last two years that synthetic data doesn't work and that we're running out of real data, so AI improvement is about to hit a ceiling; but that hasn't happened, because high-quality synthetic data can work if used correctly. For example, in the process called "AI distillation" a very large AI model supplies synthetic data to a much smaller AI model, asks it a few billion questions about that data, and tells it when it has given a correct answer and when it has not. After a month or two the small model becomes much more efficient and is nearly as capable as the far larger one, sometimes even more so; it manages this not by thinking more but by thinking smarter. After that the small model is scaled up and given access to much more computing hardware, and then the process is repeated: it starts teaching an even smaller model.
This strikes me as a positive-feedback hallucination amplifier.

Brent

John Clark

Dec 7, 2025, 3:36:33 PM
to everyth...@googlegroups.com
On Sun, Dec 7, 2025 at 3:20 PM Brent Meeker <meeke...@gmail.com> wrote:


> Why?  Do you think there's a lot more to be sucked up? 

No, but I think there are a lot more ways to think about the facts that we already know, and, even more important, I think there are a lot more ways to think about thinking and to figure out ways of learning faster.

People have been saying for at least the last two years that synthetic data doesn't work and that we're running out of real data, so AI improvement is about to hit a ceiling; but that hasn't happened, because high-quality synthetic data can work if used correctly. For example, in the process called "AI distillation" a very large AI model supplies synthetic data to a much smaller AI model, asks it a few billion questions about that data, and tells it when it has given a correct answer and when it has not. After a month or two the small model becomes much more efficient and is nearly as capable as the far larger one, sometimes even more so; it manages this not by thinking more but by thinking smarter. After that the small model is scaled up and given access to much more computing hardware, and then the process is repeated: it starts teaching an even smaller model.
This strikes me as a positive-feedback hallucination amplifier.

Then why does it work so well? 



Brent Meeker

Dec 7, 2025, 6:26:36 PM
to everyth...@googlegroups.com
I don't know.  Do you know how it avoids amplifying hallucinations?  Do you even know how well it works?

Brent

John Clark

Dec 8, 2025, 9:28:13 AM
to everyth...@googlegroups.com
On Sun, Dec 7, 2025 at 6:26 PM Brent Meeker <meeke...@gmail.com> wrote:

>>> This strikes me as a positive-feedback hallucination amplifier.

>> Then why does it work so well?

I don't know.  Do you know how it avoids amplifying hallucinations? 

Nope, and even the people who wrote the AI have only a hazy understanding of how it works; all they know for sure is that it does.

 Do you even know how well it works?

Yes, modern AIs work very well, and that's why no-nonsense capitalists are willing to invest trillions of dollars to make them work even better.

