Space debris


Keith Henson

unread,
Mar 2, 2025, 6:12:26 AM
to Power Satellite Economics
Some years ago I worked out that when moving a power satellite in a
slow spiral orbit from LEO to GEO, it would get hit by space junk about
40 times. That is not a problem if you are building a few, but a useful
fleet would number in the thousands; can we say Kessler syndrome?

That work was never checked, so I don't know for sure I got it right.
Space is really big, but a power satellite sweeps out a lot of it.
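
For anyone who wants to check that figure, the standard back-of-envelope
model is: expected hits = debris spatial density x swept cross-section x
path length. Here is a minimal Python sketch; every number in it is a
placeholder assumption, not the original input behind the ~40-hit figure:

# Swept-volume estimate of debris hits during a slow LEO-to-GEO spiral.
# expected_hits = spatial_density * cross_section * path_length
# All values below are placeholder assumptions, for illustration only.
AREA_KM2 = 5.0 * 10.0         # assumed power-sat cross-section, km^2 (hypothetical)
DENSITY_PER_KM3 = 1e-8        # assumed mean debris density, objects/km^3 (hypothetical)
SPIRAL_TIME_S = 120 * 86400   # assumed 120-day spiral duration, s (hypothetical)
MEAN_SPEED_KM_S = 7.0         # rough mean orbital speed during the spiral

path_length_km = MEAN_SPEED_KM_S * SPIRAL_TIME_S
expected_hits = DENSITY_PER_KM3 * AREA_KM2 * path_length_km
print(f"path length {path_length_km:.2e} km, expected hits {expected_hits:.0f}")

With those placeholders the sketch lands in the same ballpark (about 36
hits), but the density term is the weak link: it varies by orders of
magnitude with altitude and with the debris size you count.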

Anyway, I have recently been considering space junk hits for 12-hour
Molniya orbits. What I need to put in the older spreadsheet is the
distance the power satellite travels at various altitudes up from
perigee (600 or 800 km).
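
Here is a minimal sketch of that calculation, stepping Kepler's equation
in eccentric anomaly and binning time and path length by altitude. The
even 12-hour period (a true Molniya uses half a sidereal day, about
43,082 s), the 2,000 km bins, and the variable names are assumptions;
set HP to 800e3 for the higher perigee:

import numpy as np

MU = 3.986004418e14   # Earth's GM, m^3/s^2
RE = 6378.137e3       # Earth equatorial radius, m
T = 12 * 3600.0       # orbital period, s
HP = 600e3            # perigee altitude, m

a = (MU * (T / (2 * np.pi))**2) ** (1 / 3)   # semi-major axis, Kepler's third law
rp = RE + HP
e = 1 - rp / a
print(f"a = {a/1e3:,.0f} km, e = {e:.4f}, "
      f"perigee speed = {np.sqrt(MU * (2/rp - 1/a))/1e3:.2f} km/s")

# One orbit in eccentric anomaly E: r = a(1 - e cos E), dt = (1 - e cos E) dE / n
n = 2 * np.pi / T                                # mean motion, rad/s
E = np.linspace(0, 2 * np.pi, 100_001)
r = a * (1 - e * np.cos(E))
dt = (1 - e * np.cos(E[:-1])) / n * np.diff(E)   # time per step (Kepler's equation)
v = np.sqrt(MU * (2 / r[:-1] - 1 / a))           # vis-viva speed per step
alt_km = (r[:-1] - RE) / 1e3

edges = np.arange(0, 42_000, 2_000)              # altitude bin edges, km
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (alt_km >= lo) & (alt_km < hi)
    if m.any():
        print(f"{lo:6,}-{hi:6,} km: {dt[m].sum()/60:7.1f} min, "
              f"{(v[m] * dt[m]).sum()/1e3:9,.0f} km traveled")

For a 600 km perigee this gives a perigee speed of about 9.96 km/s,
which is the number worth cross-checking against Excel; multiplying each
bin's path length by a debris density for that altitude band then gives
a per-orbit hit expectation.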

If someone has insight into this, I would appreciate it. Old age does
take a toll.

Copilot got the perigee velocity off by about 5% when I checked it
with Excel and a calculator. Don't know why, didn't ask, but that's
another thing to check if you are using an AI.

Keith

John Jossy

unread,
Mar 2, 2025, 1:27:57 PM
to Keith Henson, Power Satellite Economics
Have you checked with John Bucknell at Virtus Solis? Their system will use a Molniya orbit, so his team has likely done those calcs.


k.a.c...@sympatico.ca

unread,
Mar 2, 2025, 3:15:35 PM
to Power Satellite Economics
KeithH wrote:

> Copilot got the perigee velocity off by about 5% when I checked it with Excel
> and a calculator. Don't know why, didn't ask, but that's another thing to
> check if you are using an AI.

Never trust an LLM-type "AI" (which is *not* actually AI in any sense) to get *any* facts right.

It's when their output looks plausible that they are most dangerous...

- Kieran

Erinn van Wynsberghe

unread,
Mar 2, 2025, 4:36:41 PM
to k.a.c...@sympatico.ca, Power Satellite Economics
Hi Keith,

Some first-hand tips for ChatGPT:

 - When you ask it a question, specifically remind it not to make anything up and not to hallucinate (use those exact words). This is a big help, but not a guarantee.

 - ChatGPT will provide you with equations and pathways to solving problems, but it will often perform those calculations incorrectly. It will even claim that it did the calculations correctly and show you its reasoning. It frequently cannot see its own flaws. I take the equations it provides and rebuild them myself in Excel, often with help from Wolfram Alpha.

 - Break complex asks into multiple small steps spread across several prompts.

 - After every result, I ask ChatGPT if it wants to make any changes or revisions to its most recent answer. It will almost always provide ideas on how to improve the answer.

 - Explore different ways of asking it to double-check itself, evaluate its own logic, search for flaws in its work, and show both its methodology and its sources.

 - Ask it for advice on how best to phrase questions so as to ensure accurate calculations.

 - There are now options where you can build your own GPT within the main ChatGPT interface, and tell it to remember specific things. I'm only just starting to explore these capabilities and their reliability.

Hope that helps.

Cheers,

Erinn


Erinn van Wynsberghe
(mobile device)
VanWyn Inc.
eri...@vanwyn.com


k.a.c...@sympatico.ca

unread,
Mar 2, 2025, 5:42:35 PM
to Power Satellite Economics

Erinn;

Rather off-topic from space power economics, but maybe worthwhile, to scotch a particularly pernicious current misunderstanding:

You wrote:

 - Explore different ways of asking it to double-check itself, evaluate its own logic, search for flaws in its work, and show both its methodology and its sources.

The following is based on what I’ve gleaned from some reading into LLMs. If any of this is wrong, I’d appreciate pointers as to what’s wrong, and what the right answer is. With that caveat, I’ll (like ChatGPT 😊) proceed to pontificate as if I were full of certainty…

You’re succumbing to the delusion that these LLMs “think.” They don’t. They simply regurgitate text patterns from the corpus of works on which they were trained (some call them “spicy auto-complete”), with some impressive bells and whistles added, true (enabling them to go beyond text, to images and audio, among other things).

If there exists in their training corpus some text that matches the answer you’re looking for, then it may regurgitate that. Or it may come up with some other amalgamation of other text strings. But there’s no “thinking” or “self-awareness” involved in doing this. It *cannot* “evaluate its own logic,” because it cannot evaluate anything. It doesn’t do “evaluation”; it does text-string generation based on semi-random plunges into the probability matrix that was generated when its training corpus was digested. (Experiment: ask it the same question multiple times; you’ll get multiple contradictory answers. That comes from the random-plunge part.)

There’s a classic textbook on information theory that I first read many decades ago (my dad studied from it when he went to grad school):

https://www.amazon.ca/Introduction-Information-Theory-Symbols-Mathematics-ebook/dp/B008TVLR0O/

that was first written in 1961; it compiles and explains the foundational concepts of information theory that Shannon and others developed not too long before then. (What an excellent book! I highly recommend it.) Several of its concepts have stuck in my mind for decades.

One of those is Pierce’s excellent explanation (in chapter 3) of Shannon’s demonstration of how to put together zero-order, first-order, second-order, etc. approximations to strings of characters, based on such “random plunges” into a codified “training corpus,” such that, as you go to higher and higher orders, first strings of words begin to appear, and with even higher orders those strings seem to begin to make sense. The “training” is actually statistical analysis of every text substring in a “training” corpus of text (e.g., the contents of lots of books), to see how many times a given string of length n appears (for an order-n approximation). Once that has been done for all length-n strings, the probability of finding any one of those strings, if you start sampling the corpus at any given point, is known. The text-generation bit just uses a random number generator, weighted by those probabilities, to select which string of text from the corpus to parrot back.

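A minimal sketch of that order-n scheme, with a toy corpus standing in
for “the contents of lots of books” (the corpus and function names here
are illustrative, not from Pierce):

import random
from collections import defaultdict

def build_table(corpus, n):
    """Count, for every length-n string, the characters that follow it."""
    table = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus) - n):
        table[corpus[i:i + n]][corpus[i + n]] += 1
    return table

def generate(table, n, seed, length):
    """The 'random plunge': pick each next character weighted by corpus counts."""
    out = seed
    for _ in range(length):
        followers = table.get(out[-n:])
        if not followers:
            break
        chars, counts = zip(*followers.items())
        out += random.choices(chars, weights=counts)[0]
    return out

corpus = "the cat sat on the mat and the cat ate the rat on the mat "
table = build_table(corpus, n=3)
print(generate(table, n=3, seed="the", length=60))

Run it several times and you get a different, sometimes nonsensical
string each time (the random-plunge part); raise n and the output starts
to look more “sensible,” at the cost of a much bigger table.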

AFAIK, ChatGPT etc. are modern, updated versions of that basic approach. One innovation is on the statistical-analysis side. Using longer text strings (i.e., a higher value of n) produces results that seem more and more “sensible,” but there’s a combinatorial explosion as you go to larger values of n; before long, the amount of computation needed to do the statistical analysis (if done via brute force) becomes impossibly large. From my reading into LLMs, they *do* use a particular AI-related technique (computational neural networks) to do a much quicker, approximate analysis of the training corpus, to come up with the (enormous) probability table. (I believe there are other innovations as well, such as algorithms (not AI) that enforce grammar rules.)

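To put numbers on that explosion (the 50,000-word vocabulary below is an
illustrative assumption):

# Entries in a brute-force order-n word table, 50,000-word vocabulary.
V = 50_000
for n in range(1, 6):
    print(f"n = {n}: {V**n:.1e} possible length-n strings")

By n = 5 there are already about 3 x 10^23 possible strings, hopeless to
tabulate by brute force; hence the neural-network shortcut.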

So, my current understanding is that the only place an AI technique is used in these LLMs is the use of neural networks in the analysis of the training corpus, to produce the probability table, and that the actual software you interact with when you use ChatGPT etc. is purely algorithmic (not AI), basically selecting and stitching together randomly selected strings of text using a random-number generator, weighted by the probabilities in that table.

- Kieran

Erinn van Wynsberghe

unread,
Mar 3, 2025, 10:23:05 AM
to Power Satellite Economics, k.a.c...@sympatico.ca
Hi Kieran,

(A continued digression on ChatGPT)

I haven’t fully studied the inner workings of ChatGPT, but I’ve logged a lot of hours with it preparing content (from promotional flyers to grant applications), so I can share techniques I’ve found that help improve and refine its responses.

From my experience, I get better results by asking it to double-check its own logic, highlight flaws, and justify its reasoning after a given answer. While I agree it’s not truly “thinking,” and it does have a limit on how much work it can handle per inquiry, it does apply an internal set of rules that allows for meaningful revisions (when prompted effectively).

One activity you could try is to copy/paste your understanding of LLMs into each of the popular LLMs and ask for a detailed analysis of what the block of text got right versus wrong.

As a practice exercise, I asked each of the major LLMs "In one sentence, please provide with some technical detail the methods that this LLM uses to provide answers to user inquiries." Here are their respective replies:

ChatGPT: ChatGPT generates responses using a transformer-based deep neural network, leveraging self-attention mechanisms to analyze context, predict the most probable next tokens, and refine coherence using pre-trained embeddings and autoregressive decoding, all guided by reinforcement learning from human feedback (RLHF) to improve quality and alignment.

Gemini: This LLM utilizes a deep neural network architecture, specifically a transformer model, which processes user inquiries by attending to relationships between words and phrases to generate contextually relevant and coherent responses through probabilistic prediction of subsequent tokens.

Copilot: This large language model generates responses by employing advanced neural network techniques, leveraging billions of parameters trained on vast datasets to understand context, predict likely sequences of text, and produce coherent and contextually relevant answers.

Claude: The model processes user queries through a series of transformer-based neural network layers, applying learned attention mechanisms to generate contextually relevant text by predicting the most probable next tokens based on pre-trained patterns from a large corpus, with additional fine-tuning via human feedback to improve accuracy and alignment with user intentions.

Cheers,

Erinn



Erinn van Wynsberghe
M.Eng, B.Eng, MCASI, MRAeS
Aerospace and Physics Engineer
Founding President and CEO
VanWyn Inc.
175 Longwood Road South, Suite 105, Hamilton, Ontario, Canada L8P 0A1

