What robots need to become useful

AI (@ChatGPTricks)

<mail@chatgptricks.com>
Feb 5, 2026

Welcome to the Sentient AI newsletter!

As of two short months ago, many predicted 2026 would be the year of the robot.

The demos were getting better.

Their motor movements were getting more precise.

India even announced it would be launching Terminator dogs.

Indian Army's robotic dogs perform synchronised parade on Army Day in Pune.  Watch video - The Economic Times

Sadly, the hype didn't last long.

During a December demo of Tesla's Optimus robot, industry insiders noticed there was likely someone behind the scenes controlling things.

Known as teleoperation, it's a common practice in the world of robotics.

The fact that Elon had to resort to it, however, shows just how far behind AI-powered robots really are.

A large part of the reason is that the way AI companies train LLMs doesn't carry over to training physical devices.

Why AI and Robots Are Different

LLMs are trained on a giant pile of human-generated text.

Books.

Code.

Reddit arguments that should’ve stayed in drafts.

From there, machine learning specialists train the AI to identify patterns in the data, allowing the LLM to become extremely good at predicting what should come next in a string of text.*

*The process for image and video generation differs in the details but follows the same pattern-prediction principle.
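To make "predicting what should come next" concrete, here's a toy sketch of the idea using a bigram model, the simplest possible next-word predictor. The corpus and the `predict_next` helper are illustrative inventions, not how any real LLM is built, but the core move is the same: count patterns in text, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "a giant pile of human-generated text".
corpus = "the robot picks up the cup and the robot puts down the cup".split()

# Count which word follows which -- a crude version of
# "identifying patterns in the data".
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen following `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("cup"))  # the most frequent follower of "cup" in the corpus
```

Real LLMs replace the word counts with billions of learned parameters, but the training objective is still this: given the text so far, predict the next token.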

Problem is, robots don’t live in text.

via Hacker News

They live in:

  • slippery floors
  • weird lighting
  • random obstacles
  • imperfect sensors
  • humans doing unpredictable human things
  • objects that “look the same” but behave differently (try picking up two identical cups, one empty and one full)

For a robot to work in the real world, it can’t just predict the next token.

Nor can it simply repeat a perfectly refined motor movement, such as picking up a cup of coffee or folding a shirt.

©️ Consumer Electronics Show 2026

Instead, for a robot to provide value beyond repeating a highly specific movement, it has to legitimately understand the world around it.

Today's AIs, however - including everything from ChatGPT to Nano Banana - have zero understanding of the physical reality around them or the person using them.

Instead, they generate text, images, and videos based on prompts. Nothing more, nothing less.

And that's the problem.

Why Robot Demos Are Scams

A lot of robotics today is still some flavor of:

  • scripted motions
  • carefully staged environments
  • narrow task training
  • massive human babysitting behind the scenes (aka Teleoperation)

These demos generate impressive videos for social media, but only to the untrained eye.

©️ Consumer Electronics Show 2026

Because anyone who understands what's going on knows the only reason the demos work is because they were rehearsed dozens/hundreds of times using:

  • the same lighting
  • the same objects
  • the same table
  • the same room
  • the same “please don’t breathe near it” constraints

But if you move an object by two inches, or put the robot on an uneven floor, the cracks start to show.

Which is why 2026 will not be the year of the robot.

In fact, it's possible the 2020s won't even be the DECADE of the robot.

Not because the physical technology isn't there, but because robots lack real-world understanding.

Embodied Learning

Embodied learning (aka embodied AI) refers to the idea of having an intelligent system learn by interacting with the world around it.

Think about it.

A toddler doesn’t read 10 million pages about walking.

Instead, they wobble, fall, and try again (and again and again) until they finally learn.

That is the essence of real-world learning.

And at the core of that biological process is data collection.

The baby's brain is forming new neural pathways that remember what happens when they move their leg a fraction of an inch the right way versus the wrong way.

What happens when they tense their core versus when they don't tense it.

Intelligence + Analysis

It's the perfect intersection of learning via real-time, interactive data collection.
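The toddler's wobble-fall-retry loop can be sketched in a few lines. This is a deliberately minimal trial-and-error learner, assuming a made-up `wobble` function as the environment's feedback; real embodied learning uses far richer sensors and learning algorithms, but the loop is the same: try a variation, keep what reduced the error, discard what didn't.

```python
import random

random.seed(0)

# Hypothetical target the learner never sees directly -- like balance,
# it's only experienced through feedback.
TARGET_ANGLE = 12.0

def wobble(angle):
    """Environment feedback: how badly the learner wobbles (lower is better)."""
    return abs(angle - TARGET_ANGLE) + random.uniform(0, 0.1)  # noisy sensing

angle = 0.0
best = wobble(angle)
for _ in range(500):
    trial = angle + random.uniform(-1, 1)  # wobble: try a small variation
    score = wobble(trial)
    if score < best:                       # keep what worked
        angle, best = trial, score         # ...and build on it next time

print(round(angle, 1))  # ends up close to the (never observed) target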

Which is precisely what's holding today's robots back.

It's not that they can't perform the movements required to be useful in the real world. In fact, robotics companies made massive progress on fine motor movements in 2025.

The problem is that their understanding of the world around them is no better than a real-life toddler's.

While toddlers are cute little angels, they're completely worthless in terms of being productive members of society.

And even the robots that appear to be productive, such as the ones currently being deployed in Amazon's warehouses, do not have true intelligence.

Instead, they're highly trained to scan barcodes and repeat a very finite set of motions, most of which involve picking up and setting down boxes.

The Brutal Reality

A good analogy is that of self-driving cars.

To accumulate the volume of data needed to get self-driving cars off the ground, Elon's team had to collect sensor inputs from more than 7.1 billion real-world miles driven by Tesla vehicles.

The process took years and years, and required entire server rooms full of hard drives just to store the data.

From there, machine learning specialists spent years teaching their AI how to interpret all that data.

And once that was done, it took another couple years for the cars to be reliable enough to get government approval for pilot testing.

Point being, it's taken well over a decade for Elon - the world's tech genius - to get self-driving cars even remotely close to being ready for widespread usage.

And the same can be said for robots in 2026.

Progress is Painfully Slow

Companies like Boston Dynamics and Unitree have been working on robots for over a decade (Boston Dynamics for multiple decades), and they unveiled some cool products at CES 2026. Yet their progress pales in comparison to the rapid improvements we've seen in AI since 2023.

Boston Dynamics Atlas Robot

Not because the money isn't there.

But because the process of training robots to understand the world around them is exponentially harder than that of training LLMs.

The reason this matters is because better training equals fewer errors.

LLMs went mainstream because they're right often enough to be useful, with very few real-world consequences for when they get something wrong.

Robots Can't Screw Up Like LLMs Can

Robots, however, don't benefit from that same level of forgiveness.

Getting it wrong in robotics means:

  • broken objects
  • broken ankles
  • lawsuits
  • headlines
Video of robot going rogue and attacking its engineers

The solution is training based on real-world interaction.

It's something Yann LeCun, considered by many to be one of the "Godfathers of AI," understands very well (as is explained in this video).

In fact, given LLMs seem to be plateauing, many in the industry believe they'll need to adopt something like embodied learning to achieve true AGI.

It's a sentiment I personally agree with.

And until that type of learning becomes more affordable - and can be done at scale - we're not going to see a "GPT-3 to GPT-4o" level jump in robot performance anytime soon.

Catch you next time,

Chris Laub
Head of Product, Sentient AI


*Interested in staying on top of the industry's cutting-edge developments?

Make sure to join Sentient’s brand-new AI Power Users group on Telegram.

Inside this free community, we share tips, tricks and hacks you can use to squeeze every last drop of power out of today's top AI models.


@ChatGPTricks Post of the Week:

View IG Post

Want to Connect? DM our Founders!

👉 Louis Gleeson

👉 Ivan Acuna

8101 Biscayne Blvd Ph 703-07, Miami, FL 33138
Unsubscribe · Preferences
