Part 2: Autonomous ROS 2 Smart Table Demonstration: LLM AI chatbot, vision, and voice I/O. This is the second demonstration of navigation; more to come.
This is a roving smart table based on Camp Peavy's MockBot. The use case: it wanders around the kitchen and living room at parties, offering guests food and drink.
New features:
1. Startup verbal chime
2. Verbal snack announcement
3. Added a shelf for the laptop
4. Added battery level display to GUI
5. Added motor speed control to GUI
6. Added "Park" mode
7. Added "Rove" mode to GUI
8. Added "low battery" display and verbal announcement
Still to do:
1. Integrate the onboard LLM AI chatbot (it's on the computer and working, but not integrated).
2. Integrate the onboard VLM (vision AI) to locate people (it's on the computer and working, but not integrated).
3. Add a "seek" behavior that looks for people and brings the table to them.
4. Add a "ping" sensor to keep it from running under tables. (The lidar sees the table legs but not the table top.)
5. Add physical buttons so I don't need to open the laptop.
6. Add indicator lights to show the mode when the laptop is closed.
7. Change rove-mode behavior so the robot covers more of the room.
8. Add a "wait" behavior, based on a touch sensor, so a guest can grab a snack from the tray.
Final update:
- Replace the ordinary laptop with a gaming laptop with an NVIDIA GPU for faster AI performance.
Software Technology:
ROS 2 "Jazzy"
Ubuntu Linux
Python 3.x
Tkinter
Arduino_bridge code (Arduino C and Python)
GPT4All (API)
Gazebo
RViz
SLAM Toolbox
Custom drivers
Navigation 2
Moondream VLM (vision AI)
Onboard LLM for conversation and for working with Moondream
Hardware Technology:
Dell computer (formerly a Windows machine)
Neato Botvac D4
LIDAR
Ultrasonic Sensor
Arduino (USB serial interface)
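The ultrasonic sensor reaches the laptop through the Arduino's USB serial interface, so the laptop side mostly parses text lines and applies a clearance check (this is what would keep the table from driving under a table top, per the to-do list). A hedged sketch: the "PING:<cm>" line format and the 30 cm threshold are assumptions for illustration, not the real bridge protocol:

```python
# Hedged sketch of the laptop side of the Arduino serial bridge.
# Assumes the firmware prints one reading per line, e.g. "PING:27"
# (centimeters from the ultrasonic sensor); line format and the
# clearance threshold are illustrative, not the actual protocol.
CLEARANCE_CM = 30

def parse_ping(line: str):
    """Return the distance in cm from a 'PING:<cm>' line, or None if malformed."""
    line = line.strip()
    if not line.startswith("PING:"):
        return None
    try:
        return int(line.split(":", 1)[1])
    except ValueError:
        return None

def blocked_overhead(distance_cm) -> bool:
    """True when something (e.g. a table top) is closer than the clearance."""
    return distance_cm is not None and distance_cm < CLEARANCE_CM
```

In the robot, these lines would arrive over the USB serial port (e.g. via pyserial), and a blocked_overhead() result would veto the drive command before the tray slides under a dining table.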