Is it just me, or are you a bit crazy, too?


Michael Wimble

Feb 16, 2026, 12:06:40 AM (7 days ago)
to hbrob...@googlegroups.com, rssc...@googlegroups.com
Subject: Multi-Robot ROS Infrastructure - Or How I Learned to Stop Worrying and Automate Everything
Alternate title: Multi-everything ROS for Mere Mortals

Hi everyone,

I wanted to share a project I'm working on because I suspect many of you face the same frustrations I do. If you're managing even ONE robot with multiple computers, you know the pain. If you're thinking about building a SECOND robot that shares some components with the first... well, let me tell you about my nightmare.

THE PROBLEM(S)

I have a robot named Sigyn. She has multiple computers:
- An AMD desktop running navigation and planning
- A Raspberry Pi 5 handling gripper vision and grasper control
- 3 custom boards running Teensy 4.1 MCUs managing sensors and controllers.
- An OAK-D running AI systems, including object detection and publishing color camera and depth camera images.

Each computer needs different ROS packages. Each has different installed libraries. Each needs to know about all the OTHERS for networking. And here's where it gets messy:

Every time I add a computer or change an IP address, I'm manually updating:
- /etc/hosts on ALL machines (so they can find each other)
- ~/.ssh/config on ALL machines (so I can ssh between them)
- The package lists for each machine (what should even BE on that Pi?)
- Environment variables (RMW_IMPLEMENTATION, its XML configuration, workspace paths, etc.)
- My own sanity (rapidly depleting)

And God forbid I rebuild a machine from scratch. I have... notes? Somewhere? Did I write down what packages go on the vision Pi? Was it this version of OpenCV or that one?

Now I'm building a second robot - Titania - and I want to reuse some Sigyn packages but not all. Some computers will run the same code. Some won't. The combinatorial explosion of "what goes where" is making my head hurt.

THE BREAKING POINT

I realized I was spending too much time on CONFIGURATION and synchronization instead of ROBOTICS. I was afraid to change anything because I'd have to remember to update it everywhere. I had the same IP addresses hardcoded in multiple places. My bash aliases were inconsistent across machines. Half my packages were in one giant repo and I couldn't figure out which ones were actually dependencies of which.

Enough. There has to be a better way.

THE VISION

I'm building a new management system (working name: Sigyn2, because creative naming is hard). The core idea is simple: YAML configuration files that describe EVERYTHING, and automation that makes it all happen.

Here's what I'm working toward, roughly in this order:

PHASE 1: NETWORK SANITY
- Define all robots and their computers in one place (robots.yaml)
- Define all network info in one place (network.yaml)
- Have a script automatically update /etc/hosts and ~/.ssh/config everywhere
- Never manually edit hosts files again (this alone would be worth it)
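
To make this concrete, here's a rough sketch of what the Phase 1 generation step might look like. The inline dict stands in for a parsed network.yaml; all the hostnames, IPs, and field names are illustrative, not the final schema:

```python
# Sketch: render an /etc/hosts fragment from a network description.
# The dict below stands in for a parsed network.yaml; all names here
# (robots, hostnames, IPs) are illustrative placeholders.

network = {
    "sigyn": {
        "sigyn-main": "192.168.1.10",
        "sigyn-gripper-pi": "192.168.1.11",
    },
    "titania": {
        "titania-main": "192.168.1.20",
    },
}

def render_hosts(network):
    """Emit one '<ip> <hostname>' line per computer, grouped by robot."""
    lines = []
    for robot, computers in sorted(network.items()):
        lines.append(f"# --- {robot} ---")
        for hostname, ip in sorted(computers.items()):
            lines.append(f"{ip}\t{hostname}")
    return "\n".join(lines) + "\n"

print(render_hosts(network))
```

The same data would drive the ~/.ssh/config generation, so a hostname or IP only ever exists in one place.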

PHASE 2: PACKAGE ORGANIZATION
- Split my monolithic repo into separate packages (description, navigation, vision, etc.)
- Define which packages go on which computer types (packages.yaml)
- Use vcstool (.repos files) to manage the multi-repo madness
- Each machine pulls only what IT needs, not everything
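
For the vcstool piece, here's a sketch of how a per-role .repos file could be generated. The repo names and URLs are placeholders; only the repositories/type/url/version layout follows vcstool's .repos format:

```python
# Sketch: emit a per-role vcstool .repos file from a role -> repo map.
# Repo names and the GitHub org are made up for illustration; the output
# follows vcstool's repositories/type/url/version convention.

ROLE_REPOS = {
    "main_controller": ["sigyn_description", "sigyn_navigation"],
    "gripper_vision": ["sigyn_description", "sigyn_vision"],
}

def render_repos(role, org="example-org", branch="main"):
    """Build the .repos YAML text for one machine role."""
    lines = ["repositories:"]
    for repo in sorted(ROLE_REPOS[role]):
        lines += [
            f"  {repo}:",
            "    type: git",
            f"    url: https://github.com/{org}/{repo}.git",
            f"    version: {branch}",
        ]
    return "\n".join(lines) + "\n"

print(render_repos("gripper_vision"))
```

Then `vcs import src < gripper_vision.repos` on the Pi pulls exactly its packages and nothing else.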

PHASE 3: ROLE-BASED CONFIGURATION
- Define computer types (amd_x86, pi_5, jetson_orin_nano) with their capabilities
- Each robot component has a ROLE (main_controller, gripper_vision, ai_processor)
- Automatically generate environment files per-machine
- Bash aliases that make sense for each role (navigation computer gets nav aliases, vision computer gets vision aliases)
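
A rough sketch of the per-role environment generation. The role table and workspace paths are made up; ROS_DOMAIN_ID and RMW_IMPLEMENTATION are real ROS 2 settings, but the choice of CycloneDDS here is just an example:

```python
# Sketch: generate a per-machine environment file from its role.
# ROLES is a stand-in for role definitions in a YAML file; workspace
# paths and extra variables are illustrative.

ROLES = {
    "main_controller": {"workspace": "~/sigyn_ws", "extra": {"NAV_STACK": "nav2"}},
    "gripper_vision": {"workspace": "~/vision_ws", "extra": {}},
}

def render_env(role, domain_id):
    """Build the shell environment file for one role."""
    cfg = ROLES[role]
    lines = [
        f"export ROS_DOMAIN_ID={domain_id}",
        "export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp",  # example RMW choice
        f"source {cfg['workspace']}/install/setup.bash",
    ]
    lines += [f"export {k}={v}" for k, v in sorted(cfg["extra"].items())]
    return "\n".join(lines) + "\n"

print(render_env("main_controller", domain_id=42))
```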

PHASE 4: DEPENDENCY TRACKING
- Know what system packages each computer type needs
- Track which ROS packages depend on what
- Version tracking so I know when configurations drift
- "Did I update Sigyn but forget Titania?" becomes detectable
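
For drift detection, the simplest thing I can think of is fingerprinting each machine's generated files and comparing digests. A sketch (file names illustrative):

```python
# Sketch: detect configuration drift by fingerprinting each machine's
# generated config files. File names/contents here are illustrative.
import hashlib

def fingerprint(files):
    """files: {path: contents}. Returns a short, stable hex digest."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(files[path].encode())
    return h.hexdigest()[:12]

sigyn   = {"env.sh": "export ROS_DOMAIN_ID=42\n"}
titania = {"env.sh": "export ROS_DOMAIN_ID=43\n"}

# Same digest == same config; a mismatch against the expected digest
# answers "did I update Sigyn but forget Titania?"
print(fingerprint(sigyn), fingerprint(titania))
```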

PHASE 5: MULTI-ROBOT SUPPORT
- Each robot gets its own ROS_DOMAIN_ID (so they don't interfere)
- But they can SHARE package code
- Titania can use sigyn_description if she has the same base
- But titania_navigation can be totally different
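
Domain-ID assignment can be fully deterministic. A sketch (the 0-101 cap follows the commonly cited ROS 2 guidance for staying clear of ephemeral-port collisions; robot names are from this post):

```python
# Sketch: deterministically assign one ROS_DOMAIN_ID per robot so their
# DDS traffic never mixes. The 0-101 limit reflects the usual ROS 2
# recommendation for safe domain IDs on default port ranges.

def assign_domain_ids(robots, start=1):
    """Assign sequential domain IDs to sorted robot names."""
    ids = {}
    for offset, robot in enumerate(sorted(robots)):
        domain = start + offset
        if domain > 101:  # stay inside the commonly recommended range
            raise ValueError("ran out of ROS_DOMAIN_IDs")
        ids[robot] = domain
    return ids

print(assign_domain_ids(["sigyn", "titania"]))
```

Because the assignment is derived from the robot list rather than hand-picked, two robots can never silently end up on the same domain.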

THE PAIN I'M SOLVING

This is really about reducing cognitive load. I'm tired of:
- Remembering which packages go where
- Manually keeping files in sync across machines
- Being afraid to change things because I'll forget to update something
- Starting a new robot and thinking "oh god, here we go again"
- Rebuilding a machine and spending two days getting it back to working state
- Having different workspace paths on different machines because I wasn't consistent
- SSH'ing to a machine and having none of my usual aliases available
- Wondering why the robot isn't working, only to discover one computer is on the wrong ROS_DOMAIN_ID

I want to run ONE COMMAND on a new computer and have it:
- Know what robot it's part of
- Know its role in that robot
- Pull the right packages
- Configure its network
- Set up proper environment variables
- Give me the right aliases
- Just... WORK

CURRENT STATE

I'm early in this journey. I've got the YAML configurations designed and the core automation script working for network/aliases. I'm in the middle of splitting up my monolithic repo. I just successfully migrated my first package (the robot URDF) to a standalone repo, and watching vcstool pull it into a fresh workspace and build it was... beautiful. It actually worked.

The GitHub won't be public for a few weeks - I have a LOT of repo splitting and cleanup to do. But I wanted to share this now because:

1. Maybe you're facing these same problems
2. Maybe you've already solved them (PLEASE TELL ME)
3. Maybe we can learn from each other's approaches
4. I need to know if I'm solving the wrong problems

THE QUESTION

Am I overthinking this? Is there a standard ROS way to do multi-robot, multi-computer configuration management that I just haven't found? Or are we all suffering in silence, manually editing hosts files like animals?

I'd love to hear:
- How you manage multi-computer robots
- How you keep configurations in sync
- How you decide what packages go where
- Whether you think I'm solving real problems or just creating new ones
- If you've seen tools that already do this

Thanks for reading my robot infrastructure therapy session. If nothing else, writing this down has helped me realize I'm NOT crazy for wanting this. I’m crazy for completely different reasons.

Building robots is hard enough. Infrastructure shouldn't be.

Have I mentioned lately that “Everything about robots is hard” (TM) ?

- Mike

P.S. - If you're wondering "why not Docker?" - I tried. I really did. But getting hardware passthrough, USB devices, GPU access, real-time performance, and ROS networking all working across multiple containers on multiple machines was its own special hell. This approach keeps things running native but adds the management layer I need. I haven’t given up. But it lies in my 11th circle of hell issues.

P.P.S. - Yes, I know about Ansible and other config management tools (psst, I don’t really, but it sounds better if I say I do). But they're designed for server fleets, not robots. I need something that understands ROS workspaces, package dependencies, and the specific insanity of robot systems.

Albert Margolis

Feb 16, 2026, 1:07:11 AM (7 days ago)
to hbrob...@googlegroups.com, rssc...@googlegroups.com
I've suffered with these configuration issues and came to similar conclusions regarding commonly suggested solutions like Docker and Ansible. I have come up with a question/answer system that I have found pretty helpful. I am in the process of cleaning up and open sourcing code that I have been refining for the last few decades. You can find what is published so far here:

The following is a response to Mike's email drafted by my assistant Claude Code:

Mike,

Mike,

Your description of the configuration problem - packages that each need their own settings, answered once and replayed across machines - is exactly what I've been working on in QuickDev.

The core idea: each package ships a qd_conf.toml file that declares its own configuration questions. When you run the setup tool (qdstart), it scans all installed packages, discovers their question files, and walks you through a single interactive session. The answers get auto-partitioned by dot-notation into separate config files - qdflask.roles goes to conf/qdflask.toml, denv.SMTP_PW goes to conf/.env, etc. The package author knows what to ask; the framework just discovers and orchestrates.

A few things that might be relevant to what you're building:

1. Self-declaring plug-ins - Each package's qd_conf.toml declares questions (with types like boolean, path, random_fill for auto-generated secrets), pre-supplied answers, and Flask wiring. No central registry to maintain.
2. Answer files - The -a flag replays a configuration non-interactively: qdstart -a answers.toml. You can layer multiple files (-a base.toml -a production.toml) with first-answer-wins semantics. This makes deployments fully reproducible across machines.
3. Smart gating - Boolean "enabled" questions gate everything below them. Disable a package and all its follow-up questions are skipped automatically.
4. Dot-notation partitioning - Answers are stored in a unified namespace (qdflask.login_view, email.MAIL_SERVER) but automatically partitioned into separate files per prefix. No manual file management.

The project is MIT-licensed and on GitHub: https://github.com/almargolis/quickdev

Even if you don't use it directly, the qd_conf.toml pattern might be worth looking at for your own configuration architecture - it's a clean way to let plug-ins own their own setup without the central tool needing to know about them.
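
If it helps to see the semantics concretely, here is a rough Python sketch of first-answer-wins layering and dot-notation partitioning. This is illustrative only, not QuickDev's actual code:

```python
# Sketch of first-answer-wins layering and dot-notation partitioning,
# as described above. NOT QuickDev's implementation - names are illustrative.

def layer_answers(*answer_files):
    """Merge answer dicts; the FIRST file that answers a key wins."""
    merged = {}
    for answers in answer_files:
        for key, value in answers.items():
            merged.setdefault(key, value)  # earlier files take precedence
    return merged

def partition_by_prefix(answers):
    """Split 'qdflask.roles'-style keys into one dict per prefix."""
    files = {}
    for key, value in answers.items():
        prefix, _, rest = key.partition(".")
        files.setdefault(prefix, {})[rest] = value
    return files

base = {"qdflask.roles": "admin", "email.MAIL_SERVER": "localhost"}
prod = {"qdflask.roles": "user", "email.MAIL_PORT": 587}

merged = layer_answers(base, prod)  # base wins for qdflask.roles
print(partition_by_prefix(merged))
```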

- Al Margolis


--
You received this message because you are subscribed to the Google Groups "HomeBrew Robotics Club" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hbrobotics+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/hbrobotics/4C9E61B1-2209-4D69-9FC2-95ECACE3658B%40gmail.com.

James H Phelan

Feb 16, 2026, 8:24:18 AM (6 days ago)
to hbrob...@googlegroups.com

Mike,

Put a slick chat user interface on this and you'd have a Build-a-Bot kit:

BaB: "Is this your first or subsequent robot?"

Mike: "subsequent"

BaB: "What is its name?"

Mike: "Titania"

BaB: Starts a Titania.yaml, selects the next IP address, ROS_DOMAIN_ID.

Prepares robots.yaml.tmp, network.yaml.tmp toward later master update.

BaB: "What is its first processor type?"

Mike: "AMD 7900"

BaB: "Do you still want Ubuntu 24.04 and ROS2 jazzy?"

Mike: "Yes"

BaB: "What role will this processor have?"

"What actuators?"

"What sensors?"

"Is there another processor?"

and so on.

It could then install, or direct you how to install, the necessary software with appropriate testing and safety routines.

When finished (?!) it would update the master .yaml's welcoming the new robot.


I have a similar, but much smaller organization issue.  I have my flavor of articubot_one on my GitHub.

It is copied onto both my robot Stormy the Stingray, and my base computer LinuxBox.  Same OS, same ROS.  I figure, simplistically, that the redundancy is harmless.  Some things (sensing, driving) only run on the robot.  Other things (navigation) only run on the base computer.  Some things (aliases) they share.  But they shouldn't get in each other's way.

The problem comes with GitHub.  Mostly I only work on one or the other then push/pull and keep them synchronized.  But yesterday I had a joystick issue where roboclaw_driver only wants Twist (unless I hack the code) but joy and twist_mux were publishing TwistStamped.  I had to update the joy.yaml on LinuxBox and twist_mux on Stingray.  I know when I come to update GitHub it's going to complain.

Then articubot_one is only a subset of my robot_ws which contains mostly driver packages.  I'd like to keep it all updated on GitHub.  But, despite what they show in the tutorials, GitHub is messy and cranky.

Maybe one of my Digital Mentors will help.

James

James H Phelan
"Nihil est sine ratione cur potius sit quam non sit"
Leibniz

Pito Salas

Feb 16, 2026, 8:34:02 AM (6 days ago)
to hbrob...@googlegroups.com
I’ve experienced similar things. On the surface it does sound like something for ansible or similar tools.

 But…

How often are you needing to build a new robot? How stable are the “correct” configuration parameters?

Yes, to build one robot I actually will rebuild it a dozen times. But my experience is that each time I discover something missing, something that changed since last time, something that I didn't need before that I think I need now... so for me automation would be premature. I would ask: which are the most annoying settings? And handle those surgically. For example, standard environment variables, rosparams, or YAML files. But I don't try to automate it all.

So my approach is to have a single directory on GitHub (I call it Rosutils) that I clone to a new robot. It contains a set of standard .bashrc files, a bunch of scripts, a bunch of aliases, some text file documents, etc. They are not documented or distributed because they change often, and anyway many of the files in that directory are obsolete.

That’s my system. 

Best,

Pito Salas
Boston Robot Hackers &&
Computer Science Faculty, Brandeis University




On Feb 16, 2026, at 2:07 AM, Albert Margolis <almar...@gmail.com> wrote:



Sergei Grichine

Feb 16, 2026, 12:23:32 PM (6 days ago)
to hbrob...@googlegroups.com
Well, Michael — didn’t you ask for it? ;-) I don’t have all the answers (ask Claudia instead), but I do have some thoughts. 

THE QUESTION

Am I overthinking this? Is there a standard ROS way to do multi-robot, multi-computer configuration management that I just haven't found? Or are we all suffering in silence, manually editing hosts files like animals?
- In short — yes. “...grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.”
- When you go big on everything (a.k.a. “overengineering”) — especially alone — you are doomed to drown in minutiae. Don’t ask how I know… maybe Trackroamer? ;-)

I'd love to hear:
** How you manage multi-computer robots
- Thorough, always-updated notes
- The ability to quickly back up, clone, shrink, and restore SD cards (and copy SDs to NVMe drives) keeps me worry-free.
- Keeping my onboard “zoo” limited to Raspberry Pi 4/5 units and development to a couple of reliable old-but-good PCs.
- Running Ubuntu 24.04 everywhere (with continuous apt updates).
- and yes, reliable hardware helps not to lose bytes to smoke.

and - "The elephant in the room" - does your single codebase accommodate multiple robots, or do you have a different repository for each one?
- if you look at my articubot_one repo, you will see a robots folder with four of my creatures and even a (hopefully) upcoming James Phelan's Stingray contribution (thanks, James!).
- packages for outdoor navigation, sensors, cameras etc. are in separate repos - links in the notes
- anybody can take that and create their own robot by adding a robots/<your_bot_here> subfolder (and, hopefully, contributing to my repo in the spirit of OpenSource).
   
** How you keep configurations in sync
- Keeping OS packages aligned is straightforward: install once, then rely on apt updates (and backups, see above)
- I primarily develop in ROS 2 Jazzy, but also run Kilted on a test PC and on a few robots for future-proofing my codebase. The two coexist nicely, and switching via environment setup is easy.

** How you decide what packages go where
- Careful notes help a lot
- rosdep helps sometimes

** Whether you think I'm solving real problems or just creating new ones
- If you’re asking that question, you probably already know the answer.
- My own answer lies in a different question: Am I trying to build robots, or to build a codebase? I focus on creating and maintaining a reusable, well-documented codebase — and, whenever possible, fixing bugs in other people’s repositories along the way.

** If you've seen tools that already do this
- GitHub Wiki
- SD/NVM imaging and cloning tools
- Standard package management (apt)

Well, I’ve ruffled enough feathers for today. Peace.
Build more great robots — and contribute to open source. The show must go on.

Best Regards,
-- Sergei


Marco Walther

Feb 16, 2026, 1:09:30 PM (6 days ago)
to hbrob...@googlegroups.com, Michael Wimble, rssc...@googlegroups.com
On 2/15/26 21:06, Michael Wimble wrote:
> Subject: Multi-Robot ROS Infrastructure - Or How I Learned to Stop Worrying and Automate Everything
> Alternate title: Multi-everything ROS for Mere Mortals
>
> Hi everyone,
>
> I wanted to share a project I'm working on because I suspect many of you face the same frustrations I do. If you're managing even ONE robot with multiple computers, you know the pain. If you're thinking about building a SECOND robot that shares some components with the first... well, let me tell you about my nightmare.
>
> THE PROBLEM(S)
>
> I have a robot named Sigyn. She has mutliple computers:
> - An AMD desktop running navigation and planning
> - A Raspberry Pi 5 handling gripper vision and grasper control
> - 3 custom boards running Teensy 4.1 MCUs managing sensors and controllers.
> - An OAK-D running AI systems, including object detection and publishing color camera and depth camera images.
>
> Each computer needs different ROS packages. Each has different installed libraries. Each needs to know about all the OTHERS for networking. And here's where it gets messy:
>
> Every time I add a computer or change an IP address, I'm manually updating:
> - /etc/hosts on ALL machines (so they can find each other)

Don't. Use the zeroconf/avahi ;-) I can do things like
`ssh ubu...@ubuntu-2404-pi5b.local` and it just works;-)

> - ~/.ssh/config on ALL machines (so I can ssh between them)
Don't;-) Add subnets as needed. Assuming your home network is
192.168.0.* a section like
Host 192.168.0.*
  ForwardAgent yes
  StrictHostKeyChecking no
  ServerAliveInterval 180
  SendEnv KONSOLE_DCOP

should do the trick. Potentially more for X11 forwarding etc;-)
Same for ~/.ssh/authorized_keys, I have one version for all my local
hosts and the private keys live only on my laptop.


> - The package lists for each machine (what should even BE on that Pi?)
Yeah, that's harder, especially when you're trying things quickly. What's
the right/last package list?

> - Environment variables (RMW_IMPLEMENTATION xml configuration, workspace paths, etc.)
> - My own sanity (rapidly depleting)
>
> And God forbid I rebuild a machine from scratch. I have... notes? Somewhere? Did I write down what packages go on the vision Pi? Was it this version of OpenCV or that one?
>
> Now I'm building a second robot - Titania - and I want to reuse some Sigyn packages but not all. Some computers will run the same code. Some won't. The combinatorial explosion of "what goes where" is making my head hurt.
>
> THE BREAKING POINT
>
> I realized I was spending too much time on CONFIGURATION and synchronization instead of ROBOTICS. I was afraid to change anything because I'd have to remember to update it everywhere. I had the same IP addresses hardcoded in multiple places. My bash aliases were inconsistent across machines. Half my packages were in one giant repo and I couldn't figure out which ones were actually dependencies of which.
>
> Enough. There has to be a better way.
>
> THE VISION
>
> I'm building a new management system (working name: SIgyn2, because creative naming is hard). The core idea is simple: YAML configuration files that describe EVERYTHING, and automation that makes it all happen.
>
> Here's what I'm working toward, roughly in this order:

Look into configuration management setups. Ansible, Chef, Puppet, ....
all do similar things. And there are many more out there. Don't reinvent
the wheel.

>
> PHASE 1: NETWORK SANITY
> - Define all robots and their computers in one place (robots.yaml)
> - Define all network info in one place (network.yaml)
> - Have a script automatically update /etc/hosts and ~/.ssh/config everywhere
> - Never manually edit hosts files again (this alone would be worth it)

When you manage a couple of thousand [virtual] computers, the only
way is 'You can look, but don't touch anything'. That requires a lot of
discipline, and it's probably not as good for development setups.

So, you would probably start with some checkpoints and have to record
all the changes after that.

-- Marco

James H Phelan

Feb 16, 2026, 1:57:30 PM (6 days ago)
to hbrob...@googlegroups.com

Marco,

"Use the zeroconf/avahi ;-) I can do things like
`ssh ubu...@ubuntu-2404-pi5b.local` and it just works;-)"

What is this zeroconf/avahi?

Google AI says:

Avahi is an open-source implementation of Zeroconf (Zero-configuration networking) for Linux, facilitating "plug-and-play" functionality by allowing devices on a network to automatically discover services (like printers or file shares) without manual IP configuration. It supports mDNS/DNS-SD and is compatible with Apple's Bonjour. 

This video explains how Zeroconf protocols, such as mDNS and DNS-SD, work to enable device discovery on a network: [with apparent security cautions]

https://www.youtube.com/watch?v=ZZy0dFPjHYE&t=97s

~/.ssh/config on ALL machines (so I can ssh between them)

"Add subnets as needed. Assuming your home network is 192.168.0.* a section like
Host 192.168.0.*
  ForwardAgent yes
  StrictHostKeyChecking no
  ServerAliveInterval 180
  SendEnv KONSOLE_DCOP
"

Can you elaborate on these settings and what they do?  I've not seen such before.

https://poe.com/s/L5MaMSj23kqr19ddjNkf    for the full dialog

GPT-5.2-Codex - Explain these Ubuntu 24.04 network settings


Host 192.168.0.*
  ForwardAgent yes
  StrictHostKeyChecking no
  ServerAliveInterval 180
  SendEnv KONSOLE_DCOP

Below is what each line means in your SSH client config (typically ~/.ssh/config). It applies to any host you connect to whose hostname or IP matches 192.168.0.*.

Host 192.168.0.*
  ForwardAgent yes
  StrictHostKeyChecking no
  ServerAliveInterval 180
  SendEnv KONSOLE_DCOP

Host 192.168.0.*

  • This is a pattern. All SSH connections to IPs in the 192.168.0.0/24 range use the options below.

ForwardAgent yes

  • Forwards your SSH agent to the remote host.
  • Lets the remote host use your local SSH keys without copying them.
  • Security note: If the remote host is compromised, your agent could be abused while the session is open.

StrictHostKeyChecking no

  • Disables host key verification prompts.
  • SSH will accept new host keys automatically, and will not warn if a host key changes.
  • Security note: This makes you vulnerable to man‑in‑the‑middle attacks. Safer options: ask (default) or accept-new.

ServerAliveInterval 180

  • Sends a keepalive packet every 180 seconds.
  • Helps keep connections from dropping due to idle timeouts.

SendEnv KONSOLE_DCOP

  • Sends the environment variable KONSOLE_DCOP from your local machine to the server if the server allows it (via AcceptEnv in its sshd config).
  • This variable was used by older versions of KDE Konsole; it is usually irrelevant today.

If you want, I can suggest safer defaults for LAN usage or help you tailor this for specific hosts...

James

James H Phelan
"Nihil est sine ratione cur potius sit quam non sit"
Leibniz

Marco Walther

Feb 16, 2026, 5:36:12 PM (6 days ago)
to hbrob...@googlegroups.com
On 2/16/26 10:57, 'James H Phelan' via HomeBrew Robotics Club wrote:
> Marco,
>
> "Use the zeroconf/avahi ;-) I can do things like
> `ssh ubu...@ubuntu-2404-pi5b.local` and it just works;-)"
>
> What is this zeroconf/avahi?


It's basically a 'service', where each client publishes its own
'preferred network name' and other clients can find hosts by those names.

In my home, there are currently eight Pi's online, mostly 24*7,
controlling everything from the sprinklers to my GNSS base station;-)

All/most home routers/Wifi-APs will support Zeroconf out of the box. So,
you don't have to manage the name <--> IP mapping. MacOS & Windows
handle it as well;-)

Makes your life a lot easier.


> StrictHostKeyChecking no
>
> * Disables host key verification prompts.
> * SSH will accept new host keys automatically, and will not warn if a host key changes.
> * Security note: This makes you vulnerable to man-in-the-middle attacks. Safer options: ask (default) or accept-new.
Yes, but it helps when you regularly rebuild hosts (Pi's).

> SendEnv KONSOLE_DCOP
>
> * Sends the environment variable KONSOLE_DCOP from your local machine to the server if the server allows it (via AcceptEnv in its sshd config).
> * This variable was used by older versions of KDE Konsole; it is usually irrelevant today.
That might be true.

> If you want, I can suggest safer defaults for LAN usage or help you tailor this for specific hosts...

Your AI got the gist of it, but it's really helpful to be able to read
manuals without AI help and adapt things to your needs;-) Try
$ man ssh_config

You can't delegate the responsibility for your network to me or any AI.

-- Marco
