Running multiple ships on a single cloud instance


Christopher King

Dec 13, 2020, 1:36:25 PM
to urbit-dev
Is running multiple ships on a single cloud instance (in my case EC2) an anti-pattern? Is there anything in the Urbit architecture that makes this unwise or infeasible, for example in relation to the swap space issue?

Philip Monk

Dec 13, 2020, 3:41:49 PM
to Christopher King, urbit-dev
It works fine in my experience. Swap-wise, each ship seems to require "seeing" 2GB at startup, but it doesn't use it all, and the "seeing" can overlap, so if you have small piers, you can usually start several even while only having 3GB of RAM. If one pier is large, you may need to start that one last. For example (and this is totally empirical, I've never properly studied it):

Say Pier 1 uses 1500MB, Pier 2 uses only 500MB, and you want to start them both on a machine with 3GB of RAM (including swap). If you boot pier 1 first, it checks that you've got at least 2GB; you've got 3GB, so you're fine. Then you boot pier 2, and it checks but finds you only have 1.5GB remaining, so it fails to start.

If you boot pier 2 first, it checks and finds you've got 3GB, so it starts. Then you boot pier 1, and when it checks it finds you've still got 2.5GB, so it *also* starts.

This is obviously not a great state of affairs, but as best I can tell that's how it works.
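
One practical consequence: if you script your boots, start piers in ascending order of size so each startup check runs while the most memory is still free. A rough, untested sketch (it assumes your piers live under ~/piers and the urbit binary is on your PATH):

    # boot smallest piers first so the 2GB startup check passes for each;
    # -d runs the ship as a daemon
    for pier in $(du -s ~/piers/* | sort -n | awk '{print $2}'); do
      urbit -d "$pier"
      sleep 30  # give each ship time to get past its startup check
    done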

Christopher King

Dec 14, 2020, 9:20:18 PM
to urbit-dev
I’m wondering if there is a way to containerize with Docker or something similar that would help avoid some of the suboptimal aspects of what Philip describes. I’m not much of a Docker pro, so I’m not sure; does anyone know?
--

Best,
Chris

Brendan Hay

Dec 15, 2020, 4:01:55 AM
to Christopher King, urbit-dev
Tlon Hosting currently runs each ship in its own container via Kubernetes and runc. For swap behaviour, we then just enforce a 2GiB limit for each container so no ship steps on another's toes. [1]

If you want to use Docker specifically, there are options such as --memory and --memory-swap that you can use to limit what an individual container has access to.
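
For example (untested, and the image name and pier path are placeholders rather than anything we ship):

    # --memory caps the container's RAM; --memory-swap is the RAM+swap
    # total, so this allows up to 2g of swap on top of 2g of RAM
    docker run -d --name ship1 \
      --memory=2g --memory-swap=4g \
      -v /path/to/pier:/urbit \
      some-urbit-image

Setting --memory-swap equal to --memory would forbid swap for the container entirely.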

[1] As an optimisation, memory is bin-packed by defining a resource quota for each ship workload. Each ship's memory request/limit is (re)configured based on age and pier size, so the ship is scheduled on, or migrated to, a compute node with sufficient capacity for the resource (swap/memory) request. Sizing the initial request by age and pier size means we minimise preemptions (migrations) for ships that upload large piers. There is additional configuration for what the ship can "see" versus what it "gets", with the latter being fluid up to the 2GiB limit.
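
As a sketch, the per-container stanza boils down to something like this (illustrative values, not our actual manifests):

    resources:
      requests:
        memory: "512Mi"  # what the scheduler reserves; sized per ship by age/pier size
      limits:
        memory: "2Gi"    # the hard cap; actual usage is fluid up to here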

Christopher King

Dec 15, 2020, 8:19:38 AM
to urbit-dev
This is really helpful. Thanks for the in-depth explanation.
--

Best,
Chris