Announcing Incus 0.3


Stéphane Graber

Nov 27, 2023, 4:50:06 PM
to LXC users mailing-list, LXC development mailing-list
Hello,

I'm very excited to announce the release of Incus 0.3!

With Incus, you can easily create and manage both containers and
virtual machines on a standalone system or a cluster of machines.
It provides a variety of storage and networking options, with the goal
of feeling like you're interacting with a cloud, but all locally.

The highlights for this third release of Incus are:
- Advanced authorization control with OpenFGA
- Ability to hot-plug/hot-remove shared paths into virtual machines (see the sketch after this list)
- Ceph and OVN support in the LXD migration tool
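
As a quick illustration of the shared-path hot-plug, adding and removing
a disk device on a running VM looks roughly like this (the instance name
"v1" and the paths are placeholders, not taken from the release notes):

# hot-plug a host directory into the running VM "v1" (placeholder names)
incus config device add v1 shared-data disk source=/srv/data path=/mnt/data
# and hot-remove it again
incus config device remove v1 shared-data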

You can give Incus a try online:
https://linuxcontainers.org/incus/try-it

Before getting it on your own system:
https://linuxcontainers.org/incus/docs/main/tutorial/first_steps

A detailed release announcement can be found here:
https://discuss.linuxcontainers.org/t/incus-0-3-has-been-released/18351

For those preferring videos, a YouTube video is available here:
https://www.youtube.com/watch?v=dcPBxavBJWQ

The code can be found on GitHub:
https://github.com/lxc/incus

And support is provided through our community forum:
https://discuss.linuxcontainers.org

Enjoy!

Stéphane Graber, on behalf of the Incus team.

Saint Michael

Nov 27, 2023, 5:54:19 PM
to Stéphane Graber, LXC users mailing-list, LXC development mailing-list
Is there paid support?


Stéphane Graber

Nov 27, 2023, 6:03:16 PM
to Saint Michael, LXC users mailing-list, LXC development mailing-list
I personally do offer paid support on Incus. Details can be found
here: https://zabbly.com/incus/

Stéphane

Saint Michael

Nov 27, 2023, 7:18:15 PM
to Stéphane Graber, LXC users mailing-list, LXC development mailing-list
Thank you for your answer.
For a business to try this, it needs to have some form of paid support.

Saint Michael

Nov 29, 2023, 2:31:34 PM
to Stéphane Graber, LXC users mailing-list, LXC development mailing-list
The LXC question that eludes me:
For some reason, only "veth" works on virtualized environments.
Anything else crashes against the restrictions of almost any
virtualization technology.
The issue is this: consider a server with

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
address x.x.x.x/24
gateway x.x.x.1
bridge-ports eth0
bridge_waitport 0
bridge-stp off
bridge-fd 0
bridge_maxwait 0

in this scenario, systemd's networking.service will hang forever and
the server will never let anyone log in.
The only workaround is to force a timeout, and then add a @reboot
line to crontab that brings the network back up.

Is there a real, organic way to fix this? I have not been able to
solve the riddle.
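
(For concreteness, forcing the timeout amounts to a systemd drop-in
along these lines; this is only a sketch, assuming Debian's ifupdown
networking.service, and the 30-second value is arbitrary:)

# /etc/systemd/system/networking.service.d/timeout.conf
# cap how long ifup may block boot instead of hanging forever
[Service]
TimeoutStartSec=30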

Serge E. Hallyn

Dec 1, 2023, 3:22:29 PM
to Saint Michael, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
On Wed, Nov 29, 2023 at 02:31:20PM -0500, Saint Michael wrote:
> The LXC question that eludes me:
> For some reason, only "veth" works on virtualized environments.

By "on virtualized environments", do you mean a lxd kvm instance?

> Anything else crashes against the restrictions of almost any
> virtualization technology.
> The issue is this: consider a server with
>
> auto lo
> iface lo inet loopback
>
> iface eth0 inet manual
>
> auto vmbr0
> iface vmbr0 inet static
> address x.x.x.x/24
> gateway x.x.x.1
> bridge-ports eth0
> bridge_waitport 0
> bridge-stp off
> bridge-fd 0
> bridge_maxwait 0
>
> in this scenario, systemd's networking.service will hang forever and
> the server will never let anyone log in.
> The only workaround is to force a timeout, and then add a @reboot
> line to crontab that brings the network back up.
>
> Is there a real, organic way to fix this? I have not been able to
> solve the riddle.
>

Saint Michael

Dec 1, 2023, 4:00:16 PM
to Serge E. Hallyn, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
For instance, on a VMware virtual machine, the only way to use LXC is
with bridges (veth).
Anything else will fail. You may use phys, assigning one virtual
interface to a single container, but the possible number of containers
is low (11).
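
The phys setup I mean looks roughly like this in the container config
(a sketch only; "ens192" stands in for one of the VMware vNICs):

# pass one VMware vNIC (placeholder name ens192) straight into the container
lxc.net.0.type = phys
lxc.net.0.link = ens192
lxc.net.0.flags = up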

Serge E. Hallyn

Dec 1, 2023, 4:22:28 PM
to Saint Michael, Serge E. Hallyn, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
Can you show some configurations that fail? Honestly I only ever use
veth or host-shared (net type none). I never feel like messing with
macvlan. Consider https://discuss.linuxcontainers.org/t/promiscuous-mode-required-for-mac-vlan-in-vmware/2749/2
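
For reference, the two setups I mean look roughly like this in a
container config (assuming a host bridge named vmbr0):

# veth pair attached to an existing host bridge (placeholder name vmbr0)
lxc.net.0.type = veth
lxc.net.0.link = vmbr0
lxc.net.0.flags = up

# or share the host's network namespace outright ("host-shared"):
# lxc.net.0.type = none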

But I can try some of your container configs in a kvm container.

-serge

Serge E. Hallyn

Dec 1, 2023, 4:22:43 PM
to Serge E. Hallyn, Saint Michael, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
On Fri, Dec 01, 2023 at 03:22:24PM -0600, Serge E. Hallyn wrote:
> Can you show some configurations that fail? Honestly I only ever use
> veth or host-shared (net type none). I never feel like messing with
> macvlan. Consider https://discuss.linuxcontainers.org/t/promiscuous-mode-required-for-mac-vlan-in-vmware/2749/2
>
> But I can try some of your container configs in a kvm container.

In a kvm vm, of course.

Saint Michael

Dec 1, 2023, 4:32:23 PM
to Serge E. Hallyn, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
The issue is VMware. It will only allow a single MAC address per
interface, so macvlan will not work. Honestly, macvlan and phys will
also fail on a physical machine when you start and stop containers
often; the kernel gets confused.
The only stable networking is veth.

Serge E. Hallyn

Dec 5, 2023, 9:04:18 AM
to Saint Michael, Serge E. Hallyn, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
> The only stable networking is veth.

In theory you should also be able to use anything that works with CNI
(https://s3hh.wordpress.com/2017/10/19/cni-for-lxc/) though it has been
a few years since I've tried that.

I should think it possible to write something that works more like
virtio networking in kvm for lxc.

I agree veth is the most stable option, and I almost always use veth,
and don't mind it. What exactly are you looking for? Do you want an
IP address that is fully visible outside the host, or just something
that performs better?

Saint Michael

Dec 5, 2023, 11:43:53 AM
to Serge E. Hallyn, Stéphane Graber, LXC users mailing-list, LXC development mailing-list
The addresses of the containers must be public IPs; I use LXC as a
full VM replacement. But we need a single MAC address to be used for
all of the IPs, while each IP is active in a different container.
I don't know if this is even possible at the kernel level.
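
The closest thing I can picture is proxy ARP with a per-address route
on the host, roughly like this (placeholder names only: public IP
203.0.113.10, host uplink eth0, host-side veth veth-c1; untested here):

# host keeps its single MAC and answers ARP for the containers' IPs
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# route one public /32 to the host side of a container's veth pair
ip route add 203.0.113.10/32 dev veth-c1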