Is there a "easy" way to add the Intel driver for I226-V NIC / Intel IGC driver


Steffen Hansen

Jul 14, 2025, 10:09:34 AM
to esos-users
Hi,

 Not entirely sure I should even ask in here, as what I'm trying to accomplish is a step or four below what others are doing with esos. Long explanation in the PS.

 As the subject says: is there an "easy" way to add the Intel IGC driver to esos in order to use an Intel I226-V 2.5Gb NIC?

Ashamed to admit I haven't used menuconfig since the last millennium, when I put a dual Pentium Pro Compaq ProLiant 5500 into production (Samba+print) as a replacement for our Windows NT 3.51 file/print server.
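
For the record, here's roughly where I expect the knob to live in a stock kernel tree (a sketch only - the menu text varies between kernel versions, and I haven't confirmed how the esos build drives the kernel config):

# sketch, assuming a vanilla kernel source tree (path/version illustrative)
cd linux-6.x/
make menuconfig
#   Device Drivers --->
#     Network device support --->
#       Ethernet driver support --->
#         [*] Intel devices
#         <M>   Intel(R) Ethernet Controller I225-LM/I225-V support
grep CONFIG_IGC .config   # expect CONFIG_IGC=m (or =y to build it in)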

BR, and thank you, Steffen

The Post Scriptum part, as promised:

Pre-story: My bare-metal Home Assistant server died on me. I decided to go with a Proxmox setup instead, which went way too easily (I now even have a backup!). But a single server with a single disk is way too simple, right, so let's complicate the h3ll out of it. What if I were to do an HA cluster with 2 nodes (plus a vote) and a SAN backend (microscale)? Researching potential backends (fnarr) I stumbled over esos and thought: "This looks fairly smart."

Found a couple of old identical HP EliteDesk 1-liter boxes to use as cluster hosts (i5-10500 w/32GB and an NVMe for the OS) and a third EliteDesk to use as the "SAN" (Pentium Gold/16GB), which happens to have 2 NVMe slots - so I did a BIN on 2 x Samsung 990 Plus and put those in. Booted the esos USB, configured the two Samsungs in RAID 1, and learned a ton about iSCSI targets and initiators. Anyway - all is well and I can do live migration between the two hosts without dropping a ping packet.
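
For anyone curious, the moving parts boil down to something like this (an illustrative sketch - device names, IP, and IQN are placeholders, and the esos TUI drives the target side for you):

# on the "SAN" box: mirror the two NVMe drives (placeholder device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
cat /proc/mdstat                    # watch the initial resync

# on a Proxmox host: discover and log in to the target (open-iscsi)
iscsiadm -m discovery -t sendtargets -p <san-ip>
iscsiadm -m node -T <target-iqn> -p <san-ip> --login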

With a 1Gb link I get 110-115 MB/s transfer speed, as is to be expected. That should be OK for the workload that'll be on the setup, but as I may have mentioned, I can't leave well enough alone. All 3 hosts have a spare M.2 2230 slot meant for WLAN, which I don't need, so a little shopping on t'internet sees me with 3 x Intel I226-V 2.5Gb NICs in M.2 2230 format and a 5-port UniFi 2.5Gb switch. The Proxmox hosts have no issue whatsoever: they find the card, and once activated it can ping on the subnet that will be dedicated to iSCSI traffic. However, esos doesn't see the card, and reading through the documentation I see that these NICs are not listed under supported hardware - hence the post you're reading.

Output from misc. utilities:

[esos] lspci | grep -i ethernet
00:1f.6 Ethernet controller: Intel Corporation Device 0d4c
03:00.0 Ethernet controller: Intel Corporation Device 125c (rev 04)
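
Note to self: lspci -nn would have shown the numeric IDs the kernel keys its driver match on - 8086:125c should be the I226-V:

# -nn prints vendor:device IDs; 8086:125c should appear for the I226-V
lspci -nn | grep -i ethernet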

[proxmox] lspci | grep -i ethernet
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (11) I219-LM
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)

[esos] ethtool -i eno1 | grep driver
driver: e1000e

[proxmox] ethtool -i eno1 | grep driver
driver: e1000e

[proxmox] ethtool -i enp2s0 | grep driver
driver: igc
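
A quick way to confirm the running esos kernel simply doesn't ship the module (both checks are guesses at what's available on the esos image):

# no output from either = igc was never built for this kernel
find /lib/modules/$(uname -r) -name 'igc*'
zcat /proc/config.gz | grep IGC     # only works if the kernel exposes its config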


[esos] lshw
        *-network
             description: Ethernet interface
             product: Intel Corporation
             vendor: Intel Corporation
             physical id: 1f.6
             bus info: pci@0000:00:1f.6
             logical name: eno1
             version: 00
             serial: e0:70:ea:cb:1b:20
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=0.4-4 ip=172.22.1.225 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s

         *-network UNCLAIMED
                description: Ethernet controller
                product: Intel Corporation
                vendor: Intel Corporation
                physical id: 0
                bus info: pci@0000:03:00.0
                version: 04
                width: 32 bits
                clock: 33MHz
                capabilities: pm msi msix pciexpress cap_list
                configuration: latency=0
                resources: memory:e1000000-e10fffff memory:e1100000-e1103fff
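
The UNCLAIMED above just means no driver is bound to the device. Once a kernel with igc is in place it should bind automatically; a manual sanity check might look like this (assuming the driver gets built as a module):

modprobe igc             # load the module if built as =m
dmesg | grep -i igc      # look for the probe messages
lsmod | grep igc         # confirm it's resident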


[proxmox] lshw
        *-network
             description: Ethernet interface
             product: Ethernet Connection (11) I219-LM
             vendor: Intel Corporation
             physical id: 1f.6
             bus info: pci@0000:00:1f.6
             logical name: eno1
             version: 00
             serial: b0:22:7a:df:ee:a9
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=6.8.12-9-pve duplex=full firmware=0.4-4 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
             resources: irq:124 memory:f1300000-f131ffff

          *-network
                description: Ethernet interface
                product: Ethernet Controller I226-V
                vendor: Intel Corporation
                physical id: 0
                bus info: pci@0000:02:00.0
                logical name: enp2s0
                version: 04
                serial: c4:62:37:09:8a:82
                capacity: 1Gbit/s
                width: 32 bits
                clock: 33MHz
                capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
                configuration: autonegotiation=on broadcast=yes driver=igc driverversion=6.8.12-9-pve duplex=full firmware=2017:888d ip=10.10.10.220 latency=0 link=yes multicast=yes port=twisted pair
                resources: irq:19 memory:f1000000-f10fffff memory:f1100000-f1103fff



Steffen Hansen

Jul 18, 2025, 2:00:35 PM
to esos-users
Apologies - this one can actually be ignored. I'm building 4.4.1 with IGC support (it wasn't that hard to include). I do get an error when running the chroot_build.sh script, but there's another post waiting on approval detailing that issue.

I was a bit fearful of the whole Build It Yourself part, but it seems doable - even for me.
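
Once the rebuilt image boots, I figure the check is just this (interface name is a guess based on the PCI slot):

# hypothetical post-rebuild check
ethtool -i enp3s0 | grep driver   # should now report: driver: igc
ip link                           # the 2.5Gb port should be listed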

BR and thank you,
Steffen
