Nas M.2 Nvme

Charise Zelnick

Aug 3, 2024, 6:08:58 PM
to lectmonmota

Please just use flash_l4t_external.xml for the partition layout, and leave num_sectors unchanged, as it will be detected by the flashing scripts during flashing.
Also, do you really need -S 800GiB? The size of the APP partition is configurable on first boot in the OEM config.
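For reference, a hedged sketch of what the external-storage flashing invocation with that layout file tends to look like. The board config name and device node here are examples, not taken from this thread; check the developer guide for your L4T release. The command is only printed so it can be reviewed before running:

```shell
# Sketch only: builds and prints the flashing command instead of running it.
# BOARD is an assumption; replace it with your board's config name.
BOARD="jetson-agx-orin-devkit"
FLASH_CMD="sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
--external-device nvme0n1p1 \
-c tools/kernel_flash/flash_l4t_external.xml \
--external-only --append ${BOARD} external"
echo "$FLASH_CMD"
```

Run it from the top of the Linux_for_Tegra directory once you have checked the options against the guide.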

Have you checked the developer guide?
-to-flash-an-encrypted-rootfs-to-an-external-storage-device
You need to use flash_l4t_nvme_rootfs_enc.xml as the partition layout file to enable disk encryption.
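As a rough sketch, the encrypted variant of the flashing command swaps in that layout file. The ROOTFS_ENC=1 environment variable and the board name below are assumptions based on the disk-encryption section of the developer guide; verify the exact options for your release. Printed for review, not executed:

```shell
# Sketch only; ROOTFS_ENC=1 and the board config name are assumptions.
ENC_CMD='sudo ROOTFS_ENC=1 ./tools/kernel_flash/l4t_initrd_flash.sh \
--external-device nvme0n1p1 \
-c tools/kernel_flash/flash_l4t_nvme_rootfs_enc.xml \
--external-only --append jetson-agx-orin-devkit external'
echo "$ENC_CMD"
```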

Board and ribbon are 100% tested before they are shipped, so in most cases, if they are installed properly, it should work, except for some SSD drives not supported by the Raspberry Pi PCIe protocol. May I ask what kind of SSD drive is currently being tested?

I also want to note that for the Argon One, the NVMe base needs to be screwed back onto the case for the NVMe to work. We use the pogo pins to draw power for the NVMe drive instead of relying on the cable; the pogo pins can provide 3 A of current, versus the 1 A provided by the ribbon cable.

I've just had the same problem with a Samsung PM981 drive; I could not get it to be recognised at all. However, I then updated the EEPROM firmware using this command and it worked right after a reboot. Check this for an updated version and see how you go.
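The exact command is not quoted above, but the standard bootloader EEPROM update flow on Raspberry Pi OS uses the rpi-eeprom tooling. Stored and printed here for review; run the steps yourself on the Pi:

```shell
# Standard Raspberry Pi bootloader EEPROM update steps, printed for review.
EEPROM_STEPS='sudo apt update && sudo apt full-upgrade   # get the latest rpi-eeprom package
sudo rpi-eeprom-update      # compare current vs. latest bootloader firmware
sudo rpi-eeprom-update -a   # stage the newest bootloader EEPROM
sudo reboot                 # the update is applied during reboot'
echo "$EEPROM_STEPS"
```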

This might actually be one of those cases where the NVMe Base needs the extra 5 V supply from GPIO. The flat flex cable is technically limited to providing 5V@1A, or around 5 W. The 3V3 supply on the NVMe Base can do up to 3 A continuous if needed, when power is provided through the extra 5 V header.

Sometimes after a reboot, /dev/nvme0n1 will show up, but on the next reboot it will disappear. While it is detected, it seems to function OK until any kind of heavy work, like cloning the OS from the SD card using rsync or creating swap files using dd. At that point it fails and goes read-only, and upon reboot it is not detected again.

The issue seems to be related to the disk, a Samsung 980 PRO PCIe 4.0 NVMe M.2 250GB. I went back to the store and replaced it with a Samsung 970 EVO Plus PCIe 3.0 NVMe M.2 SSD 500GB, and the disk is now detected on the RPi.

Not sure why the Samsung 980 PRO 250GB did not work; it is included in the list of disks tested as compatible with the Pimoroni NVMe Base. Maybe because it is 250GB and not 1TB? Or because it is PCIe 4.0?

Which power supply do you use? I have seen in this forum that some people have issues if they use wrong/unsupported ones. Additionally, I have read about NVMe detection issues on an RPi4 when the power supply is insufficient.

I know it seemed like you updated the bootloader to the latest firmware, but could you please try updating it again with the method in this video? I also faced this problem, but after updating it this way the NVMe started to be detected.

You actually do not even need a USB enclosure for the NVMe drive (I debated buying one, but I found a way to do it using just the NVMe HAT I already had for my Pi 5). If Pi OS can detect the NVMe drive (through lsblk or lspci) and it is mounted, you can select it as the target in Pi Imager (on Pi OS) and install Umbrel OS as a custom OS. I tried other methods, but this one was by far the easiest (no repartitioning or extra config required other than updating the EEPROM bootloader and so on). This is the link I used for the NVMe setup, and then I just selected the drive as a target in Pi Imager: Getting Started with NVMe Base for Raspberry Pi 5.
Hope this helps
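A quick, read-only way to confirm the drive is visible before pointing Pi Imager at it (safe to run as-is; it only checks for the device node):

```shell
# Checks whether the first NVMe block device has enumerated.
check_nvme() {
  if [ -e /dev/nvme0n1 ]; then
    echo "found /dev/nvme0n1"
  else
    echo "no /dev/nvme0n1 yet; try 'lspci' to see if the controller enumerates"
  fi
}
check_nvme
```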

So I had an M.2 SATA drive as my main boot and root drive. It was encrypted via LUKS, and the root partition itself was behind LVM. I was running Fedora 39 with kernel 6.7.3-200 and NVIDIA drivers. The drive was rather small and running out of space, so I got a new, much larger M.2 NVMe drive to replace it. There is another encrypted drive, a normal SATA disk, that is also in the LVM volume group for root.

I booted the system from a Fedora 39 live USB, and from there I cloned the M.2 SATA disk to an img file on another SATA SSD in the system, using dd. Once that had finished, I replaced the M.2 SATA with the new NVMe and booted back into the live USB. It saw the new /dev/nvme0 device, and I used dd to write the img file to the new M.2 NVMe drive.
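For anyone repeating this, the clone-to-image step looks like the sketch below, demonstrated on a scratch file so it is safe to run as-is. For the real clone, substitute the source disk node (e.g. /dev/sda) and the image path, and run as root:

```shell
# dd clone demonstrated on a scratch file standing in for the real disk.
src=$(mktemp)    # stand-in for the source disk, e.g. /dev/sda
img=$(mktemp)    # destination image file on another drive
printf 'pretend disk contents' > "$src"
dd if="$src" of="$img" bs=4M conv=fsync status=none   # bitwise copy
if cmp -s "$src" "$img"; then RESULT="image matches source"; else RESULT="copy differs"; fi
echo "$RESULT"
rm -f "$src" "$img"
```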

Once that was finished, I used a series of parted, cryptsetup resize, pvresize, and lvresize commands to resize the root partition to take up all the new free space. I checked the UUIDs of the partitions and they were the same.
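The usual ordering for that resize stack, as a sketch. The device, mapper, and VG names below are placeholders, not taken from the post; substitute your own from lsblk and lvs. Printed for review only:

```shell
# Grow each layer from the outside in; names are placeholder assumptions.
RESIZE_STEPS='parted /dev/nvme0n1 resizepart 2 100%    # grow the partition
cryptsetup resize luks-root                            # grow the open LUKS mapping
pvresize /dev/mapper/luks-root                         # grow the LVM physical volume
lvresize -l +100%FREE fedora/root                      # grow the root logical volume
resize2fs /dev/fedora/root                             # ext4; use xfs_growfs for XFS'
echo "$RESIZE_STEPS"
```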

vgscan, pvscan, and lvscan all worked fine, and I was able to mount the root partition from the live USB and even chroot into it. From there I cleared out my old /lib/modules directories from kernels 3, 4, and 5, and ran dracut --regenerate-all to rebuild the initramfs.

I rebooted without the USB, so from the new M.2 NVMe disk. GRUB worked fine and tried booting the kernel. It prompted me for the LUKS password for the SATA disk, but then complained that it could not find the device with the UUID from the new NVMe disk.

I suggest that you boot back into the live USB with only the new M.2 NVMe drive attached and mount the installed file systems on /mnt. Then use a chroot environment and dracut to create an initramfs image that supports the NVMe drive. Normally the initramfs image only includes drivers for hardware installed at the time the image was created.
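Those steps look roughly like the sketch below. The partition and VG names are placeholder assumptions (adjust to your layout), and the block is printed for review rather than executed:

```shell
# Chroot + initramfs rebuild steps; names are placeholder assumptions.
CHROOT_STEPS='mount /dev/fedora/root /mnt
mount /dev/nvme0n1p2 /mnt/boot          # mount /boot too, or the new images land nowhere
mount /dev/nvme0n1p1 /mnt/boot/efi
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt
dracut --force --regenerate-all         # rebuild every initramfs with NVMe support
grub2-mkconfig -o /boot/grub2/grub.cfg  # refresh grub config from inside the chroot
exit'
echo "$CHROOT_STEPS"
```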

There seem to be several ways to do this. I really suggest that you use the latest respin ISO of Fedora from Index of /pub/alt/live-respins, since the original install media uses an older kernel (6.5.9?) that is probably no longer on your installed system, and you may encounter problems running dracut when booted with a kernel and supporting packages that are not installed in the chroot environment.

Thank you all, you set me in the right direction. I was able to fix this with another dracut + grub config run. The first time I had not mounted everything before chrooting (my /boot partition was never in the chroot), but I fixed that the second time around.

This is my first issue with my very first build, after using computers since before the interweb. I intended to be ambitious and maybe overbuild, with some headroom for future upgrades. Whatever I own, I do try to get the best out of it, and to that end I am stuck installing Win10 after RAID 0-ing the two drives in the BIOS. I have gone through the videos on loading the bottom drivers and then the RAID config, but I still cannot get the Windows setup to see them as RAIDed drives. I need a detailed idiot's guide rather than a detailed expert's guide, if that makes sense. Any help with videos or PDFs would be appreciated; at 43, I feel it might be like teaching someone to use a spoon. I will continue my own searching and googling, as there must be something I am missing.

3. Enter the RAIDXpert2 menu in the BIOS. You need to initialise (this writes some data to the drives to prepare them for RAID) all the hard drives that will be used for RAID. This option is in the RAIDXpert2 menu, so check all the options.

I am trying RAID 0. In the BIOS I create the RAID array, and at first glance everything is fine (I can add screenshots from the BIOS). Then I made a bootable USB flash drive and added the drivers to it. During installation I add the 3 drivers sequentially from the DD folder; they install, but two separate NVMe disks are still displayed.

Strange thing: I tried again from the beginning. In the BIOS, I left SATA in AHCI mode (it was previously set to RAID) and NVMe as RAID. I got to the driver installation step, and this time I could not install anything from the nvme_did folder; the list was empty, as if incompatible with my hardware. But from the nvme_cc folder (for a different processor): success, and one disk appeared! Everything seems to be set up and working so far; I am keeping an eye on it.

I used the NVMe_DID drivers, browsing to and then installing rcbottom, rcraid, and rccfg in that order. It made no difference: after installing all three drivers and doing a refresh, the Windows installer still sees two separate NVMe drives and can't install.

However, from reading the AMD NVMe/SATA RAID Quick Start Guide for Windows Operating Systems, I've seen that this can happen when the system has multiple controllers, so no problem: I choose the first one.

Yes, drivers will be provided and added to that page when they become available. For now you can use the previous Windows 11 drivers; I am using them on my Windows 11 installation with 2 drives in a RAID 1 configuration.

The NVMe testing service offers conformance and interoperability testing across various OSes, drivers, and hardware platforms, as well as PCIe SSDs and PCIe servers. Testing here helps products qualify for the NVMe Integrator's List.

This is the preferred method of issue reporting for IOL INTERACT. If you do not have an account already, you may request access via the link below, or email nvm...@iol.unh.edu directly to request access.

1) Make sure your BIOS is operating in AHCI, not RAID, mode (Clonezilla can't see the NVMe otherwise); you'll want to set this back before rebooting into Windows.
2) If you're using GPT, you'll need to run parted and fix the allocation (the backup GPT header still sits where the smaller disk ended) in order for Windows to see the whole disk.
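Two common ways to fix that backup-GPT mismatch after cloning to a larger disk (the device node below is an example; printed for review only):

```shell
# /dev/nvme0n1 is an example device node; substitute your cloned disk.
GPT_FIX='sgdisk -e /dev/nvme0n1   # relocate backup GPT structures to the true end of disk
parted /dev/nvme0n1 print         # parted also detects the mismatch and offers to Fix it'
echo "$GPT_FIX"
```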

Overall, I have two problems. With an HDD, it comes up as sda, so my flash drive comes up as sdb and the script works. With the M.2 SATA drive, my flash drive comes up as sda instead, so I need to change that, but I don't know how to do it.
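One way around the sda/sdb ordering problem is to select the device by its model string (or use the stable /dev/disk/by-id/ names) instead of hardcoding sda. A runnable sketch on canned output; on the real machine, pipe in the output of `lsblk -dno NAME,MODEL`:

```shell
# Picks the first device whose MODEL column matches a substring.
pick_by_model() {
  awk -v m="$1" 'index($0, m) { print $1; exit }'
}
# Canned sample of `lsblk -dno NAME,MODEL` output, for demonstration:
sample='sda SanDisk_Ultra_USB
sdb WDC_WD10EZEX'
target=$(printf '%s\n' "$sample" | pick_by_model SanDisk)
echo "/dev/$target"   # -> /dev/sda in this sample
```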

If I understand correctly, if you run the restore in batch mode, it will not ask the user and will convert the drive automatically during the restore. To be precise, it creates a temporary linked image in the temp folder and restores that.

I'm in the exact same boat, but I've gone ahead with ask_user and forced confirmation. My plan is to distribute a USB stick with everything preloaded on it, so that the user just has to confirm the target and then confirm the operation.
