Note: The package architecture has to match the Linux kernel architecture, that is, if you are running a 64-bit kernel, install the appropriate AMD64 package (it does not matter whether you have an Intel or an AMD CPU). Mixed installations (e.g. Debian/Lenny ships an AMD64 kernel with 32-bit packages) are not supported. To install VirtualBox anyway, you need to set up a 64-bit chroot environment.
We provide a yum/dnf-style repository for Oracle Linux/Fedora/RHEL/openSUSE. All .rpm packages are signed. The Oracle public key for rpm can be downloaded here. You can import this key manually (not normally necessary, see below!).
Note that importing the key is not necessary for yum users (Oracle Linux/Fedora/RHEL/CentOS) when using one of the virtualbox.repo files from below, as yum downloads and imports the public key automatically! Zypper users should import the key manually.
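The import commands themselves are not shown above; a minimal sketch, assuming the key file name Oracle publishes on its download page (oracle_vbox_2016.asc):

```shell
# Download Oracle's public rpm key and import it (file name is an
# assumption based on Oracle's download page; yum users can normally
# skip this, as described above)
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc
sudo rpm --import oracle_vbox_2016.asc

# openSUSE/zypper systems can import into the rpm keyring the same way
sudo rpmkeys --import oracle_vbox_2016.asc
```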
Hello everyone!
After upgrading my Arch install, I have trouble starting VirtualBox. I get the message "The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing: /sbin/rcvboxdrv setup". After executing the command I get:
Closing, for deletion. Restored after it became apparent that it wasn't the same problem. Some posts will seem slightly odd due to them originally being in a different topic and then merged back into this one.
Those files are generally owned by the "linux" package. Since you did a partial update and the correct "linux" package wasn't installed, the files didn't exist when the modules were installed, and depmod created them. Now you've got a bunch of orphaned files in the way.
Hmm, so the modules should be there. As you didn't have the correct kernel at the time, though, something could easily have gone wrong with the module generation. Reinstall the linux-headers package to regenerate them.
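On Arch, that boils down to something like the following (the dkms and depmod steps are only relevant for the respective setups):

```shell
# Reinstall headers for the running kernel so module builds can find them
sudo pacman -S linux-headers

# With the dkms variant: rebuild all registered modules for this kernel
sudo dkms autoinstall

# Refresh the module dependency files that depmod maintains
sudo depmod -a "$(uname -r)"

# Then try loading the driver again
sudo modprobe vboxdrv
```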
The linuxXX-virtualbox-host-modules packages are the precompiled modules for each kernel and are normally what you would want installed. virtualbox-host-dkms dynamically builds the modules for each kernel you have installed.
I mean, if I choose virtualbox-host-dkms, it works regardless of my kernel version, and potential kernel updates in the future are handled under the hood, while all other choices require an exact match to the current kernel version and will cease to work after a kernel update, right?
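Right. On an Arch-based system the two options look roughly like this (the linuxXX package name is a Manjaro-style example, not an exact name):

```shell
# Option 1: precompiled modules, tied to one specific kernel series
# (example Manjaro-style name for a 4.19 kernel)
sudo pacman -S linux419-virtualbox-host-modules

# Option 2: dkms, rebuilt automatically for every installed kernel
sudo pacman -S virtualbox-host-dkms linux-headers
```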
mhwd-kernel will automatically update a newly installed kernel with any modules currently used in your existing kernel. For example, if you were to update from kernel 4.14 to 4.19, mhwd-kernel would automatically update 4.19 with any and all modules present in 4.14. How about that!
In my case, the issue was having virtualbox-4.1 installed alongside virtualbox-4.2. Once I uninstalled 4.1, I could run sudo /etc/init.d/vboxdrv setup and sudo modprobe vboxdrv just fine, as well as start VMs.
I tried following the instructions but was met with another error. I have secure boot enabled and would prefer to keep it that way. Is there any way to get Virtualbox working without giving up secure boot? Thanks
For me, running Fedora 38 Xfce and any 6.3 kernel, VirtualBox 7.0.6 would not work. I was getting the same error messages outlined in this thread. None of the suggested solutions worked either. If I booted to the last remaining 6.2 kernel on my system, VirtualBox worked well.
That solution works, but it gives up the easy automatic recompile of the modules on kernel updates that you get when using the package from RPM Fusion with akmods.
I suspect it also has problems with signing the kernel modules for use with secure boot as requested by the OP.
DKMS is used by pretty much every distribution other than Red Hat and co. It will do automatic rebuilds with every new kernel, and it will do the automatic signing as well. The RPM Fusion packages with akmods are packaged to the Fedora standard and tested in the Fedora environment. The Oracle packages may be less well tested on Fedora.
Secure boot is only active when booted in UEFI mode, so your system in legacy mode never checks whether the modules are signed and is always able to load them (just as it does when using UEFI with secure boot disabled).
VirtualBox itself does not care whether secure boot is in use. The kernel does when booted in UEFI mode, but not in legacy mode. Signing the kernel modules is only useful when using UEFI AND having secure boot enabled.
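To see which situation applies on a given machine, you can check for the kernel's EFI interface and, where mokutil happens to be installed, the secure boot state (mokutil availability is an assumption):

```shell
# The kernel exposes /sys/firmware/efi only when booted via UEFI
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
    # mokutil reports whether secure boot is enabled, if installed
    mokutil --sb-state 2>/dev/null || echo "mokutil not installed"
else
    echo "booted in legacy (BIOS) mode; secure boot cannot be active"
fi
```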
There was an issue awhile back with a kernel update that caused VirtualBox to stop working if installed from the VirtualBox site. From what I read on the vbox forum, vbox needs a patch to work, and they suggested installing the test version until the next version of vbox was released. I removed vbox and installed it from rpmfusion, which had the patch added. I have had no issues with vbox from rpmfusion, so you might try that. Just to add, I was having the same error you posted.
I use VirtualBox frequently to create virtual machines for testing new versions of Fedora, new application programs, and lots of administrative tools like Ansible. I have even used VirtualBox to test the creation of a Windows guest.
Never have I ever used Windows as my primary operating system on any of my personal computers or even in a VM to perform some obscure task that cannot be done with Linux. I do, however, volunteer for an organization that uses one financial program that requires Windows. This program runs on the office manager's computer on Windows 10 Pro, which came preinstalled.
This financial application is not special, and a better Linux program could easily replace it, but I've found that many accountants and treasurers are extremely reluctant to make changes, so I've not yet been able to convince those in our organization to migrate.
This set of circumstances, along with a recent security scare, made it highly desirable to convert the host running Windows to Fedora and to run Windows and the accounting program in a VM on that host.
The physical computer already had a 240GB NVMe M.2 storage device installed in the only available M.2 slot on the motherboard. I decided to install a new SATA SSD in the host and use the existing SSD with Windows on it as the storage device for the Windows VM. Kingston has an excellent overview of various SSD devices, form factors, and interfaces on its web site.
That approach meant that I wouldn't need to do a completely new installation of Windows or any of the existing application software. It also meant that the office manager who works at this computer would use Linux for all normal activities such as email, web access, document and spreadsheet creation with LibreOffice. This approach increases the host's security profile. The only time that the Windows VM would be used is to run the accounting program.
Before I did anything else, I created a backup image of the entire NVMe storage device. I made a partition on a 500GB external USB storage drive, created an ext4 filesystem on it, and then mounted that partition on /mnt. I used the dd command to create the image.
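The commands involved look roughly like this (the device names are examples; confirm yours with lsblk before running dd, since dd will happily overwrite the wrong device):

```shell
# Mount the prepared backup partition on the external USB drive
sudo mount /dev/sdb1 /mnt

# Image the whole NVMe device to a file on the backup drive
sudo dd if=/dev/nvme0n1 of=/mnt/windows-backup.img bs=4M \
    status=progress conv=fsync
```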
I installed the new 500GB SATA SSD in the host and installed the Fedora 32 Xfce spin on it from a Live USB. At the initial reboot after installation, both the Linux and Windows drives were available on the GRUB2 boot menu. At this point, the host could be dual-booted between Linux and Windows.
Now I needed some information on creating a VM that uses a physical hard drive or SSD as its storage device. I quickly discovered a lot of information about how to do this in the VirtualBox documentation and the internet in general. Although the VirtualBox documentation helped me to get started, it is not complete, leaving out some critical information. Most of the other information I found on the internet is also quite incomplete.
First, I installed the most recent version of VirtualBox on the Linux host. VirtualBox can be installed from many distributions' software repositories, directly from the Oracle VirtualBox repository, or by downloading the desired package file from the VirtualBox web site and installing it locally. I chose to download the AMD64 version, which is actually an installer and not a package. I used this version to circumvent a problem that is not related to this particular project.
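The two routes look roughly like this (the package and file names are examples; the version number in the .run file name will differ):

```shell
# Option A: install from a configured repository (Fedora example;
# the package name varies by repository)
sudo dnf install VirtualBox

# Option B: run the downloaded all-distributions installer
# (example file name; substitute the version you downloaded)
chmod +x VirtualBox-7.0.18-Linux_amd64.run
sudo ./VirtualBox-7.0.18-Linux_amd64.run
```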
The installation procedure always creates a vboxusers group in /etc/group. I added the users intended to run this VM to the vboxusers and disk groups in /etc/group. It is important to add the same users to the disk group because VirtualBox runs as the user who launched it and also requires direct access to the /dev/sdx device special file to work in this scenario. Adding users to the disk group provides that level of access, which they would not otherwise have.
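Adding a user to both groups is a one-liner (the user name is a placeholder):

```shell
# Add an example user to the vboxusers and disk groups;
# the new memberships take effect at the user's next login
sudo usermod -aG vboxusers,disk alice

# Verify the group membership
id -nG alice
```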
I then created a directory to store the VMs and gave it ownership root:vboxusers and 775 permissions. I used /vms for the directory, but it could be anything you want. By default, VirtualBox creates new virtual machines in a subdirectory of the home directory of the user creating the VM. That would make it impossible to share access to the VM among multiple users without creating a massive security vulnerability. Placing the VM directory in an accessible location allows sharing the VMs.
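The directory setup described above amounts to:

```shell
# Create a shared VM directory that all vboxusers members can write to
sudo mkdir /vms
sudo chown root:vboxusers /vms
sudo chmod 775 /vms
```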