I have been stuck on a netbook since last year due to budget constraints, but I finally managed to get a decent laptop from a very generous person. My Ubuntu installation on the netbook has all my development tools, libraries, a personal MediaWiki, other servers, and things I'm only reminded of when the command line complains. I can always SSH into my netbook, but I don't want to carry both computers all the time. Is it possible to create an image from my netbook's Ubuntu installation and use it out of the box in VMware Player on the new laptop?
Whether I use a new SSH key (generated from the console) or an older key, once the instance has booted, neither key lets me establish an SSH connection. It prompts for a password, and this is not due to a permissions issue on the host key side, just FYI. The old system password doesn't work with the legitimate old user, nor with "ubuntu", "root", or "ec2-user" in case the user was replaced (not sure if the import does this).
I tried SSHing from an older machine just in case, since 14.04 is on OpenSSH 6.6x, but that had no effect. Would someone point me in the right direction to resolve this as quickly as possible? Or is the only solution to fix the original image pre-AMI and make some specific adjustments?
I haven't used the VM import feature for a few years, but according to documentation, it shouldn't change the configuration of users present on the system, and it doesn't create the ec2-user (or "ubuntu" in the case of AWS's standard Ubuntu AMIs): -import/latest/userguide/prepare-vm-image.html#prepare-vm-image-linux
To be sure: you are able to log on to the system with a public key at the source, but the exact same key supplied to the exact same user is no longer accepted after launching the image as an EC2 instance?
If you want a quick workaround instead of troubleshooting the import tool, you could launch a brand new, temporary instance in the same AZ from a standard AWS AMI, detach the root volume from the imported instance, attach it to the temporary instance, and inspect and correct the credentials configuration there. Then just reattach the volume to the original instance and boot it up.
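If it helps, here is a rough sketch of that volume-swap workaround with the AWS CLI. All instance/volume IDs and device names below are placeholders, and the device naming on the helper instance depends on its instance type:

```
# Stop the imported instance and detach its root volume (IDs are placeholders)
aws ec2 stop-instances --instance-ids i-0aaa1111bbbb22222
aws ec2 detach-volume --volume-id vol-0ccc3333dddd44444

# Attach the volume to a temporary helper instance launched from a stock AMI in the same AZ
aws ec2 attach-volume --volume-id vol-0ccc3333dddd44444 \
    --instance-id i-0eee5555ffff66666 --device /dev/sdf

# On the helper instance: mount the root filesystem and inspect the credentials
sudo mount /dev/xvdf1 /mnt
sudo less /mnt/home/youruser/.ssh/authorized_keys
sudo less /mnt/etc/ssh/sshd_config
```

Once the authorized_keys or password configuration is fixed, detach the volume from the helper, reattach it to the original instance as its root device, and start it again.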
Installing on an NVMe drive resulted in the following at first:
Ubuntu Core started to bootstrap but got stuck at Stopped/Started Getty,
somewhat like in "Ubuntu Core 18 on Raspberry Pi 3 doesn't bootstrap" and -serial-getty-in-ubuntu-core-18
After the forced restart I saw
and was asked to configure the system, but with an error echoed beforehand.
The configuration failed after entering my Ubuntu account email.
It could be that the setup worked once because I had re-downloaded the install images. Honestly, I cannot remember what made it work.
After hours and hours, I was able to boot into Ubuntu Core. I got the impression that some errors were echoed during start-up, and because of the resize trick I did not trust the install.
I tried to refresh the snap packages, but it always failed with EOF.
In the end, I repeated the install using a SATA drive, which gives me the impression that no errors are echoed during start-up, although snap refresh still fails with EOF; that could be due to my currently very slow internet connection.
I was only able to refresh snapd.
I suspect part of the problem is that you were using the image specifically tailored for an Intel NUC. You might have a better experience using the image from the KVM page, which is designed to run on a virtual machine and is more generic in nature, although it is not specifically a VMware image, so there might still be incompatibilities.
The question is why it does not detect /dev/sda3, labelled writable, although it is seen by the Ubuntu live image. Moreover, /dev/sda1 and /dev/sda2, labelled system-boot, are detected as the system starts booting.
It could be that we are missing kernel modules in the initramfs that would allow mounting/using this device from the initramfs, but these kernel modules are loaded in the live image. Do you know precisely what kernel module is necessary for your writable partition device?
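For reference, from the live image you can usually find out which driver the disk relies on with something like the following (the device name is just an example):

```
# Show the storage controllers and which kernel driver each one uses
lspci -k | grep -A3 -i -e storage -e scsi -e sata -e nvme

# Walk the device chain for the disk; the DRIVERS== lines show the modules involved
udevadm info --attribute-walk --name=/dev/sda | grep DRIVERS

# Confirm the suspected module is loaded in the live environment
lsmod | grep -i -e mptspi -e mptsas
```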
The LSI Logic SAS driver (mptsas) and LSI Logic Parallel driver (mptspi) for SCSI are no longer supported. As a consequence, the drivers can be used for installing RHEL 8 as a guest operating system on a VMware hypervisor to a SCSI disk, but the created VM will not be supported by Red Hat.
But to boot from such a device, the drivers need to be included in the initramfs. Ubuntu Core uses a pre-generated initrd, so the drivers need to be added there at generation time (I think @ijohnson said that above already).
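As a quick sanity check on a classic Ubuntu system, you can list an initrd's contents and grep for the module; the path below is only illustrative, since on Ubuntu Core the initrd actually ships inside the kernel snap rather than in /boot:

```
# Look for the VMware SCSI drivers inside a generated initrd
lsinitramfs /boot/initrd.img | grep -i -e mptspi -e mptsas -e vmw_pvscsi
```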
I am a HUGE fan of HashiCorp Packer and I have been using it for a number of years across many different projects, including the VMware Event Broker Appliance (VEBA) solution. While it can certainly feel daunting at first, as can just about anything new, I typically point folks over to Ryan Johnson's fantastic Packer Examples for VMware vSphere project as a starting point, where you can find working Packer examples across a number of popular OS distributions for both Windows and Linux.
Most recently, I was helping out a few colleagues who were interested in automating the build of an Ubuntu Desktop image that could then be exported to an OVF/OVA. Of course, my recommendation was for them to take a look at Ryan's project, as they should be able to augment the existing Ubuntu Server 22.04 example. Interestingly enough, while I always recommend Ryan's Packer example repo, I had not personally used it myself, primarily because of the existing customization in my own Packer builds, which includes the use of custom OVF properties, which you can read more about HERE, HERE, and HERE.
Ryan's project is extremely comprehensive, and while things should just work if you use the default builds, if you wish to make tweaks I can certainly understand that you could feel overwhelmed, which is exactly how I felt when trying to figure out how to augment the existing Ubuntu Server 22.04 build.
While I do have experience using Packer, it took me a few attempts, as I ran into some setup issues on my macOS system and ended up deploying an Ubuntu 22.04 VM to use as my build host. The required change to go from Ubuntu Server to Ubuntu Desktop was minimal, but you do need to understand the project layout and ultimately how the repo has been set up, which includes the use of the Ansible Packer provisioner, something I had not used before.
I wanted to put together this blog post not only as a reference for myself, but also for anyone who wants to start using Packer and Ryan's awesome repo but needs a bit more guidance when performing further customization.
Step 1 - Download the latest Ubuntu Server 22.04 ISO ( -22.04.2-live-server-amd64.iso) and install it as a VM. While you can certainly run the Packer build from your desktop system, depending on how it has been set up you might still run into issues, and having a dedicated build host is not necessarily a bad thing.
Step 5 - Create a new build configuration, which allows you to isolate each build separately, including credentials and build environments. In the example, I have named this jammy_desktop, but you can use any name; it simply creates a directory with the Packer configuration files that you will need to edit.
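If I remember the repo layout correctly (treat the exact invocation as an assumption and double-check the project's README), the bundled config script takes the name of the new configuration:

```
# Creates a config/jammy_desktop/ directory containing copies of the *.pkrvars.hcl files to edit
./config.sh jammy_desktop
```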
Step 6 - Depending on when you cloned Ryan's repo, you may want to use newer versions of the OS ISO images, and that requires you to update the Packer configuration file to include the directory where the ISO image will be stored on your vSphere datastore, as well as the filename and SHA-256 hash. In my example, I am using the latest Ubuntu Server 22.04.2 image; I have tweaked the filename to match the default value, but the directory and the SHA-256 hash are different values. You can use the sha256sum utility to generate the required hash.
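For example, to generate the checksum for the ISO used here:

```
# Prints the SHA-256 hash to paste into the Packer variable file
sha256sum ubuntu-22.04.2-live-server-amd64.iso
```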
Step 7 - Since we want an Ubuntu Desktop image rather than the default Ubuntu Server, which does not include a graphical desktop, we will need to adjust the cloud-init user-data configuration to include the additional ubuntu-desktop package which will give us the desktop experience.
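As a minimal illustration (this is only the relevant fragment, not the full user-data template from the repo), the change boils down to adding the package to the autoinstall section:

```yaml
#cloud-config
autoinstall:
  version: 1
  # Pull in the full desktop experience on top of the server base install
  packages:
    - ubuntu-desktop
```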
Note: If you need to make other OS customizations, you can certainly apply the changes here, or look at using the Ansible Packer provisioner, which is also used within the project. Do some additional research if you intend to use Ansible rather than the native cloud-init interface that Ubuntu supports.
Additionally, we need to provide a password for the build account, including the SHA-512 hash of that password. To generate the hash, you can run the following command, specify the password of your choice, and it will output the SHA-512 hash:
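The exact command did not survive in this excerpt, so treat the following as one common option (mkpasswd comes from the whois package on Ubuntu):

```
# Prompts for a password and prints its SHA-512 crypt hash
mkpasswd -m sha-512

# Alternative using OpenSSL
openssl passwd -6
```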
Step 11 - Now we need to edit the common.pkrvars.hcl file and specify the vSphere datastore where your ISO images will be stored by editing the common_iso_datastore variable. Ryan's repo uses both the Content Library and OVF providers to allow multiple outputs of the final image. If you do not have, or do not intend to output the Ubuntu image into, an existing vSphere Content Library, then make sure you set the common_content_library_skip_export variable to true, or else you will hit an error at the very end of the build.
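For instance, the relevant lines in common.pkrvars.hcl end up looking something like this (the datastore name is a placeholder):

```hcl
// Datastore where the Ubuntu ISO was uploaded
common_iso_datastore = "example-datastore"

// Skip the Content Library export if you are not using one,
// otherwise the build errors out at the very end
common_content_library_skip_export = true
```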
Step 12 - The last configuration file you will need to edit is vsphere.pkrvars.hcl, which contains information about your vSphere environment, including the vCenter Server, credentials, and resources Packer will use. This should be pretty straightforward; if you are using the default self-signed TLS certificate for vCenter Server, make sure you set the vsphere_insecure_connection variable to true.
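Here is a trimmed illustration of vsphere.pkrvars.hcl; apart from vsphere_insecure_connection, which is mentioned above, the variable names and values are placeholders, so use the ones from the repo's example file:

```hcl
// vCenter Server endpoint and credentials (illustrative names and values)
vsphere_endpoint            = "vcsa.example.com"
vsphere_username            = "administrator@vsphere.local"
vsphere_password            = "VMware1!"

// Required when vCenter uses the default self-signed TLS certificate
vsphere_insecure_connection = true
```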
Step 13 - If you recall from Step 6, we specified the name of the ISO and the directory it would be found in. You now need to ensure this matches your vSphere environment by uploading the Ubuntu ISO image that Packer will use to build your image.
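You can do this through the vSphere Client, or with a tool such as govc if you prefer the command line; govc is my suggestion here rather than something the repo requires, and the datastore and path are placeholders that should match the values from Step 6:

```
# Upload the ISO into the directory/datastore referenced in the Packer configuration
govc datastore.upload -ds=example-datastore \
    ubuntu-22.04.2-live-server-amd64.iso iso/linux/ubuntu-22.04.2-live-server-amd64.iso
```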