This document describes how I set up a Mikrotik Cloud Hosted Router (CHR) on a VPS from Vultr.com. I'm using the CHR as an OpenVPN server, however this document will focus on getting the CHR up and running.
Like most VPS providers, Vultr offers a library of ISO images you can use to install operating systems on your servers. Mikrotik doesn't have an "installer" for CHR, just a pre-made disk image, so we're going to download that image from Mikrotik's web site, upload it to Vultr as a "snapshot", and create a new VM using that snapshot.
By itself this does work, however it gives you a CHR with only 128 MB of usable storage, which is kinda strange - if you're paying for a VM with 20 GB of space, you would expect to have 20 GB of space available within the VM, right?
When installing CHR on other systems (such as VirtualBox or VMWare) you would normally resize the virtual disk before starting the VM for the first time, and the CHR image would expand the filesystem to use the rest of the un-allocated space within the disk. This filesystem expansion only happens the very first time a VM is started from a Mikrotik image.
One option would be to resize the disk image before uploading it, however in this case that would mean uploading 20 GB of empty space, which is a waste of time, bandwidth, and disk space (both for you and for Vultr). It's bad enough that you have to upload 128 MB when you really only need about 50 MB (if only Vultr would allow qcow2 images...)
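If you did want to pre-grow the image (for example when building for VirtualBox or VMWare instead), note that a sparse resize costs almost nothing locally - it's the upload that hurts. A sketch, assuming the raw image is named chr-6.38.5.img:

```
# Grow the raw image to 20 GB. The file becomes sparse, so it uses
# very little extra space on your local disk - but a snapshot upload
# would still have to transfer all 20 GB of (mostly empty) data.
qemu-img resize -f raw chr-6.38.5.img 20G
```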
So instead, we're going to let the VM start normally, shut it down, reboot into Finnix (which is a "live CD", the ISO is in Vultr's library), use that to manually resize the partition and filesystem, and then reboot into the CHR normally.
These directions assume you're using a normal Unix-type shell, within a terminal window on a workstation with a GUI (i.e. a Linux or Mac workstation). If you're using Windows, you should be able to use something like SecureCRT or PuTTY instead of the ssh commands shown below, but I can't really help with that (I haven't used Windows on a daily basis since Windows XP SP1.)
Visit Mikrotik's download page. Under the "Cloud Hosted Router" section you will be able to download disk images for VMWare, VirtualBox, Microsoft Hyper-V, and a "Raw disk image". Download the "Raw disk image" for the desired version. I recommend using the "Current" version.
Vultr's web site does not offer a way to upload a snapshot file directly, so you will need to first upload the resulting .img file to a web server in order to have a URL from which Vultr's servers can download it.
Sign into my.vultr.com and select the "Snapshots" tab. Click the "Add Snapshot" button, and scroll down to "Upload snapshot from remote machine". Enter the URL from which your unzipped .img file can be downloaded (ending in something like "-6.38.5.img") and click the "Upload" button.
Server Type - Immediately below the "(2) Server Type" banner is a row of options - "64 bit OS", "32 bit OS", "Application", etc. The last item on this list is "Snapshot" (you may need to click the blue ">" to the right of the list in order to see it.) Select "Snapshot", and then choose the snapshot file you just uploaded.
After you've read the security section and know what you'll need to do as soon as the VM is working, go ahead and click the blue "Deploy Now" button at the bottom. The browser will return to the list of your VMs.
While Vultr is setting up the VM, it will show up on the list as "Installing". An IP address may or may not be shown next to the new VM - if not, reload the page once or twice until the IP address shows up.
Note that it may take 30-45 seconds after the VM shows "Running" before the SSH service will allow you to connect. If your ssh command just hangs there for more than about ten seconds, cancel it and try again.
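The first login uses the default RouterOS account: user "admin" with an empty password. Assuming your VM's address is 203.0.113.10 (substitute your VM's actual IP):

```
ssh admin@203.0.113.10
# if this hangs for more than about ten seconds,
# press Ctrl-C and run it again
```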
After doing this, make sure that nobody else managed to get in there before you changed the password. If /user active print shows more than one session, somebody else is in there. RouterOS doesn't have a way to forcibly disconnect such a user (or if it does, I'm not aware of it) so if this happens, your best bet may be to destroy the VM and create a new one.
You will almost certainly see messages popping up about login failures from random IPs around the world. This is because when the CHR starts up for the first time, it is more or less "wide open", with telnet, ftp, ssh, and a few other services listening on their default ports, and no firewall rules at all. There are thousands of black-hat hackers out there, scanning large blocks of IP addresses for devices with open services, and they will try to connect to your CHR as soon as it boots up.
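A minimal first-login lockdown might look like the following, assuming you manage the router over SSH only (the password shown is obviously a placeholder - pick your own, and keep winbox enabled if you use it):

```
# change the admin password immediately
/user set admin password="a-long-random-password"
# turn off the services you don't need, leaving ssh available
/ip service disable telnet,ftp,www,api,api-ssl
```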
After changing the SSH host key size, you also need to re-generate the host key. (Note that you may not see the "y" echoed back to you immediately. It took about fifteen seconds to generate the new key before the "y" appeared in the terminal window while I was writing this document.)
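On the RouterOS version I was using, the two steps look roughly like this (the key size is my choice, not a requirement; the regenerate command asks for a y/n confirmation, which is where the delayed "y" echo mentioned above happens):

```
/ip ssh set host-key-size=4096
/ip ssh regenerate-host-key
```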
The very first time a CHR boots up from a new Mikrotik image, it will expand the partition to use any empty space at the end of that image. When building a CHR on some other virtualization platform, the instructions say to resize the disk before starting it up the first time. However, Vultr doesn't offer a way to do this, so we're going to have to resize it by hand.
Back on the my.vultr.com web site, go back to the list of your VMs. After about 45 seconds, if the VM still says "Running", click on the "..." menu to the right and select "Server Stop" to make sure it shuts down.
On the left, click on "Custom ISO". You will see a drop-list of ISO images in Vultr's library. Select the "Finnix" option on this list (currently it says "Finnix - 111 x86" but the version number may change in the future), then click the blue "Attach ISO and Reboot" button.
After about 20 seconds, click the "View Console" icon at the top right (it looks like a monitor with ">_" on it). This will open a new browser window with the VM's console. You should see the VM booting up, or you may see the Finnix boot menu with the 60-second countdown going.
In the case of Vultr, the second partition already covers the entire disk, so we don't need to resize it. (I will admit, I'm not exactly clear how the partition was resized, unless Vultr did it when importing the snapshot - and if this is the case, I don't know why RouterOS didn't resize the filesystem when it started the first time.)
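You can verify the partition layout for yourself from the Finnix shell. On the setups I've seen, the CHR's virtual disk shows up as /dev/vda, but check which device node your system actually uses before touching anything:

```
# show the partition table on the CHR's virtual disk
parted /dev/vda print
```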
Check the filesystem. This command will find and fix any errors it finds. (If you don't do this, and the filesystem wasn't cleanly un-mounted before, the "resize2fs" command below will refuse to do anything.)
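Assuming the CHR's data filesystem is on the second partition of /dev/vda (verify this on your own system first), the check might look like:

```
# force a full check even if the filesystem is marked clean,
# repairing any errors it finds along the way
e2fsck -f /dev/vda2
```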
Vultr's directions tell you to configure your VM's ethernet interface to use what they refer to as "auto-configuration", which is "IPv6 Stateless Address Autoconfiguration" or "SLAAC" (see RFC 4862 for details), to get its IPv6 address. In most cases this would be fine, however this VM is a router, and Mikrotik is strictly following the rules, which say that IPv6 routers (devices which receive and forward packets which are not addressed to themselves) are not supposed to do SLAAC.
If you enable IPv6 for a VM, Vultr will assign a /64 block to that VM, and tell you what IPv6 address it would receive if it were doing SLAAC. The one thing it doesn't tell you is what your IPv6 default gateway should be.
What I had to do was use the Mikrotik's packet sniffer to capture a few minutes' worth of IPv6 traffic on the ether1 interface, download the resulting file, and use Wireshark to find and inspect the RA packet which the CHR is receiving but ignoring.
This command configures the Mikrotik "sniffer" tool to capture any IPv6 traffic sent to ff02::1, which is the multicast "All Nodes" address. The RA packets we're looking for are sent from the router, to this address. These packets will physically arrive on the ether1 interface, even though RouterOS ignores them.
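The sniffer setup was roughly the following on the RouterOS version I used - parameter names may differ slightly on other versions, and the capture file name is just my choice:

```
/tool sniffer set filter-interface=ether1 filter-ipv6-address=ff02::1/128 file-name=ra-capture
/tool sniffer start
# ...wait a few minutes for a Router Advertisement to arrive...
/tool sniffer stop
```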
This shows you an automatically updating "directory listing" of the capture file which is being created. At first the file will be 24 bytes, but as matching packets are received, they will be added to the file, and you will see the file's size grow, usually by about 150-200 bytes per packet.
Then, add an IPv6 default route, pointing to the packet's source address. Note that because the gateway address is a link-local address, you need to add "%ether1" to the end of the address in order to specify the interface.
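As a sketch - the fe80::1 below is a placeholder, so substitute the actual source address of the RA packet you found in Wireshark:

```
# gateway is link-local, so the %ether1 interface suffix is required
/ipv6 route add dst-address=::/0 gateway=fe80::1%ether1
```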
Unlike GNS3, EVE-NG does not have an import wizard for virtual devices. In EVE-NG everything depends on the correct naming of files and folders in the /opt/unetlab/addons/qemu/ directory. The exact file and folder naming is described in the EVE-NG documentation at eve-ng.net/index.php/documentation/qemu-image-namings/.
According to the table above, the MikroTik RouterOS folder must start with the "mikrotik" prefix, and the qcow2 files must be named hda and hdb. Besides the required prefix, the folders must also include the version of the device being added. For MikroTik RouterOS, the correct naming looks like this:
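As a sketch (the version number 6.48.6 here is just an example - use whatever CHR version you downloaded), the steps on the EVE-NG host would be roughly:

```
# folder name must start with "mikrotik" and include the version
mkdir /opt/unetlab/addons/qemu/mikrotik-6.48.6
# the qcow2 disk image must be named hda.qcow2
cp chr-6.48.6.qcow2 /opt/unetlab/addons/qemu/mikrotik-6.48.6/hda.qcow2
# fix ownership/permissions so EVE-NG can see the new image
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
```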
If you want to install from an ISO file instead, you can refer to the instructions in this link. In our article, using the ready-made qcow2 file is much simpler. You can read the article alongside the video at the end of the article to do it faster and more accurately.
By default, opening a console window uses the VNC protocol. You can choose another protocol in the options when adding a new node. If you use VNC, you can download TightVNC or UltraVNC Viewer. In our lab, we use both applications.
I have been deleting, recreating, and juggling partitions like a madman. Tearing things down to put a RAID and encryption underneath the Thin LVM pool was easy, and I left behind around 200 GB for setting up lvmcache down the road while I was doing it.
I was slightly bummed out that the Proxmox installer set aside 96 GB for itself. That is 10% of my little Teamgroup NVMe, but storage is cheap. I got more bummed out when I learned that this is where it was going to store my LXC containers and any qcow2 files I might still want to use.