when a security expert publishes exploit research, anyone can apply such an exploit;
someone will build a container image that performs the exploit AND provides a Linux root shell;
using a root shell, someone may leave a permanent backdoor/vulnerability in your RouterOS system, even after the container image is removed and the container feature is disabled;
if a vulnerability is injected into the primary or secondary RouterBOOT (or vendor pre-loader), then even Netinstall may not be able to fix it;
The container package is compatible with the arm, arm64, and x86 architectures. Using the remote-image functionality (similar to docker pull) requires a lot of free space in main memory; boards with 16MB SPI flash may use pre-built images on USB or other disk media instead.
src= points to a RouterOS location (it could also be src=disk1/etc_pihole if, for example, you decide to keep configuration files on external USB media); dst= points to a defined location inside the container (consult the container's manual/wiki/GitHub for information on where to point). If the src directory does not exist on first use, it will be populated with whatever the container has in the dst location.
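On RouterOS, such a mount can be sketched as follows (the Pi-hole paths are taken from the example above; adjust name, src and dst to your container):

```
# Map a RouterOS directory (here on external USB media) into the container
/container/mounts/add name=etc_pihole src=disk1/etc_pihole dst=/etc/pihole
```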
If you wish to see container output in the log, add logging=yes when creating the container. root-dir should point to an external drive formatted as ext3 or ext4; using internal storage for containers is not recommended.
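Putting both options together, a container created with logging enabled and its root-dir on an external drive might look like this (the image name, disk path, and veth name are illustrative):

```
# Assumes a veth interface "veth1" already exists and disk1 is ext3/ext4-formatted
/container/add remote-image=pihole/pihole:latest interface=veth1 \
    root-dir=disk1/pihole logging=yes
```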
There are multiple ways to add containers:
These links are the latest as of 16 June 2022. Please make sure to download the version that matches your RouterOS device's architecture.
Update the sha256 sum from Docker Hub to get the latest image files.
With MikroTik's recent release of container support in RouterOS, you may have heard that adding Docker-style containers to the router opens up some interesting possibilities, such as managing microservices at the web level. But how do you install and use this new addition?
In brief, a container in MikroTik is an implementation of Linux containers that enables users to run containerized environments within the MikroTik Router Operating System. One important thing to know is that you must use a trusted hosting service provider: if your MikroTik CHR is compromised in any way, containers become an easy vector for attackers who want to install malicious software on the router and across your network.
Pi-hole is a DNS-based ad blocker. It can block ads on every device in your network. Since RouterOS v7.5, you can integrate it into your MikroTik router. This tutorial applies to all ARM, ARM64, and x86-based MikroTik devices.
AdGuard Home is a DNS-based ad blocker that can block advertisements on your network just like Pi-hole. It provides even more features, such as parental control, service blocking (e.g. blocking certain popular social media, shops, streaming platforms), and protection from malware websites.
You can run Docker containers in our Cloud Hosted Routers as well. Docker works in our Standard, Licensed, and Dedicated MikroTik server plans. Choose the appropriate plan for your needs.
You can also watch our video guide (applies to Pi-hole installation):
On entry-level MikroTik routers the operating memory is not enough for Docker containers to work properly, so we will install the Docker container on a virtual server running RouterOS. You can choose the optimal configuration of a virtual server with RouterOS preinstalled.
The standard set of installed RouterOS packages does not include the container package. To add it, go to the official MikroTik website and, in the software section, download the Extra packages archive, first selecting the architecture on which RouterOS is installed.
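After uploading the container.npk package and rebooting, the container feature must also be enabled in device-mode; a minimal sketch:

```
# Enable container support (on physical devices this requires an extra
# confirmation step, e.g. a power cycle or mode-button press)
/system/device-mode/update container=yes
```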
The last thing we need to do is forward the port of the Pi-hole web interface to an external port of our server. Replace server_ip with the IP address of your router or server, specify in to-addresses the IP address you assigned to the Docker container, and set dst-port to the port you want to open for access to the Pi-hole admin panel.
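A rule along these lines should work (all addresses and ports here are placeholders, as described above):

```
# server_ip = your router/server address, 172.17.0.2 = assumed container address,
# 8080 = externally exposed port for the Pi-hole admin panel (port 80 inside)
/ip/firewall/nat/add chain=dstnat dst-address=server_ip protocol=tcp dst-port=8080 \
    action=dst-nat to-addresses=172.17.0.2 to-ports=80
```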
The issue is I cannot get hostnames instead of IP addresses using this device (MikroTik hEX). I am running the Pi-hole Docker container in Unraid and blocking works perfectly. Is there any configuration I need to do? In MikroTik I have a DHCP lease script which populates IP addresses with their hostnames.
The GitHub repo you provided says that Prometheus is optional, but that is not correct. In this case, the repo offloads data into an independent Prometheus instance, and that instance then sends the data to Grafana (Cloud). So, to use the examples provided, you will need to set up a Prometheus database and then add it as a data source in Grafana Cloud.
I want to collect MikroTik statistics, which MKTXP prepares in the Prometheus format. My MKTXP collector runs in a Docker container with IP 172.18.0.5 on port 49090. And then I have another Docker container, Grafana Agent.
I have an Ubuntu server with IP 192.168.10.144; on this server I have a Docker network using the IP range 10.0.0.0/24. I need to connect my computer to some services running in Docker, so I've added a route in MikroTik:
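Using the addresses from the post, the MikroTik route would look like this:

```
# Reach the Docker network 10.0.0.0/24 via the Ubuntu host
/ip/route/add dst-address=10.0.0.0/24 gateway=192.168.10.144
```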
Your PC has IP address 192.168.10.53? You send traffic to host 10.0.0.77 via your default gateway, 192.168.10.1, and your router sends you back ICMP redirect packets. You can read more about that here: -information-protocol-rip/13714-43.html
Today we will be discussing an exciting feature of Unimus - support for remote networks and distributed polling. Managed devices do not always have to be directly reachable by the Unimus Server. In a scenario where our devices and Unimus are separated by a WAN it would make sense to utilize a remote agent. All client devices would be polled locally which eliminates the need for individual direct server-device connections. This saves resources such as bandwidth and processing power and simplifies administration as you only need to maintain connectivity to a single host in each remote location. We would also get vastly improved scalability, enhanced security and fault isolation.
To extend Unimus functionality to a remote network we would use Unimus Core. A Core is the brains of Unimus: like the embedded Core on any Unimus Server, it performs network automation, backups, change management and more on managed network devices. Acting as a remote poller, Unimus Core communicates with the Unimus Server over a secure TCP connection conforming to modern industry standards. We can install Unimus Core on any supported OS, run a portable version, or run a container image. Find out more on our wiki.
Fairly recently (August 2022) MikroTik added container support on their RouterOS. This introduces a nifty new way of deploying Unimus Core directly on an edge router, thus reducing the number of devices required in the network. Let's have a look at how to set this up.
The HQ router is also configured for destination NAT, a.k.a. port forwarding, directing incoming TCP 5509 and TCP 8085 traffic to the Unimus Server. TCP 5509 allows the inbound remote Core connection. TCP 8085 is not strictly required for our demonstration; we open it simply for remote access to the HTTP GUI.
The left side represents a remote network. Branch router, a MikroTik RB5009UG+S+, is our edge router capable of running containers. Connected on LAN side is the device we want to manage, another MikroTik RouterBOARD - Branch switch. Branch router supports containers and will run Unimus Core in one.
Our Unimus Core container will have its own virtual ethernet interface assigned to use for communication outside. Although this 'veth' could be added to the local bridge connecting to Branch switch, it makes more sense security-wise to add it to a separate 'containers' bridge. This way any container traffic goes through the routing engine and firewall, where it can be subject to policies.
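A minimal sketch of that layout, assuming a 172.18.0.0/24 subnet for container traffic (all names and addresses are illustrative):

```
# Dedicated bridge for container traffic, so it passes the routing engine/firewall
/interface/bridge/add name=containers
/interface/veth/add name=veth1 address=172.18.0.2/24 gateway=172.18.0.1
/interface/bridge/port/add bridge=containers interface=veth1
/ip/address/add address=172.18.0.1/24 interface=containers
```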
Edge router WAN ports are reachable via internet simulated by a local network. This is sufficient for our testing purposes as it simulates indirect connectivity between the Unimus Server and the remote Core.
Once we are all set, we can begin by navigating to Zones and 'Add new Zone'. This Zone will represent our remote location. Enter Zone name, ID and Remote core connection method and hit 'Confirm' to proceed.
Variables are defined in key-value pairs. These are needed to point Unimus Core to the Unimus Server and input the Access Token we got earlier. Additionally, we can set the timezone and memory constraints for Java and there is an option to define volumes' mount points for data persistence. Details on GitHub.
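On RouterOS, such key-value pairs are defined as container envs. The variable names below are assumptions for illustration; check the Unimus Core page on GitHub for the exact names your image expects:

```
# Hypothetical variable names with placeholder values
/container/envs/add name=unimus_env key=UNIMUS_SERVER_ADDRESS value=203.0.113.10
/container/envs/add name=unimus_env key=UNIMUS_SERVER_PORT value=5509
/container/envs/add name=unimus_env key=UNIMUS_CORE_ACCESS_KEY value=<your-access-token>
/container/envs/add name=unimus_env key=TZ value=UTC
```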
We will be pulling the latest Unimus Core container image straight from Docker Hub at registry-1.docker.io. You could also import one from a PC (via docker pull/save) or build your own. The remote Core needs to be the same version as the embedded Core on the Unimus Server to avoid any compatibility issues between versions, so just make sure you grab a suitable version.
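A sketch of the pull, with a placeholder image name (substitute the actual Unimus Core repository and a tag matching your Unimus Server version; the veth name and paths are illustrative):

```
# Point the container feature at Docker Hub, then pull the image
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/pull
/container/add remote-image=<unimus-core-image>:<version> interface=veth1 \
    root-dir=disk1/unimus-core
```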
A destination NAT rule is necessary for the Core connection traffic. We need to translate the destination address of incoming remote Core traffic to the Unimus Server IP address. TCP port 5509 is used by default.
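With the topology above, the rule could look like this (the HQ WAN address and the Unimus Server address are placeholders):

```
# Translate incoming remote Core traffic on TCP 5509 to the Unimus Server
/ip/firewall/nat/add chain=dstnat dst-address=198.51.100.1 protocol=tcp \
    dst-port=5509 action=dst-nat to-addresses=192.168.1.10
```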
Unimus 2.5.0 is out! This is a major release with many new features and improvements. In this article we review what's new - from new functionality to support for new devices, fixes for various bugs and issues, and security improvements.
This docker-compose file uses the latest Routinator image from Docker Hub and allows access to port 3323 (for RPKI checking) and port 9556 (for the web client).
A volume is also created for the cache files.
Once this is done, you can run docker-compose up -d routinator to get the container running; it might take a few minutes to get the required files ready. If you want to see stdout while it prepares, just omit the -d flag.
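Since the compose file itself is not shown here, a sketch of what it describes (image, the two ports, and a cache volume; the cache path inside the container is an assumption):

```
version: "3"
services:
  routinator:
    image: nlnetlabs/routinator:latest
    ports:
      - "3323:3323"   # RTR, for RPKI checking
      - "9556:9556"   # HTTP web client
    volumes:
      - routinator-cache:/home/routinator/.rpki-cache   # cache files
volumes:
  routinator-cache:
```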
Hi, I experienced this exact same problem and never found a solution for it.
Every time I add another container to my macvlan or restart an existing one, my router receives another DHCP lease with a random MAC address and the hostname of my Docker host.
This makes my docker host unreachable by its name, since DNS now has several entries for all these random MAC addresses.