mender client not recognizing active root partition

Niall Parker

Aug 25, 2018, 1:55:50 AM8/25/18
to Mender List mender.io
Hello,

I'm getting started with Mender on a buildroot-generated image on a BBB. Using the latest buildroot master, the mender client seems to build fine and run; I can set the active partition manually through mender_boot_part and U-Boot, but when I try to install a mender artifact it complains:

ERRO[0000] update image installation failed: Active root partition matches neither RootfsPartA nor RootfsPartB.  module=installer

This only occurs when the booted OS is the one built by buildroot ... if I boot into the Yocto image, it can install the mender artifacts just fine. The buildroot version can commit an update started from Yocto but can't start an update (using either a yocto.mender or a buildroot.mender). The default images are of course quite different between Yocto and Buildroot, but both display the root partition as /dev/root via df and root=PARTUUID=<blah> via /proc/cmdline.

Yocto shows the device /dev/mmcblk0p3 with mount while buildroot only shows /dev/root ... it seems Yocto is using the util-linux version of mount while buildroot uses busybox ... I guess that may well be the issue.

Maybe another rebuild with the util-linux mount will fix it; has anyone gotten the busybox mount working?
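For reference, when mount and df only report /dev/root, the underlying block device can usually be recovered by matching device numbers; a rough Python sketch (illustrative only — the function name is made up, and it assumes / sits directly on a block device):

```python
import os
import stat

def find_root_device(dev_dir="/dev"):
    """Best-effort lookup of the block device backing '/'.

    When mount/df only report '/dev/root', the real device node still
    has the same major:minor number as the root filesystem's st_dev.
    Illustrative sketch only; it assumes '/' sits directly on a block
    device (it returns None under overlayfs, containers, etc.).
    """
    root_dev = os.stat("/").st_dev
    for name in sorted(os.listdir(dev_dir)):
        path = os.path.join(dev_dir, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        if stat.S_ISBLK(st.st_mode) and st.st_rdev == root_dev:
            return path
    return None

print(find_root_device())
```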



Mirza Krak

Aug 25, 2018, 2:03:17 AM8/25/18
to Mender List mender.io


On Sat, 25 Aug 2018, 07:55 Niall Parker, <ve7...@gmail.com> wrote:
Hello,
Could you paste the content of /etc/mender/mender.conf?

/ Mirza 

Niall Parker

Aug 25, 2018, 2:16:24 AM8/25/18
to Mender List mender.io, mirza...@northern.tech
Using the default:

{
  "InventoryPollIntervalSeconds": 1800,
  "UpdatePollIntervalSeconds": 1800,
  "RetryPollIntervalSeconds": 300,
  "RootfsPartA": "mmcblk0p2",
  "RootfsPartB": "mmcblk0p3",
  "ServerCertificate": "/etc/mender/server.crt",
  "ServerURL": "https://docker.mender.io",
  "TenantToken": "dummy"
}

An update on using the util-linux mount ... while it now shows me the underlying root device, the mender update fails with the same complaint as before:

# mount
/dev/mmcblk0p2 on / type ext4 (rw,relatime,data=ordered)
devtmpfs on /dev type devtmpfs (rw,relatime,size=240176k,nr_inodes=60044,mode=755)
proc on /proc type proc (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /tmp type tmpfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
sysfs on /sys type sysfs (rw,relatime)
none on /sys/kernel/config type configfs (rw,relatime)
/dev/mmcblk0p5 on /data type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mmcblk0p1 on /uboot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)

debug output from mender:

DEBU[0000] Reading Mender configuration from file /etc/mender/mender.conf  module=config
DEBU[0000] Read data from device manifest file: device_type=beaglebone  module=mender
DEBU[0000] Found needed line: device_type=beaglebone     module=mender
DEBU[0000] Current manifest data: beaglebone             module=mender
DEBU[0000] Starting device update.                       module=rootfs
INFO[0000] Performing remote update from: [http://192.168.99.87:8000/rootfs-3.mender].  module=rootfs
DEBU[0000] Client initialized. Start downloading image.  module=rootfs
DEBU[0000] Received fetch update response &{200 OK 200 HTTP/1.0 1 0 map[Last-Modified:[Sat, 25 Aug 2018 04:05:45 GMT] Server:[SimpleHTTP/0.6 Python/2.7.12] Date:[Sat, 25 Aug 2018 06:07:50 GMT] Content-Type:[application/octet-stream] Content-Length:[12113920]] 0x107f0540 12113920 [] true false map[] 0x1079c500 <nil>}+  module=client_update
DEBU[0000] Image downloaded: 12113920 [&{0x107f0540 0x107174e0 0x1079c480 0 12113920 0 0}] [<nil>]  module=rootfs
Installing update from the artifact of size 12113920
DEBU[0000] checking if device [beaglebone] is on compatibile device list: [beaglebone]
  module=installer
DEBU[0000] installing update rootfs.ext4 of size 62914560  module=installer
DEBU[0000] Trying to install update of size: 62914560    module=device
DEBU[0000] Have U-Boot variable: mender_boot_part=2      module=bootenv
DEBU[0000] List of U-Boot variables:map[mender_boot_part:2]  module=bootenv
DEBU[0000] Setting active partition from mount candidate: /dev/mmcblk0p2  module=partitions
ERRO[0000] update image installation failed: Active root partition matches neither RootfsPartA nor RootfsPartB.  module=installer
ERRO[0000] Installation failed: installer: failed to read and install update: update: can not install update: &{48 rootfs.ext4  62914560 33188 1000 1000   2018-08-25 04:03:32 +0000 UTC 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC 0 0 map[] map[] USTAR}: update: can not install: Active root partition matches neither RootfsPartA nor RootfsPartB.  module=rootfs
ERRO[0000] installer: failed to read and install update: update: can not install update: &{48 rootfs.ext4  62914560 33188 1000 1000   2018-08-25 04:03:32 +0000 UTC 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC 0 0 map[] map[] USTAR}: update: can not install: Active root partition matches neither RootfsPartA nor RootfsPartB.  module=main

Mirza Krak

Aug 25, 2018, 2:26:49 AM8/25/18
to Niall Parker, Mender List mender.io
On Sat, 25 Aug 2018, 08:16 Niall Parker, <ve7...@gmail.com> wrote:

On Friday, 24 August 2018 23:03:17 UTC-7, Mirza Krak wrote:


On Sat, 25 Aug 2018, 07:55 Niall Parker, <ve7...@gmail.com> wrote:
Hello,

Hello, 


I'm getting started with Mender on a buildroot-generated image on a BBB. Using the latest buildroot master, the mender client seems to build fine and run; I can set the active partition manually through mender_boot_part and U-Boot, but when I try to install a mender artifact it complains:

ERRO[0000] update image installation failed: Active root partition matches neither RootfsPartA nor RootfsPartB.  module=installer

This only occurs when the booted OS is the one built by buildroot ... if I boot into the Yocto image, it can install the mender artifacts just fine. The buildroot version can commit an update started from Yocto but can't start an update (using either a yocto.mender or a buildroot.mender). The default images are of course quite different between Yocto and Buildroot, but both display the root partition as /dev/root via df and root=PARTUUID=<blah> via /proc/cmdline.

Yocto shows the device /dev/mmcblk0p3 with mount while buildroot only shows /dev/root ... it seems Yocto is using the util-linux version of mount while buildroot uses busybox ... I guess that may well be the issue.

Maybe another rebuild with the util-linux mount will fix it; has anyone gotten the busybox mount working?

Could you paste the content of /etc/mender/mender.conf?

Using the default:

{
  "InventoryPollIntervalSeconds": 1800,
  "UpdatePollIntervalSeconds": 1800,
  "RetryPollIntervalSeconds": 300,
  "RootfsPartA": "mmcblk0p2",
  "RootfsPartB": "mmcblk0p3",
  "ServerCertificate": "/etc/mender/server.crt",
  "ServerURL": "https://docker.mender.io",
  "TenantToken": "dummy"
}

Yeah, it seems I might have pushed a wrong default; RootfsPartA/B should contain the full path of the device, that is:

  "RootfsPartA": "/dev/mmcblk0p2",
  "RootfsPartB": "/dev/mmcblk0p3",

Feel free to send a patch upstream for this if you like; otherwise I will do it when I get around to it.
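The failure can be sketched in Python (an illustrative model only — the real mender client is written in Go, and matching_rootfs is a made-up name): the device reported for the root mount is compared as a string against the configured values, so an entry without the /dev/ prefix never matches:

```python
# Illustrative model of the installer's active-partition check
# (the real mender client is written in Go; matching_rootfs is a
# made-up name for this sketch).
def matching_rootfs(active_device, part_a, part_b):
    """Return "A" or "B" depending on which configured partition
    matches the active root device; raise if neither matches."""
    if active_device == part_a:
        return "A"
    if active_device == part_b:
        return "B"
    raise ValueError(
        "Active root partition matches neither RootfsPartA nor RootfsPartB."
    )

# The shipped default omits the /dev/ prefix, so the mount candidate
# /dev/mmcblk0p2 matches neither configured value:
try:
    matching_rootfs("/dev/mmcblk0p2", "mmcblk0p2", "mmcblk0p3")
except ValueError as err:
    print(err)

# With full device paths the active partition is recognized:
print(matching_rootfs("/dev/mmcblk0p2", "/dev/mmcblk0p2", "/dev/mmcblk0p3"))
```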

/ Mirza 

Niall Parker

Aug 25, 2018, 3:03:42 AM8/25/18
to Mender List mender.io, ve7...@gmail.com, mirza...@northern.tech
Thanks, I probably should have caught that when I was comparing the Yocto and BR images ... it now seems to be OK with downloading and installing the update (it doesn't need the util-linux mount either).

The reboot following the update seems to be getting the wrong image though; something else to figure out (hitting the bootlimit incorrectly?).

Mirza Krak

Aug 25, 2018, 3:15:28 AM8/25/18
to Niall Parker, Mender List mender.io
On Sat, 25 Aug 2018, 09:03 Niall Parker, <ve7...@gmail.com> wrote:

<snip>


Yeah, it seems I might have pushed a wrong default; RootfsPartA/B should contain the full path of the device, that is:

  "RootfsPartA": "/dev/mmcblk0p2",
  "RootfsPartB": "/dev/mmcblk0p3",

Thanks, I probably should have caught that when I was comparing the Yocto and BR images ... it now seems to be OK with downloading and installing the update (it doesn't need the util-linux mount either).

The reboot following the update seems to be getting the wrong image though; something else to figure out (hitting the bootlimit incorrectly?).

Probably something with the U-Boot environment and the user-space tools configuration, i.e. fw_setenv/fw_getenv.

It is a good idea to run through this checklist, https://docs.mender.io/1.5/devices/integrating-with-u-boot/integration-checklist, to make sure everything is set up correctly; it might also help isolate the problem.

/ Mirza 

Niall Parker

Aug 25, 2018, 12:37:56 PM8/25/18
to Mender List mender.io, ve7...@gmail.com, mirza...@northern.tech


On Saturday, 25 August 2018 00:15:28 UTC-7, Mirza Krak wrote:
<snip>

It is a good idea to run through this checklist, https://docs.mender.io/1.5/devices/integrating-with-u-boot/integration-checklist, to make sure everything is set up correctly; it might also help isolate the problem.

/ Mirza 

Found the problem ... I had used the default U-Boot config from buildroot and just replaced the environment image with the one from Yocto. That mostly worked, but Mender needs the bootcount to be stored in the environment as well, while the default U-Boot config was using the RTC. There were some comments that putting bootcount in the environment is dangerous from a flash-wear perspective, but unless Mender gains the capability to adjust an RTC-stored bootcount we'll have to live with it (I don't think my particular application will have so many boots as to be a problem, and the MMC layer should handle most of the wear levelling anyway).

Thanks for your help !

     ... Niall

Mirza Krak

Aug 25, 2018, 12:52:54 PM8/25/18
to Niall Parker, Mender List mender.io
Glad it worked out.

BOOTCOUNT_ENV does add some additional wear, but it is not as bad
as one would think.

The "bootcount" value will only be written to the ENV if
"upgrade_available" is set, meaning it is only active while an
update is in progress; it is not written on each boot.

See [1] for reference.

I am curious whether your work is publicly available? I am sure it
would be helpful for others.

[1]. http://git.denx.de/?p=u-boot.git;a=blob;f=drivers/bootcount/bootcount_env.c;h=9084ca8a6e82665c1b944f95a19012a7aeeb0163;hb=HEAD
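The behaviour in [1] can be modelled roughly as follows (an illustrative Python sketch, not the actual C code; boot_decision is a made-up name): the counter is only persisted while upgrade_available is set, and exceeding bootlimit selects the alternate boot command:

```python
def boot_decision(env, bootlimit=1):
    """Illustrative model of U-Boot's bootcount handling with
    BOOTCOUNT_ENV. 'env' is a dict standing in for the U-Boot
    environment. The counter is only stored back to the environment
    while an update is in progress (upgrade_available=1), so normal
    boots cause no extra environment writes."""
    count = int(env.get("bootcount", "0")) + 1
    if env.get("upgrade_available") == "1":
        env["bootcount"] = str(count)  # the env write happens only here
    if count > bootlimit:
        return "altbootcmd"  # too many unconfirmed boots: roll back
    return "bootcmd"

# Normal boot: no update in progress, environment stays untouched.
env = {"upgrade_available": "0"}
assert boot_decision(env) == "bootcmd" and "bootcount" not in env

# During an update the count is persisted; a second unconfirmed boot
# exceeds bootlimit=1 and falls back to the other partition.
env = {"upgrade_available": "1", "bootcount": "0"}
assert boot_decision(env) == "bootcmd"
assert boot_decision(env) == "altbootcmd"
```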

--
Mirza Krak | Embedded Solutions Architect | https://mender.io

Northern.tech AS | @northerntechHQ

Niall Parker

Aug 26, 2018, 5:16:46 PM8/26/18
to Mender List mender.io, ve7...@gmail.com, mirza...@northern.tech

On Saturday, 25 August 2018 09:52:54 UTC-7, Mirza Krak wrote:
[snip]


BOOTCOUNT_ENV does add some additional wear, but it is not as bad
as one would think.

The "bootcount" value will only be written to the ENV if
"upgrade_available" is set, meaning it is only active while an
update is in progress; it is not written on each boot.

Yes, the bootcount variable behaves differently in the ENV as it is dependent on upgrade_available ... the original bootcount I had configured using RTC memory did indeed count each boot and thus would have contributed to flash wear, especially on raw flash where the same page would be re-written each boot. Not a problem here though.

See [1] for reference.

I am curious if your work is publicly available? I am sure this would
be helpful for others

Certainly, though so far I've just been doing some hacks on post-build.sh, post-image.sh and some other board-specific files. It seems like some system-level config (mender device_type and artifact_name) belongs in the system menuconfig (conditional on the mender package); I'm not sure how to implement that (I will ask on the BR list as well). I'll start by documenting what I have so far.
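For illustration, the values in question live in small key=value files on the device (the debug log earlier shows device_type=beaglebone being read from the device manifest); a build step could parse them with something like this hypothetical helper (the exact on-device path layout is assumed):

```python
import tempfile

def read_manifest_value(path, key):
    """Parse a 'key=value' manifest file, such as the device_type file
    the debug log shows being read (hypothetical helper; the exact
    on-device path layout is assumed)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(key + "="):
                return line.split("=", 1)[1]
    return None

# Demo with a throwaway file shaped like the manifest in the log:
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("device_type=beaglebone\n")
print(read_manifest_value(f.name, "device_type"))  # beaglebone
```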

For now I'm just using the mender-artifact binary, but eventually a host package will be the right way to go. I see you are the author of the BR package, maybe you are working on it ... ;-)

Mirza Krak

Aug 26, 2018, 5:29:04 PM8/26/18
to Mender List mender.io, Niall Parker
On Sun, Aug 26, 2018 at 11:16 PM, Niall Parker <ve7...@gmail.com> wrote:
>
> On Saturday, 25 August 2018 09:52:54 UTC-7, Mirza Krak wrote:
>>
>> [snip]
>>
>> BOOTCOUNT_ENV does but some additional wear but it is not as bad as
>> one would think.
>>
>> The "bootcount" value will be only written to ENV if
>> "upgrade_available" is set, meaning that it is only active if an
>> update is in progress and not that it will be written on each boot.
>
>
> Yes, the bootcount variable behaves differently in ENV as it is dependent on
> upgrade_available ... the original bootcount I had configured using RTC
> memory did indeed count each boot and thus would have contributed to flash
> wear, especially on a raw flash where the same page would be re-written each
> boot. Not a problem here though.
>>
>>
>> See [1] for reference.
>>
>> I am curious if your work is publicly available? I am sure this would
>> be helpful for others
>
>
> Certainly, though so far I've just been doing some hacks on post-build.sh,
> post-image.sh and some other board specific files. Seems like some system
> level config (mender device_type and artifact_name) belong in the system
> menuconfig (conditional on mender package), not sure how to implement (will
> ask on BR list as well). I'll start with documenting what I have so far.

Regarding the device_type and artifact_name, my initial patch-set [1]
included options for this, but they were rejected. The suggestion was
that rootfs_overlay should be used for that. If you are able to
convince them otherwise, please go for it :). But my opinion is that
they belong as options, to avoid too much duplication "out of tree".

>
> For now just using the binary mender-artifact but eventually a host package
> will be the right way to go. I see you are the author on the BR package,
> maybe you are working on it ... ;-)

Yeah, I have a patch-set for mender-artifact here [2]. I was just
preparing to send them upstream :).

[1]. http://lists.busybox.net/pipermail/buildroot/2018-August/228294.html
[2]. https://github.com/mirzak/buildroot/commits/mender-artifact

Niall Parker

Aug 26, 2018, 6:18:19 PM8/26/18
to Mender List mender.io, ve7...@gmail.com, mirza...@northern.tech

On Sunday, 26 August 2018 14:29:04 UTC-7, Mirza Krak wrote:

[snip] 
Regarding the device_type and artifact_name, my initial patch-set [1]
included options for this, but they were rejected. The suggestion was
that rootfs_overlay should be used for that. If you are able to
convince them otherwise, please go for it :). But my opinion is that
they belong as options, to avoid too much duplication "out of tree".

Without them as options, mender-artifact would need to pull them out of the rootfs image ... I suppose that is doable, but it seems awkward and fails when device_type is set on the persistent data partition. My goal was to use BR to generate an initial sdimage (defaulting to the same rootfs on both A/B) and also generate the rootfs.mender that could be used for updates along the way. Having the artifact_name specified both as a command-line parameter and as part of the rootfs is, I guess, a double check that the correct file is built?

Definitely prefer the config option approach myself, I'll try your original patches.

thanks again...
  
    ... Niall

Mirza Krak

Aug 27, 2018, 3:19:53 AM8/27/18
to Niall Parker, Mender List mender.io
On Mon, Aug 27, 2018 at 12:18 AM, Niall Parker <ve7...@gmail.com> wrote:
>
> On Sunday, 26 August 2018 14:29:04 UTC-7, Mirza Krak wrote:
>
> [snip]
>>
>> Regarding the device_type and artifact_name, my initial patch-set [1]
>> included options for this but they where rejected. Suggestion was that
>> rootfs_overlay was to be used for that. If you are able to convince
>> them otherwise please go for it :). But my opinion is that it belongs
>> as options to avoid to much duplication "out of tree".
>>
> Without them as options then the mender-artifact would need to pull them out
> of the rootfs image ... I suppose that is doable but seems awkward and fails
> when device_type is set on the persistent data partition. My goal was to use
> BR to generate an initial sdimage (default to same rootfs on both A/B) and
> also generate the rootfs.mender that could be used for updates along the
> way. Having the artifact_name specified as both command line parameter and
> part of the rootfs I guess is a double check the correct file is built ?

Indeed, it provides a way to sanity-check the build output.

I recently did a board integration where I create the ".mender" file
in a "post-image" script, reading the values from the rootfs. So it
can work, but it would be nice to have them as options, because I ran
into an issue where the artifacts were created with a faulty
configuration: I had initially done it in a "post-build" script,
which would use the "old" file-system image.

Note that someone on our IRC (#mender @ freenode) who goes by the
name ski7777 has hinted that he is working on something similar to
what you are describing. Either you are that person :), or it might
be a good idea to sync up if you are not.

Niall Parker

Aug 27, 2018, 5:04:23 PM8/27/18
to Mender List mender.io, ve7...@gmail.com, mirza...@northern.tech
On Monday, 27 August 2018 00:19:53 UTC-7, Mirza Krak wrote:
[snip]

I recently did a board integration where I create the ".mender" file
in a "post-image" script, reading the values from the rootfs. So it
can work, but it would be nice to have them as options, because I ran
into an issue where the artifacts were created with a faulty
configuration: I had initially done it in a "post-build" script,
which would use the "old" file-system image.

I tried the patches with the config options; I think that is definitely preferable to digging the values out of the rootfs ... though I see Thomas's point about not enforcing folder structure as well.

Mender does need some definite folder and partition structure, which I am presently handling in genimage.cfg/post-build.sh/post-image.sh. Even if the folder-specific tweaks don't belong in the mender package, it would still be nice to have the BR2_PACKAGE_MENDER_* values collected and stored (though I discovered I needed BR2_ROOTFS_POST_SCRIPT_ARGS to pass them in).


Note that someone on our IRC (#mender @ freenode) who goes by the
name ski7777 has hinted that he is working on something similar to
what you are describing. Either you are that person :), or it might
be a good idea to sync up if you are not.

Not me, guess it is time I figured out IRC then ;)

    ... Niall