figuring out the architecture on the farm


Vincent Rabaud

Mar 23, 2015, 3:30:07 AM
to ros-sig-...@googlegroups.com
OpenCV is supposed not to use IPP on 32-bit, but it does on the buildfarm, hence a failure:
http://54.183.26.131:8080/job/Ibin_uS32__opencv3__ubuntu_saucy_i386__binary/2/consoleFull
(it's the same on the old buildfarm)

This was actually a build meant for debugging, and I have to admit I do not know what is happening. Any help is welcome.

Now, I added the following logs to track the bug:
https://github.com/vrabaud/opencv3-release/blob/release/indigo/opencv3/cmake/OpenCVDetectCXXCompiler.cmake#L116   (where the X86 variable is defined)
https://github.com/vrabaud/opencv3-release/blob/release/indigo/opencv3/cmake/OpenCVFindIPP.cmake#L37

Overall, the machine thinks it is X86_64 and not X86 because CMAKE_SYSTEM_PROCESSOR is x86_64. Shouldn't that variable be i686 or i386? According to the official docs, at least:
http://www.cmake.org/cmake/help/v3.0/variable/CMAKE_SYSTEM_PROCESSOR.html
http://www.cmake.org/cmake/help/v3.0/variable/CMAKE_HOST_SYSTEM_PROCESSOR.html (this one is the host variable and is properly set to x86_64)

What is the proper way to check for the architecture on the farm?
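For context, the check that goes wrong is roughly of this shape (a paraphrased sketch from memory, not the exact upstream file):

    # Paraphrased sketch only, not the exact OpenCV code: a check along these
    # lines keys off CMAKE_SYSTEM_PROCESSOR, which reports the 64-bit kernel
    # inside a 32-bit container, so X86_64 gets set even for an i386 build.
    if(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
      set(X86_64 1)
    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*")
      set(X86 1)
    endif()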

Dirk Thomas

Mar 23, 2015, 2:14:17 PM
to ros-sig-...@googlegroups.com
Hi Vincent,

On the new build farm the reason might be the following:
the build is performed in a 32-bit Docker container running on a 64-bit Docker host.
Therefore the system processor as well as the kernel will always report 64-bit.

As long as you don't cross-compile (which is not the case on the build farm), you might want to try CMAKE_SIZEOF_VOID_P.
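Something along these lines should work for a native (non-cross) build; this is just an untested sketch:

    # Untested sketch: the pointer size reflects the target ABI the compiler
    # actually produces, even inside a 32-bit container on a 64-bit kernel,
    # unlike CMAKE_SYSTEM_PROCESSOR.
    if(CMAKE_SIZEOF_VOID_P EQUAL 8)
      message(STATUS "Building for a 64-bit target")
    else()
      message(STATUS "Building for a 32-bit target")
    endif()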

You mentioned it would also happen on the current farm.
I am not sure how pbuilder handles this information, but I couldn't find the corresponding job.
Can you please provide a link to that as well?

Cheers,
- Dirk



Vincent Rabaud

Mar 23, 2015, 2:28:54 PM
to ros-sig-...@googlegroups.com
Here is the link on the old buildfarm (fresh out of the oven):
http://jenkins.ros.org:8080/job/ros-indigo-opencv3_binarydeb_trusty_i386/18/consoleFull

It reports the same error.

Don't we want to set CMAKE_SYSTEM_PROCESSOR by default as a CMake argument? I believe OpenCV will not be the only library that checks its architecture.


Dirk Thomas

Mar 23, 2015, 3:20:16 PM
to ros-sig-...@googlegroups.com
I don't think that the build farm should set any custom CMake arguments,
simply because results would not be reproducible anymore; locally, in a similar environment, the build would fail the same way.

If you search for how to detect 32 vs. 64 bit with CMake, you will find that CMake does not do a good job of answering that question for you.
We should come up with a "recipe" for how to perform the task and either teach people to use it or provide the functionality through CMake infrastructure (something like https://github.com/ros/cmake_modules).
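As a rough idea of what such a recipe could look like (just a sketch; the function name and output variable are made up and not from any existing module):

    # Hypothetical helper, only to illustrate the idea; the name and the
    # output variable are invented for this sketch.
    function(determine_target_bitness OUTPUT_VAR)
      if(CMAKE_SIZEOF_VOID_P EQUAL 8)
        set(${OUTPUT_VAR} 64 PARENT_SCOPE)
      else()
        set(${OUTPUT_VAR} 32 PARENT_SCOPE)
      endif()
    endfunction()

    determine_target_bitness(TARGET_BITNESS)
    message(STATUS "Target bitness: ${TARGET_BITNESS}")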

Can you please clarify what has recently changed in the opencv3 repository since the job was building successfully in December?

Cheers,
- Dirk

Vincent Rabaud

Mar 24, 2015, 3:09:42 AM
to ros-sig-...@googlegroups.com

For opencv3, it simply does not build with 32-bit IPP anymore (that is how upstream behaves; 64-bit IPP is there by default).

I understand there is no standard way of getting the architecture, especially given the way we build things (not cross-compiling).

I'll disable IPP then, but I hope there won't be too many libraries requiring architecture checks.

Mani M

Jul 5, 2016, 11:31:04 AM
to ros-sig-buildfarm
Hello,

I've been experiencing a similar issue with this package: http://wiki.ros.org/parrot_arsdk.

The build script picks up the wrong architecture for 32-bit builds (x86_64 instead of i386): http://build.ros.org/view/Ibin_uT32/job/Ibin_uT32__parrot_arsdk__ubuntu_trusty_i386__binary/24/console

This issue does not happen when I build the package inside a 32-bit virtual machine, so I guess I am facing the same problem as Vincent. Are there any suggestions or solutions for this issue? I should mention that parrot_arsdk uses a custom build system, which is itself a poorly documented beast.

Thank you,
Mani

Dirk Thomas

Jul 5, 2016, 11:45:00 AM
to ros-sig-...@googlegroups.com
Hi Mani,

The ROS build farm uses Docker to isolate each environment. Your job uses a 32-bit Ubuntu Trusty container. You can try it locally with the Docker image named "osrf/ubuntu_i386:trusty". Maybe that will let you tweak the architecture detection logic.

Cheers,
- Dirk

Tully Foote

Jul 5, 2016, 4:06:48 PM
to ros-sig-...@googlegroups.com
Looking through the code paths, I found two places that appeared to detect the arch in the build system:

and 

However, from my testing in Docker, they appear not to return 64-bit in the 32-bit chroot: https://gist.github.com/tfoote/c27e5ce15c0c39cc13e735f6cb78eed2

You'll probably need to find out how its autodetection logic works and either spoof the input or improve the logic.

The other thing I've noticed is that the build appears to download a bunch of things using git during the build process. This is problematic because it makes the builds reliant on external services in order to build successfully, and in emulated ARM environments git hangs not infrequently: http://build.ros.org/view/Ibin_uT32/job/Ibin_arm_uThf__parrot_arsdk__ubuntu_trusty_armhf__binary/11/console
I'd recommend adding any resources you need to fetch to the release repository instead of downloading them during the build process.

Tully

Mani M

Jul 5, 2016, 7:12:20 PM
to ros-sig-buildfarm
> Looking through the code paths, I found two places that appeared to detect the arch in the build system:
>
> and
>
> However, from my testing in Docker, they appear not to return 64-bit in the 32-bit chroot: https://gist.github.com/tfoote/c27e5ce15c0c39cc13e735f6cb78eed2
>
> You'll probably need to find out how its autodetection logic works and either spoof the input or improve the logic.

 
Thank you very much, Tully. The information you've provided gives me a great starting point for debugging.

> The other thing I've noticed is that the build appears to download a bunch of things using git during the build process. This is problematic because it makes the builds reliant on external services in order to build successfully, and in emulated ARM environments git hangs not infrequently: http://build.ros.org/view/Ibin_uT32/job/Ibin_arm_uThf__parrot_arsdk__ubuntu_trusty_armhf__binary/11/console

Good point. That is the next thing on my to-do list after fixing the current issue (and thanks for helping me realize the issue with the ARM failures).

> I'd recommend adding any resources you need to fetch to the release repository instead of downloading them during the build process.

How can I add the external source packages to the release repository systematically? My current solution is to manually create a snapshot of all upstream packages in the source repo, modify the catkin build steps accordingly, and then use bloom to push it to the release repo. Is there any way to fetch the upstream packages as part of the build process of the source repo (similar to what is done now), yet have all the fetched packages pushed to the release repository?

Tully Foote

Jul 6, 2016, 12:31:25 AM
to ros-sig-...@googlegroups.com

The best solution would be to release the 3rdparty packages on their own and then add a dependency on them.

Pending that, you can add overlay files in a directory like this: https://github.com/ros-gbp/opencv3-release/tree/master/kinetic if you reference it in the track (https://github.com/ros-gbp/opencv3-release/blob/master/tracks.yaml#L58). There is documentation on how to do these things here: http://wiki.ros.org/bloom/Tutorials/ReleaseThirdParty
The standard use case is adding a package.xml, as for opencv3, but you can add more resources there, which will allow you to avoid downloading them at build time.


Mani M

Jul 6, 2016, 2:01:43 PM
to ros-sig-buildfarm
Thank you, Tully! I think I've found the root cause of the issue. From what I discovered, the meta-build system does not pass the correct configure/compilation flags to some of the underlying projects, which makes them use `uname -a` to determine the architecture. Inside Docker containers, that command apparently returns the host kernel's architecture. I am still working on a fix, though.

> The best solution would be to release the 3rdparty packages on their own and then add a dependency on them.
>
> Pending that, you can add overlay files in a directory like this: https://github.com/ros-gbp/opencv3-release/tree/master/kinetic if you reference it in the track (https://github.com/ros-gbp/opencv3-release/blob/master/tracks.yaml#L58). There is documentation on how to do these things here: http://wiki.ros.org/bloom/Tutorials/ReleaseThirdParty The standard use case is adding a package.xml, as for opencv3, but you can add more resources there, which will allow you to avoid downloading them at build time.

That is exactly what I am trying to achieve here. `parrot_arsdk` is a third-party package. The issue is that `parrot_arsdk` itself downloads the source code of all its components during the build process using the Android repo tool (a tool similar to `wstool`). The entry point for building the SDK is an XML document.

I will try to fix the build on i386 platforms first, then I will focus on eliminating build-time downloads.

Mani Monajjemi

Jul 8, 2016, 2:13:12 PM
to ros-sig-...@googlegroups.com
Thanks, Dirk, for your suggestion. I will try that for sure.

Do you have any idea why this problem does not show up when I run the build inside a 32-bit VirtualBox VM? Does the build farm set all the architecture-related flags for 32-bit builds?

Thanks,
Mani


--

Mani Monajjemi | Roboticist | mani.im

Tully Foote

Jul 8, 2016, 2:19:56 PM
to ros-sig-...@googlegroups.com
The buildfarm runs in a container, not a virtual machine, which means it shares the kernel with the host. So if you try to use uname to determine the build architecture, it will report the host's. Virtual machines run their own kernel inside the VM, so uname gives a different result.
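To make the difference concrete, here is a small untested illustration (assuming it runs from a CMakeLists.txt after project(), inside the 32-bit container on a 64-bit host):

    # Untested illustration: uname queries the shared host kernel, while the
    # compiler's pointer size reflects the 32-bit userland actually targeted.
    execute_process(COMMAND uname -m
                    OUTPUT_VARIABLE KERNEL_ARCH
                    OUTPUT_STRIP_TRAILING_WHITESPACE)
    message(STATUS "uname -m reports: ${KERNEL_ARCH}")      # x86_64 (host kernel)
    message(STATUS "sizeof(void*): ${CMAKE_SIZEOF_VOID_P}")  # 4 (32-bit target)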

Tully
