Gimbal control and POC hardware


Arto Bendiken

Oct 28, 2015, 10:27:34 AM
to Conreality mailing list
[I'm moving this discussion to the mailing list, as it's getting
rather unwieldy and hard to comprehend in Facebook comments.]

Alexander and I had discussed [1] hardware options for the gimbal
housing the camera and laser for target tracking & engagement in our
quest to rid Alexander's vacation home of houseflies [2]. I won't
quote the Facebook discussion at length here, but what it comes down
to is that an initial proof-of-concept hardware platform needs the
following core components:

• a gimbal controller board
• a 3-axis brushless camera gimbal, with motors
• a camera module
• a laser module

Initially, the targeting system [3] itself will run on a laptop used
as ground control station, interfacing to the gimbal controller over
USB and/or Bluetooth and receiving video frames from the camera over
(likely) the same bus. The camera shutter control is usually provided
for by gimbal controller boards and could thus be repurposed for laser
fire control.
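
To make that concrete, here's a rough sketch of what commanding the
gimbal from the GCS side might look like in Python, assuming pyserial;
the command ID, payload layout, and checksum below are placeholders I
haven't verified against the STorM32 serial protocol docs [5]:

    # Hypothetical sketch: command a STorM32 gimbal over USB serial.
    # CMD_SET_ANGLES and the frame layout are unverified placeholders;
    # consult the STorM32 serial protocol documentation before use.
    import struct
    import serial  # pyserial

    CMD_SET_ANGLES = 0x11  # placeholder command ID

    def crc16_x25(data):
        # Standard X.25 CRC-16; whether STorM32 uses it is an assumption.
        crc = 0xFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
        return crc ^ 0xFFFF

    def set_angles(port, pitch, roll, yaw):
        payload = struct.pack('<fff', pitch, roll, yaw)  # degrees
        frame = bytes([0xFA, len(payload), CMD_SET_ANGLES]) + payload
        frame += struct.pack('<H', crc16_x25(frame[1:]))  # CRC sans start byte
        port.write(frame)

    with serial.Serial('/dev/ttyUSB0', 115200, timeout=1) as gimbal:
        set_angles(gimbal, 10.0, 0.0, -5.0)  # pitch up, yaw left a bit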

The control software will be laid out according to the ROS [4]
architecture, meaning that it will eventually be deployable to any
sufficiently capable standalone Linux-based robotics platform.

From looking into hardware options based on Alexander's input, I've
selected the popular open-source STorM32-BGC board [5] as a promising
candidate for the gimbal controller board. These boards are among the
most capable brushless gimbal controller (BGC) options today, and yet
(thanks to the fully open hardware and software) among the cheapest:
they can be purchased for €30-80 from multiple manufacturers.

The camera and laser (initially, a red laser pointer would be fine)
are commodity components and will probably amount to no more than
€50-100 in expenses. The most expensive piece of kit is the camera
gimbal, which starts at €150--and it's not always clear whether that
includes the gimbal motors, which don't come cheap. So, we're looking
at a budget of €300-500 for an initial POC set-up, probably.

Further thoughts and input welcome...

[1] https://www.facebook.com/conreality/posts/1656559921296213
[2] https://groups.google.com/forum/#!topic/conreality/zfCe8upi_t4
[3] https://github.com/conreality/consensus/wiki/Targeting-System
[4] http://wiki.ros.org/
[5] http://www.olliw.eu/2013/storm32bgc/?en

--
Arto Bendiken | @bendiken | http://ar.to

Daniel Komorný

Oct 28, 2015, 10:30:57 AM
to Conreality mailing list
I can chip in with €100.


Alexander Biersack

Oct 28, 2015, 11:26:39 AM
to Daniel Komorný, Conreality mailing list

I questioned the need for a separate gimbal controller board. Any drone or rover will need a board, so why have an extra board just for the gimbal? Why not go straight for a board that can run the whole software needed? Sure, you can run the image processing on the base station, but this will require a lot of wireless bandwidth for the video and introduce delays, which might defeat the goal of tracking a fast-moving fly. It also means a reliable video and control connection is required at all times.

Arto Bendiken

Oct 28, 2015, 11:32:40 AM
to Daniel Komorný, Conreality mailing list
On Wed, Oct 28, 2015 at 3:30 PM, Daniel Komorný
<daniel....@gmail.com> wrote:
> I can chip in with €100.

Thanks, Dan! I'll go ahead and set up a Bitcoin wallet for the project
sometime soon; it will certainly be helpful if we can crowdfund a bit
of the expenses.

Arto Bendiken

Oct 28, 2015, 11:55:33 AM
to Alexander Biersack, Conreality mailing list
On Wed, Oct 28, 2015 at 4:26 PM, 'Alexander Biersack' via Consensus
Reality <conre...@googlegroups.com> wrote:
> I questioned the need for a separate gimbal controller board. Any drone or
> rover will need a board, so why have an extra board just for the gimbal? Why
> not go straight for a board that can run the whole software needed? Sure,
> you can run the image processing on the base station, but this will require
> a lot of wireless bandwidth for the video and introduce delays, which might
> defeat the goal of tracking a fast-moving fly. It also means a reliable
> video and control connection is required at all times.

In the abstract, I do like the idea of doing everything on a
BeagleBoard or a similar general-purpose board. And I do see such a
configuration as an eventual objective for the project, if only for
cost-cutting reasons.

However, perhaps you might clarify how you see all that actually working?

As far as I can tell, it would be a considerable further amount of
work, none of which would directly contribute to the actual solution
as such. That is, I'm unaware of high-level gimbal control software
for the BeagleBoard that would operate on a similar level of
abstraction as what the STorM32-BGC provides. As far as I can tell, we
would have to drive the gimbal motors directly ourselves with PWM
signals--a task far more complicated [1] than controlling them via the
STorM32.

If I've overlooked something, happy to correct course...

As for the GCS set-up, there is as yet no radio link involved in the
STorM32 configuration I've proposed. Further, it is important to keep
in mind that this is an iterative process: we generally cannot
optimize for both development and production goals simultaneously.
Right now, I'm optimizing for rapid iterative development, which in
practice entails that the control software be running directly on the
computer where I am also editing it.

[1] http://www.instructables.com/id/Brushless-Gimbal-with-Arduino/

Alexander Biersack

Oct 28, 2015, 12:03:44 PM
to Arto Bendiken, Conreality mailing list

I am all for rapid development and fast iteration. Right, I wasn't aware you wanted to connect it straight to the laptop. In the end, it should not be such a big deal to lift the gimbal control code from the STM and put it on a more powerful computer, except of course for the real-time issues. Yes, development on a full-fledged computer is easier, but the STM might be more of a pain than working on a BeagleBoard with Linux. Still, knowing the STM is not bad.

Arto Bendiken

Oct 30, 2015, 1:59:53 PM
to Conreality mailing list
I've now added a Donations page [1] to the wiki, listing our imminently
needed hardware purchases and also mentioning a BTC wallet address for
anyone who wants to help pitch in a little something towards those
purchases. I'll be looking to acquire the listed hardware in November
and December one way or another.

Talking about hardware, I believe I've found the
on-paper-quite-perfect camera module for us, namely the open-source
CMUcam5 Pixy [2], which has built-in object-tracking capability, i.e.,
it runs computer-vision algorithms directly onboard the camera module
and continuously outputs the current (X, Y) pixel coordinates for
tracked objects at 50 Hz. Sounds rather promising in principle,
looking forward to evaluating it.
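
For a taste of what consuming that stream could look like, here's a
minimal sketch, assuming the libpixyusb Python bindings along the lines
of the project's hello_pixy example (exact names may differ in the
shipped version):

    # Minimal sketch: poll the Pixy for tracked-object blocks over USB.
    # Assumes libpixyusb SWIG bindings as in the hello_pixy example; the
    # exact function names should be checked against the CMUcam5 sources.
    from pixy import pixy_init, pixy_get_blocks, BlockArray

    pixy_init()
    blocks = BlockArray(100)  # buffer for up to 100 detections per frame

    while True:
        count = pixy_get_blocks(100, blocks)  # 0 when no new frame data
        for i in range(count):
            b = blocks[i]
            # b.signature is the trained color signature; (b.x, b.y) is
            # the object's pixel centroid, refreshed at 50 Hz.
            print('sig=%d x=%d y=%d' % (b.signature, b.x, b.y))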

Also, turns out that for the laser turret POC, we can quite possibly
repurpose the COTS 2-axis/3-axis pan/tilt/roll mechanisms [3] designed
for first-person view (FPV) drones. They're simpler and cheaper than
the gimbals we were looking at earlier; we'll have to see how fine the
motor control is in practice, but they do make sense as an initial
step to iterate from, and we can also replace their servos as needed.

Adafruit has a nifty little demo [4] that combines the Pixy cam with a
2-axis pan/tilt mount to do real-time object tracking. That's
practically halfway there already.
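
To make the control loop concrete, here's a minimal
proportional-tracking sketch, assuming hobby servos driven from a
BeagleBone Black with the Adafruit_BBIO library; the pin wiring and the
get_target() helper (returning the tracked object's pixel coordinates)
are assumptions for illustration:

    # Minimal sketch: keep a pan/tilt mount centered on a tracked object.
    # Pin names and the get_target() helper are assumed for illustration.
    import time
    import Adafruit_BBIO.PWM as PWM

    PAN, TILT = "P9_14", "P9_16"   # BBB PWM-capable pins (assumed wiring)
    FRAME_W, FRAME_H = 320, 200    # Pixy tracking-frame dimensions
    KP = 0.05                      # proportional gain, degrees per pixel

    def angle_to_duty(angle):
        # Map 0..180 degrees to a 1..2 ms pulse at 50 Hz (5..10% duty).
        return 5.0 + (angle / 180.0) * 5.0

    pan_deg, tilt_deg = 90.0, 90.0
    PWM.start(PAN, angle_to_duty(pan_deg), 50)
    PWM.start(TILT, angle_to_duty(tilt_deg), 50)

    while True:
        x, y = get_target()  # hypothetical: pixel coords from the camera
        # Steer proportionally to the error from the frame center.
        pan_deg = min(180, max(0, pan_deg + KP * (FRAME_W / 2 - x)))
        tilt_deg = min(180, max(0, tilt_deg + KP * (FRAME_H / 2 - y)))
        PWM.set_duty_cycle(PAN, angle_to_duty(pan_deg))
        PWM.set_duty_cycle(TILT, angle_to_duty(tilt_deg))
        time.sleep(0.02)  # pace the loop at roughly the Pixy's 50 Hz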

I should also mention that Alexander and I have been eyeing the
just-released BeagleBoard X15 [5] board as the reference development
platform for the project going forward. For the near term, we'll stick
with the much cheaper BeagleBone Black [6] board which is already in
mass production, but the X15's capabilities are going to enable some
rather smart drones.

[1] https://github.com/conreality/consensus/wiki/Donations
[2] http://www.cmucam.org/projects/cmucam5/wiki
[3] https://www.adafruit.com/products/1967
[4] https://learn.adafruit.com/pixy-pet-robot-color-vision-follower-using-pixycam
[5] http://beagleboard.org/x15
[6] http://beagleboard.org/black

Alexander Biersack

Oct 30, 2015, 3:19:55 PM
to Arto Bendiken, Conreality mailing list
I believe the cheaper gimbals are not brushless. If you have servos, you can point them somewhere, but for jitter-free tracking they are no good. The servo-based ones have discrete steps and gears, while the brushless gimbals can move smoothly. So depending on what you want to do, they may or may not be good enough. You could turn the laser on and off depending on whether you are on target, while with the brushless gimbals you should be able to keep it pointed for longer durations. If the target is big enough, it might not make a difference.

a

Arto Bendiken

Oct 30, 2015, 9:28:28 PM
to Alexander Biersack, Conreality mailing list
True, I'm not expecting wonders from them. My thinking is that as a
first iteration, I'd be happy to just track and pop some air
balloons--which are relatively sluggish and large compared to Spanish
houseflies. Once we can reliably accomplish that, let's revisit gimbal
quality.

Alexander Biersack

Oct 31, 2015, 7:51:54 AM
to Arto Bendiken, Conreality mailing list
I am not sure this cam module is the best choice. Okay, it is a 204 MHz ARM Cortex-M4F with an M0 coprocessor, and the price is also reasonable.
At MI we followed the strategy of not programming for specialty hardware and we relied on constant progress and performance increases in HW.
So if one invests programming time into this year's hot HW, it will be a lost investment next year or the year after.
On the other hand, if you program under Linux using standard software and a general-purpose CPU, you will be able to use everything for years.
And to tell you the truth, I am much more comfortable and productive in a standard Unix environment than when cross-compiling with some obscure tools for some obscure HW I have to learn.
So my choice would be to pay a little more for a strong CPU and do the video processing using ROS and a Unix environment and a good high-level language.

alex

Alexander Biersack

Oct 31, 2015, 7:57:30 AM
to Arto Bendiken, Conreality mailing list
But as a first step, this will probably get us started and up and running quickly. I think we will have to replace this module later, though.

Arto Bendiken

Oct 31, 2015, 9:38:27 AM
to Alexander Biersack, Conreality mailing list
On Sat, Oct 31, 2015 at 12:57 PM, Alexander Biersack
<a.bie...@googlemail.com> wrote:
> But as a first step, this will probably get us started and up and running
> quickly. I think we will have to replace this module later, though.

Possibly we might replace it going forward, but my single-minded
priority right now is to get to a working POC with the minimum
expenditure of time and money, and then to iterate. If a wealthy donor
showed up wanting to fund our R&D, that strategy should be
reconsidered; but until then, both time and money are, in fact,
relatively scarce.

Further, I would say let's try not to be one-dimensional on these
kinds of questions. I think the end goal here must be to support
a broad range of hardware, the broader the better, so that our users
can build their DIY configurations from whatever components they can
afford and source.

That doesn't mean we won't provide reference/example configurations;
we certainly will. But there is rarely a single "better" or "best";
instead, there is only a multi-dimensional trade-off matrix.
Firmware-assisted tracking is going to be in one local optimum in that
space, and just-do-it-all-in-software in another. One isn't inherently
better than the other in any objective sense, only in terms of fitness
to purpose and constraints.

On Sat, Oct 31, 2015 at 12:51 PM, Alexander Biersack
<a.bie...@googlemail.com> wrote:
> At MI we followed the strategy of not programming for specialty hardware and
> we relied on constant progress and performance increases in HW.

Fish should always be able to perceive the water they swim in,
however. Back then, you were the beneficiaries of a multidecade,
industry-wide megatrend: you were surfing the dominant wave. All of
that made perfect sense.

That particular wave is passing, however, and we're now riding other
rising waves--including a renaissance in heterogeneous computing and
specific-purpose hardware acceleration, as well as the proliferation
of open-source hardware combined with ever-cheaper crowdfunded
manufacturing. Ideally, we should be able to adapt to whatever the
tides happen to be. The only constant is change.

> So if one invests programming time into this year's hot HW, it will be a lost
> investment next year or the year after.
> On the other hand, if you program under Linux using standard software and a
> general-purpose CPU, you will be able to use everything for years.

The reason ROS became dominant in the robotics space is exactly why I
think this camera module is a great idea: if you architect your system
right, all of that just doesn't much matter.

Namely, there is, in fact, a huge annual turnover of hardware in
robotics (the wonders of the free market), and the last thing one
wants to do is not take advantage of that; rather, one should want to
figure out how to adapt to the constant change instead of retreating
from it. The ROS project did, indeed, figure out one great way to go
about accelerating that adaptation cycle, which is why they became
popular.

> And to tell you the truth, I am much more comfortable and productive in a
> standard Unix environment than when cross-compiling with some obscure tools
> for some obscure HW I have to learn.

There are no obscure cross-compilation tools or software to learn for
this camera module. You can control it directly from your Python
program running on the BBB [1], or even from your laptop (via the USB
connector).

More to the point, I am currently writing a simple ROS proxy program
that will convert the camera-specific 50 Hz object-tracking packets to
generic ROS messages sent out on the ROS message bus. That entirely
decouples the consumer of those messages from the actual hardware.
It's a couple-hour task when done properly (C++ instead of Python),
not much of a cost.
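
Schematically, the proxy looks something like the following (a Python
sketch for brevity--the real thing will be C++; the /target topic,
message type, and read_pixy_block() helper are placeholders):

    # Rough sketch of the Pixy-to-ROS proxy: republish camera-specific
    # tracking packets as generic ROS messages on a /target topic.
    # read_pixy_block() is a placeholder for the camera-specific read.
    import rospy
    from geometry_msgs.msg import PointStamped

    def main():
        rospy.init_node('pixy_proxy')
        pub = rospy.Publisher('/target', PointStamped, queue_size=10)
        rate = rospy.Rate(50)  # the Pixy reports at 50 Hz
        while not rospy.is_shutdown():
            block = read_pixy_block()  # hypothetical: latest tracked object
            if block is not None:
                msg = PointStamped()
                msg.header.stamp = rospy.Time.now()
                msg.header.frame_id = 'camera'
                msg.point.x, msg.point.y = block.x, block.y  # pixel coords
                pub.publish(msg)
            rate.sleep()

    if __name__ == '__main__':
        main()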

In the future when we plug in a more pedestrian USB camera module,
we'll have an active OpenCV-based process that will analyze the video
feed to do object tracking and send out those exact same ROS messages.
Again, the downstream message consumer shouldn't have to particularly
care which bit of hardware is plugged in so long as the received
telemetry is of isomorphic quality.
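
A hedged sketch of that drop-in replacement, with naive OpenCV
color-blob detection standing in for whatever algorithm we actually end
up using, publishing to the same hypothetical /target topic:

    # Sketch of the drop-in replacement: a plain USB camera plus OpenCV,
    # publishing the same /target messages as the Pixy proxy would.
    import cv2
    import numpy as np
    import rospy
    from geometry_msgs.msg import PointStamped

    rospy.init_node('opencv_tracker')
    pub = rospy.Publisher('/target', PointStamped, queue_size=10)
    cap = cv2.VideoCapture(0)  # first USB camera

    lower = np.array([0, 120, 70])    # toy HSV range for a red blob
    upper = np.array([10, 255, 255])  # (e.g. a red balloon)

    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if not ok:
            continue
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)
        m = cv2.moments(mask)
        if m['m00'] > 0:  # blob found: publish its pixel centroid
            msg = PointStamped()
            msg.header.stamp = rospy.Time.now()
            msg.header.frame_id = 'camera'
            msg.point.x = m['m10'] / m['m00']
            msg.point.y = m['m01'] / m['m00']
            pub.publish(msg)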

> So my choice would be to pay a little more for a strong CPU and do the video
> processing using ROS and a Unix environment and a good high-level language.

There's no good reason you can't have both. Of course, we want to
support cheap, baseline camera modules--they do come as cheap as half
the price of the Pixycam, after all. But the Pixy doesn't give us any
disadvantage, since it can perfectly well also function as such a
"dumb" camera, with the built-in computer vision unused/disabled.

To the contrary, it gives us two immediate concrete advantages, namely
that of potentially accelerating our initial POC development, and
secondarily that of gaining concrete experience with hardware-assisted
computer vision, a whole growing category of hardware we definitely
wish to be able to take advantage of whenever available.

I, for one, am much looking forward to buying a Pixycam next week!

[1] http://www.cmucam.org/projects/cmucam5/wiki/Hooking_up_Pixy_to_a_Beaglebone_Black

Arto Bendiken

Oct 31, 2015, 1:46:55 PM
to Conreality mailing list
On Sat, Oct 31, 2015 at 2:37 PM, Arto Bendiken <ar...@bendiken.net> wrote:
> More to the point, I am currently writing a simple ROS proxy program
> that will convert the camera-specific 50 Hz object-tracking packets to
> generic ROS messages sent out on the ROS message bus. That entirely
> decouples the consumer of those messages from the actual hardware.
> It's a couple-hour task when done properly (C++ instead of Python),
> not much of a cost.

As it happens, the world has already provided us the Pixycam-ROS bridge:

https://github.com/jeskesen/pixy_ros

Moving on...

Daniel Komorný

Nov 6, 2015, 5:41:43 AM
to Arto Bendiken, Conreality mailing list
I'm too ignorant to say how we might benefit from this specifically regarding this project; nevertheless, I have a gut feeling it could be useful somehow:

https://thestack.com/world/2015/11/05/the-self-replicating-smartphone-app-thats-ready-for-the-apocalypse-and-the-censors/

Alex Biersack

Dec 14, 2015, 10:10:11 AM
to Consensus Reality
There is another interesting chip/board development which we might use for OpenCV in the future. It is the Parallella from Adapteva, with the Epiphany III 16-core coprocessor, or its successor, the Epiphany IV. The Epiphany III 16-core board is a little more pricey at $99 and $149 than a Raspberry Pi 2 or BBB, but it also cannot be compared one-to-one, as it has the Epiphany 16-core coprocessor.
Next to the ARM A9 dual core, it has Adapteva's 16-core low-power Epiphany III coprocessor. Future releases might have 64 or more cores.

Bare-bones $99 board with just Ethernet; no GPIO, no USB, no HDMI:
http://www.amazon.com/Adapteva-Parallella-16-Micro-Server/dp/B0091UDG88/ref=sr_1_2?ie=UTF8&qid=1450102449&sr=8-2&keywords=Adapteva

$149 with USB, HDMI, 24 GPIOs:
http://www.amazon.com/Adapteva-Parallella-16-Desktop-Computer/dp/B0091UD6TM/ref=lp_9360745011_1_1?srs=9360745011&ie=UTF8&qid=1450104214&sr=8-1

It has about a tenth of the processing power of a low-power Intel Core M and about half the power consumption (2 W for the Epiphany III), but the price is also lower: Intel charges $250 to $350 for an ultra-low-power CPU, and it will use (without the support board) 4.5 W. The Epiphany only offers 32-bit, while the Intel can do 64-bit. http://www.it.uu.se/edu/course/homepage/projektTDB/ht14/project08/Project_08_Report.pdf

Of course, programming 16 processors is more work, but for CV it should be possible to get good multiprocessor speedups by distributing/parallelizing the problem.
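
To illustrate the kind of decomposition I mean, here is a generic
sketch using Python's multiprocessing as a stand-in for the Epiphany
SDK (which none of us has touched yet): split each frame into strips
and farm them out to the cores.

    # Generic illustration of tile-parallel image processing: split a
    # frame into horizontal strips and process each on its own core.
    # (On the Epiphany these would be eSDK kernels, not processes.)
    import numpy as np
    from multiprocessing import Pool

    def box_blur_strip(strip):
        # Stand-in per-tile kernel: 3x3 box blur via axis shifts
        # (strip-boundary halos are ignored for simplicity).
        out = strip.astype(np.float32)
        out = (np.roll(out, 1, 0) + out + np.roll(out, -1, 0)) / 3.0
        out = (np.roll(out, 1, 1) + out + np.roll(out, -1, 1)) / 3.0
        return out.astype(strip.dtype)

    if __name__ == '__main__':
        frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
        strips = np.array_split(frame, 16)  # one strip per core
        with Pool(16) as pool:
            result = np.vstack(pool.map(box_blur_strip, strips))
        print(result.shape)  # (480, 640)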

This is not a replacement for the Raspberry Pi 2 or the BeagleBones, which we should keep working with. It runs Linux, and it may be cheaper when considered as a replacement for a Pi 2 plus the CV processor that is needed for processing the camera inputs. Right now, I would definitely prefer using the Raspberry Pi 2 and buying a camera with a built-in CPU like the Pixycam, which does the image processing.

We should let the Adapteva things mature but keep observing the developments. Maybe someone will port openCV to it for us?

Alex

Alex Biersack

Dec 14, 2015, 10:31:42 AM
to Consensus Reality
Here are some more resources:

Architecture Reference Manual:

Here is some stuff relevant to OpenCV:
Face recognition gets good speedups for 16 cores using OpenCV; the speed compares well to an x86-based GHz-class CPU, which probably uses a lot more watts, but they don't give any specifics of what kind of x86 it is:
https://parallella.org/forums/viewtopic.php?f=13&t=2175

$4,000 bounty for open camera firmware/drivers for the Raspberry Pi camera module (first $3,000 already taken, only $1,000 for kernel drivers left):
https://www.parallella.org/2015/06/01/the-open-camera-project-1000-bounty-for-open-firmwaredrivers-for-raspberry-pi-camera-module/

Alex

Alex Biersack

Dec 14, 2015, 10:48:10 AM
to Consensus Reality
The 64-core Epiphany IV is unlikely to happen. All units produced were sold out, and producing a new batch would cost $3M.

Alex

Arto Bendiken

Dec 14, 2015, 11:56:48 AM
to Alex Biersack, Conreality mailing list
On Mon, Dec 14, 2015 at 4:10 PM, Alex Biersack
<a.bie...@googlemail.com> wrote:
> There is another interesting chip/board development which we might use for
> OpenCV in the future. It is the Parallella from Adapteva, with the Epiphany
> III 16-core coprocessor, or its successor, the Epiphany IV. The Epiphany III
> 16-core board is a little more pricey at $99 and $149 than a Raspberry Pi 2
> or BBB, but it also cannot be compared one-to-one, as it has the Epiphany
> 16-core coprocessor.
> Next to the ARM A9 dual core, it has Adapteva's 16-core low-power Epiphany
> III coprocessor. Future releases might have 64 or more cores.

Right, this could be interesting. I had been intending to get a
Parallella board a couple of years ago, but it didn't work out.
Certainly still interested in checking it out as a platform. If OpenCV
is sufficiently parallelizable to be useful, all the more so.

PS. Alex, for link dumps, note that creating wiki pages would be a
rather better way to capture that knowledge.

Arto Bendiken

Dec 14, 2015, 11:58:01 AM
to Daniel Komorný, Conreality mailing list
On Fri, Nov 6, 2015 at 11:41 AM, Daniel Komorný
<daniel....@gmail.com> wrote:
> I'm too ignorant to say how we might benefit from this specifically
> regarding this project; nevertheless, I have a gut feeling it could be
> useful somehow:
>
> https://thestack.com/world/2015/11/05/the-self-replicating-smartphone-app-thats-ready-for-the-apocalypse-and-the-censors/

That is certainly one ambitious project. Probably a taste of futures to come.

Alexander Biersack

Dec 14, 2015, 12:20:35 PM
to Arto Bendiken, Conreality mailing list
Yes, I figured after I sent it that this was probably not the best place.

Al