On Sat, Oct 31, 2015 at 12:57 PM, Alexander Biersack
<a.bie...@googlemail.com> wrote:
> But as a first step, this probably will get us started and up and running
> quickly. But I think later we will have to replace this module.
We may well replace it down the road, but my single-minded priority
right now is to get to a working POC with the minimum expenditure of
time and money, and then to iterate. If a wealthy donor showed up
wanting to fund our R&D, that strategy should be reconsidered; until
then, both time and money are, in fact, relatively scarce.
Further, let's try not to be one-dimensional about these kinds of
questions. I think the end goal here must be to support
a broad range of hardware, the broader the better, so that our users
can build their DIY configurations from whatever components they can
afford and source.
That doesn't mean we won't provide reference/example configurations;
we certainly will. But there is rarely a single "better" or "best";
instead, there is only a multi-dimensional trade-off matrix.
Firmware-assisted tracking occupies one local optimum in that space,
and just-do-it-all-in-software another. Neither is inherently better
in any objective sense, only in terms of fitness to purpose and
constraints.
On Sat, Oct 31, 2015 at 12:51 PM, Alexander Biersack
<a.bie...@googlemail.com> wrote:
> At MI we followed the strategy of not programming for specialty hardware and
> we relied on constant progress and performance increases in HW.
Fish should always be able to perceive the water they swim in,
however. Back then, you were the beneficiaries of a multidecade,
industry-wide megatrend: you were surfing the dominant wave. All of
that made perfect sense.
That particular wave is passing, however, and we're now riding other
rising waves--including a renaissance in heterogeneous computing and
specific-purpose hardware acceleration, as well as the proliferation
of open-source hardware combined with ever-cheaper crowdfunded
manufacturing. Ideally, we should be able to adapt to whatever the
tides happen to be. The only constant is change.
> So if one invests programming time into this year's hot HW, it will be a lost
> investment next year or the year after.
> On the other hand if you program under Linux using standard software and a
> general purpose CPU, you will be able to use everything for years.
The reason ROS became dominant in the robotics space is exactly why I
think this camera module is a great idea: if you architect your system
right, that churn just doesn't much matter.
Namely, there is, in fact, a huge annual turnover of hardware in
robotics (the wonders of the free market), and the last thing one
wants is to forgo that advantage; rather, one should figure out how
to adapt to the constant change instead of retreating from it. The
ROS project did, indeed, figure out one great way to accelerate that
adaptation cycle, which is why it became popular.
> And to tell you the truth, I am much more comfortable and productive in a
> standard unix environment than when cross compiling with some obscure tools
> for some obscure HW I have to learn.
There are no obscure cross-compilation tools or software to learn for
this camera module. You can control it directly from your Python
program running on the BBB [1], or even from your laptop (via the USB
connector).
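To make that concrete, here is a rough sketch of reading the Pixy's
tracked-object blocks from Python. It assumes the libpixyusb Python
bindings mirror the C API (pixy_init, pixy_get_blocks, and a
BlockArray buffer helper); the exact module and names may differ
depending on how the bindings were built:

    import pixy  # assumed module name for the libpixyusb bindings

    pixy.pixy_init()              # open the USB connection to the Pixy
    blocks = pixy.BlockArray(10)  # buffer for up to 10 tracked objects

    while True:
        count = pixy.pixy_get_blocks(10, blocks)
        for i in range(count):
            b = blocks[i]
            print('sig=%d x=%d y=%d w=%d h=%d'
                  % (b.signature, b.x, b.y, b.width, b.height))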
More to the point, I am currently writing a simple ROS proxy program
that will convert the camera-specific 50 Hz object-tracking packets to
generic ROS messages sent out on the ROS message bus. That entirely
decouples the consumer of those messages from the actual hardware.
It's a couple-hour task when done properly (C++ instead of Python),
not much of a cost.
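For illustration, here is the shape of that proxy in Python (the real
one is C++). The topic name and the use of sensor_msgs/RegionOfInterest
as the generic message are my placeholder choices, and the pixy
binding is the same assumption as in the sketch above:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import RegionOfInterest
    import pixy  # same assumed binding as above

    def main():
        rospy.init_node('pixy_proxy')
        pub = rospy.Publisher('tracked_objects', RegionOfInterest,
                              queue_size=10)
        pixy.pixy_init()
        blocks = pixy.BlockArray(10)
        rate = rospy.Rate(50)  # the Pixy reports at 50 Hz
        while not rospy.is_shutdown():
            count = pixy.pixy_get_blocks(10, blocks)
            for i in range(count):
                b = blocks[i]
                # Republish each tracked object as a generic ROS message.
                pub.publish(RegionOfInterest(
                    x_offset=b.x, y_offset=b.y,
                    width=b.width, height=b.height))
            rate.sleep()

    if __name__ == '__main__':
        main()

Any downstream node just subscribes to the topic and never touches the
camera-specific packet format.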
In the future when we plug in a more pedestrian USB camera module,
we'll have an active OpenCV-based process that will analyze the video
feed to do object tracking and send out those exact same ROS messages.
Again, the downstream message consumer shouldn't have to particularly
care which bit of hardware is plugged in so long as the received
telemetry is of isomorphic quality.
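A sketch of what that future process might look like, publishing the
same placeholder message on the same placeholder topic as above (here
tracking a fixed color band purely for illustration, not a real
signature-matching pipeline):

    #!/usr/bin/env python
    import cv2
    import numpy as np
    import rospy
    from sensor_msgs.msg import RegionOfInterest

    def main():
        rospy.init_node('opencv_tracker')
        pub = rospy.Publisher('tracked_objects', RegionOfInterest,
                              queue_size=10)
        cap = cv2.VideoCapture(0)  # first USB camera
        while not rospy.is_shutdown():
            ok, frame = cap.read()
            if not ok:
                continue
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Stand-in for a real signature: anything in a green hue band.
            mask = cv2.inRange(hsv, np.array([40, 80, 80]),
                               np.array([80, 255, 255]))
            # OpenCV 2.x return convention; 3.x adds a leading element.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                pub.publish(RegionOfInterest(x_offset=x, y_offset=y,
                                             width=w, height=h))
        cap.release()

    if __name__ == '__main__':
        main()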
> So my choice would be to pay a little more for a strong CPU and do the video
> processing using ROS and a unix environment and a good high level language.
There's no good reason you can't have both. Of course, we want to
support cheap, baseline camera modules--they do come as cheap as half
the price of the Pixycam, after all. But the Pixy doesn't give us any
disadvantage, since it can perfectly well also function as such a
"dumb" camera, with the built-in computer vision unused/disabled.
To the contrary, it gives us two immediate advantages: it potentially
accelerates our initial POC development, and it earns us concrete
experience with hardware-assisted computer vision, a whole growing
category of hardware we definitely want to be able to take advantage
of whenever it's available.
I, for one, am much looking forward to buying a Pixycam next week!
[1] http://www.cmucam.org/projects/cmucam5/wiki/Hooking_up_Pixy_to_a_Beaglebone_Black