[I'm moving this discussion to the mailing list, as it's getting
rather unwieldy and hard to comprehend in Facebook comments.]
Alexander proposed [1] the following clearly defined initial scope for
our R&D efforts:
> I propose two challenges.
>
> The fly challenge: Take one of these little lasers and shoot a fly out
> of midair. In Spain in the Summer flies can be very obnoxious. When
> you are in a patio or on a terrace and the temperature is high, they
> can be rather fast. They zigzag through the air in a rather
> unpredictable fashion. So the idea is to set up one or more cameras,
> either fixed or movable and put a little laser on gimbals and shoot
> those little bastards out of the air. If the laser is too heavy to be
> moved fast enough, use a computer controlled mirror to aim the laser
> beam at the fly.
>
> The mosquito challenge: Do the same for mosquitoes, only with night
> vision cameras in the dark. Use an infrared or some other non-visible
> light source if you want. Sometimes mosquitoes will sit on the wall or
> hide somewhere until you turn off the lights and go to bed. Then the
> mosquitoes come out and suck your blood. I don't know if they use body
> heat to find you, the smell of humans or what they do, but go get them
> flying in the air.
I agreed with Alexander, as the proposal mirrored my own thinking:
> These are exactly the highest-priority secondary uses I have in mind,
> yes. They exercise a subset of the platform, chiefly object
> recognition & tracking as well as the targeting system, which is a
> good modular approach to go about things.
We should explicitly recognize, though, that these use cases
constitute not merely a subset of the platform, but also an extension
of it: tracking objects as small as insects (micro-sized objects) will
likely require capabilities that the platform would not otherwise aim
for.
For example, it's not unlikely that we'll need better (and more
expensive) optics and mechanics than we would for macro-sized objects.
On the Facebook thread, Alexander did some back-of-the-envelope
calculations on just how precise the gimbal control will need to be,
and the upcoming camera experiments and selection should prove rather
educational as well.
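To give a feel for the scale of the problem, here is a minimal
back-of-the-envelope sketch of my own (not Alexander's actual figures,
which I don't have to hand); the fly size and engagement range are
purely illustrative assumptions:

```python
import math

# Illustrative assumptions (mine, not from the thread): a housefly is
# roughly 5 mm across, and we want to hit it at a range of 3 m.
fly_size_m = 0.005
range_m = 3.0

# Small-angle approximation: the angle the fly subtends at the laser
# sets an upper bound on the tolerable pointing error of the gimbal
# (or of the computer-controlled mirror).
angle_rad = fly_size_m / range_m
angle_deg = math.degrees(angle_rad)

print(f"Required pointing precision: {angle_rad * 1000:.2f} mrad "
      f"(~{angle_deg:.3f} degrees)")
```

That works out to well under a tenth of a degree, which already hints
at why the optics and mechanics may need to be better than a
macro-scale platform would otherwise demand.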
Still, as an initial scope and focus of the project, this is extremely
clear. If it can be made to work, it will be impressive as a demo.
(The follow-up question will inevitably be: when and where can I buy
one?) If it cannot be made to work (for example, because of optical
and/or economic limitations), we'll still have learnt many useful
things along the way. And nearly everything built and learnt can be
directly applied to the next stage on the roadmap, namely transferring
these capabilities onto mobile robotics platforms such as drones.
The next step, now, will be acquiring some initial hardware for
proof-of-concept development. I'll post about that separately,
shortly.
[1]
https://www.facebook.com/conreality/posts/1656559921296213
--
Arto Bendiken | @bendiken |
http://ar.to