Android HAL


Luca Belluccini

Dec 15, 2008, 5:50:41 PM
to android-porting
If I want to create a new device, conforming to Android HAL, I should
use one of the following approaches:
1. App - Runtime Service - lib
2. App - Runtime Service - Native Service - lib
3. App - Runtime Service - Native Daemon - lib
Apart from studying the existing devices, it is not easy to identify
all the files which need to be modified to add a new one.
I am also interested in writing a user-mode driver instead of using a
kernel-mode driver model.
For now, I want to make it work in the emulator.

My fake device will be similar to the "sensor" device.
I tried to create a library called "fake" that manages all the
necessary features.
I put the sources into /hardware/libhardware/fake and
/hardware/libhardware/include/fake, with their own Android.mk.
It relies on /dev/fake and /dev/input/fake (one for data, the other
for control).
First question: is there a standard way in the Android project for
adding a new device (kernel and/or user mode)? (Which folders should
the files be placed in, any tips for the makefiles, etc.? Which driver
manages the /dev/input/compass device?)
Is there a stub for implementing a new device and the related Manager/
Service layers, up through to the application level?
Thank you in advance,
Luca Belluccini

Mark.Li

Dec 19, 2008, 2:36:07 AM
to android-porting
up
I'm interested in this issue too, and I'm reading the code. As you
said, it is not easy to identify all the files which need to be
modified to add a new one.

Can anybody give some clues?
Thank you in advance,
Mark lee

Dave Sparks

Dec 19, 2008, 11:48:44 AM
to android-porting
The abstraction layers in Android are admittedly inconsistent. Audio
and camera both use a C++ pure virtual interface (e.g.
AudioHardwareInterface.h), while other places like the LEDs and
blitter functions have a C struct of function pointers.
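
To make the two styles concrete, here's a rough sketch. All of the
names below (FakeHardwareInterface, fake_device_t, and so on) are
invented for illustration; they are not actual Android headers.

// Style 1: C++ pure virtual interface (the audio/camera approach).
class FakeHardwareInterface {
public:
    virtual ~FakeHardwareInterface() {}
    virtual int init() = 0;               // bring up the hardware
    virtual int setEnabled(bool on) = 0;  // start/stop the device
    virtual int readValue(int* out) = 0;  // fetch one sample
};

// A vendor implements the interface and exposes a factory function.
extern "C" FakeHardwareInterface* createFakeHardware();

// Style 2: C struct of function pointers (the LED/blitter approach).
struct fake_device_t {
    int (*init)(struct fake_device_t* dev);
    int (*set_enabled)(struct fake_device_t* dev, int on);
    int (*read_value)(struct fake_device_t* dev, int* out);
    void* priv;  // driver-private state
};

In both cases the framework only ever sees the interface, and the
per-device library supplies the implementation.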

Because we run the actual device image in the emulator, any devices
that aren't backed by a kernel driver in the emulator need a fake
device (such as camera). For audio, we have a kernel driver for the
emulator, but other devices like the G1 have a user space layer on top
of the kernel driver. In this case, for the emulator, there is a shim
that translates AudioHardwareInterface calls to kernel calls.
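
For a rough idea of what such a shim looks like (this is a
hypothetical sketch, not the actual emulator audio code; the
/dev/fake_audio path and FakeAudioInterface class are invented), it
mostly just forwards the abstract calls to open()/write() on the
kernel device:

#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>

class FakeAudioInterface {
public:
    virtual ~FakeAudioInterface() {}
    virtual int open() = 0;
    virtual int write(const void* data, size_t bytes) = 0;
};

class FakeAudioShim : public FakeAudioInterface {
public:
    FakeAudioShim() : mFd(-1) {}
    ~FakeAudioShim() { if (mFd >= 0) ::close(mFd); }

    int open() {
        mFd = ::open("/dev/fake_audio", O_RDWR);
        return mFd >= 0 ? 0 : -1;
    }

    int write(const void* data, size_t bytes) {
        // The "translation" is often this thin: hand the buffer
        // straight to the kernel driver.
        return mFd >= 0 ? ::write(mFd, data, bytes) : -1;
    }

private:
    int mFd;  // file descriptor for the kernel driver
};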

How you surface the driver features to the application will depend on
what you are trying to do. Most hardware features are abstracted by a
native service, for example SurfaceFlinger for 2D graphics,
AudioFlinger for audio, and CameraService (because CameraFlinger just
sounded wrong) for the camera. This allows us to enforce security
using the binder interface and to abstract away a lot of differences
so that applications don't have to be written to work with specific
hardware.
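
On the security point, the usual pattern is that the native service
checks the caller's permission at the binder boundary before handing
anything out. Here's a simplified sketch from memory:
checkCallingPermission() is the binder helper the services use, but
the surrounding function and the header paths are approximate.

#include <binder/IServiceManager.h>  // checkCallingPermission()
#include <utils/String16.h>
#include <utils/Errors.h>

using namespace android;

status_t connectClient(/* ... */) {
    // Reject callers that don't hold the camera permission.
    if (!checkCallingPermission(String16("android.permission.CAMERA"))) {
        return PERMISSION_DENIED;
    }
    // ... set up the hardware interface for this client ...
    return NO_ERROR;
}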

I haven't looked at the compass code, but I can take you through the
camera as an example.

At the top of the stack is android.hardware.Camera, which is the Java
object that the application instantiates when it wants to take a
picture. This is a thin wrapper around android_hardware_Camera.cpp,
which is the JNI interface. That in turn is a wrapper around
libs/ui/Camera.cpp, which is the proxy for the remote object in the
camera service. And yes, libs/ui is a strange place for it, but it has
some circular dependencies on SurfaceFlinger, so that's where it ended
up.
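
To show how thin those wrapper layers are, here's a hypothetical JNI
method in the spirit of android_hardware_Camera.cpp. The class and
method names are simplified placeholders, not the real signatures.

#include <jni.h>

// Stand-in for the native proxy (libs/ui/Camera.cpp in the real tree).
namespace android {
class Camera {
public:
    static Camera* connect() { return new Camera(); }
    void startPreview() { /* binder call into the camera service */ }
};
}

// The JNI layer just forwards into the C++ proxy object.
static jlong native_connect(JNIEnv* /*env*/, jobject /*thiz*/) {
    return reinterpret_cast<jlong>(android::Camera::connect());
}

static void native_startPreview(JNIEnv* /*env*/, jobject /*thiz*/,
                                jlong handle) {
    reinterpret_cast<android::Camera*>(handle)->startPreview();
}

// These get registered against the Java class with
// JNIEnv::RegisterNatives, e.g. { "native_connect", "()J",
// (void*)native_connect }.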

Now it gets interesting, because there is a binder interface called
ICamera.h (pure virtual) which is implemented in ICamera.cpp, the
marshalling code for the IPC binder calls. Let's just take it on
faith that the calls from the client result in a marshalled
ICameraClient object appearing on the server side of the interface.
Upon establishing a connection - provided that the application has
permission to use the camera - the camera service instantiates a
CameraHardwareInterface-derived object, which is the HAL for the
actual camera hardware. The camera service takes care of preview
display and other low-level housekeeping functions that would be
difficult to do in a Java app.
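
The object the service instantiates ends up looking roughly like the
sketch below. This is only a guess at the shape: the class name and
method set here are invented, and the real CameraHardwareInterface is
larger and differs in the details.

// Simplified stand-in for a CameraHardwareInterface-style HAL class.
class StubCameraHardwareInterface {
public:
    virtual ~StubCameraHardwareInterface() {}
    virtual int startPreview() = 0;
    virtual void stopPreview() = 0;
    virtual int takePicture() = 0;
};

class StubCameraHardware : public StubCameraHardwareInterface {
public:
    int startPreview() { /* talk to the kernel or user-space driver */ return 0; }
    void stopPreview() {}
    int takePicture() { /* trigger capture, deliver data via callbacks */ return 0; }
};

// A per-device factory like this is what keeps the service
// hardware-agnostic: only the library providing it changes per device.
StubCameraHardwareInterface* openCameraHardware() {
    return new StubCameraHardware();
}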

The camera hardware driver can either be implemented as a kernel
driver with a thin user space shim driver, or it can be implemented as
a user space driver with a thin kernel driver (primarily to control
access).

This is one of the more complex device models. Other devices (like the
LEDs) have a much simpler model.

Mathias Agopian

Dec 19, 2008, 12:06:56 PM
to android...@googlegroups.com
On Mon, Dec 15, 2008 at 11:50 PM, Luca Belluccini
<lucabel...@gmail.com> wrote:
>
> If I want to create a new device, conforming to Android HAL, I should
> use one of the following approaches:
> 1. App - Runtime Service - lib
> 2. App - Runtime Service - Native Service - lib
> 3. App - Runtime Service - Native Daemon - lib


It depends; most of the time you'll need to do something like:

app -> java api (manager) -> java service -> HAL module -> kernel
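
As a concrete (and purely hypothetical) illustration of the
"service -> HAL module" hop: the service typically loads a per-device
shared library and resolves an entry point from it, along these lines
(the library name "libfakehal.so" and the "fake_hal_open" symbol are
made up):

#include <dlfcn.h>
#include <stdio.h>

struct fake_hal_ops {
    int (*set_enabled)(int on);
    int (*read_value)(int* out);
};

typedef int (*fake_hal_open_fn)(struct fake_hal_ops** ops);

int load_fake_hal(struct fake_hal_ops** ops) {
    void* handle = dlopen("libfakehal.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    fake_hal_open_fn open_fn =
        (fake_hal_open_fn)dlsym(handle, "fake_hal_open");
    if (open_fn == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return -1;
    }
    return open_fn(ops);  // the module fills in its function pointers
}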

Sometimes (as is the case for the sensors) there is an extra native
daemon, but in that particular case it's an implementation detail
(the sources for that daemon are proprietary).

Sometimes, the "java service", can be replaced directly by a "native
service", thanks to the binder interfaces (they're agnostic to the
languages involved, so a service can even be rewritten from java to
C++ or vice-versa, without breaking compatibility).

In some cases, it is also possible to skip the "java service"
entirely, if the HAL module can handle multiple clients and all the
permissions involved.

It is also possible to mix and match; this is the case for the
sensors: the service is used to establish a connection with the
hardware, but all the data movement is done by talking directly to
the HAL module from the app.
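
One way to picture that split (again just a sketch of the pattern,
not the actual sensors code; the device path and struct are invented):
the service does the privileged setup and hands the client a file
descriptor (binder can pass fds across processes), and the client then
reads samples from it directly, without a binder round-trip per sample.

#include <fcntl.h>
#include <unistd.h>

struct fake_event {
    int   type;
    float value;
};

// Service side: privileged setup, returns an fd for the client.
int service_open_sensor() {
    return open("/dev/fake", O_RDONLY);
}

// Client side: once it has the fd, data moves without going back
// through the service for every sample.
int client_read_event(int fd, struct fake_event* ev) {
    ssize_t n = read(fd, ev, sizeof(*ev));
    return n == (ssize_t)sizeof(*ev) ? 0 : -1;
}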

So, in short, there are no rules. It depends on what you're trying to
accomplish.


mathias