The X server in Angstrom uses the framebuffer for output, which as we know
isn't suitable for demonstrating the potential of the OMAP3. :)
So my question is: what is the state of an X server for OMAP3?
There isn't one. Once the kernel drivers for the display have been
sorted out, someone is likely to start working on an X driver.
By the potential of the OMAP3 I guess you mean the SGX hardware. Well,
there is a small catch to getting that acceleration in an X server.
Accelerated X server approaches used in current desktops are based on
standard OpenGL, not OpenGL ES (OGLES from now on), and all of them
require the GLX protocol for integration between the X server and
the GL library. But GLX doesn't support OGLES as far as I know.
There is an X server project called Xegl (http://www.freedesktop.org/wiki/Xegl
) that runs without using GLX, but it has not received much effort
since there is a lack of EGL libraries for desktop machines. OMAP does
have an EGL, so that problem is solved if you invest some time writing the
glue logic. Then you need a way to render with the Xrender extension:
glitz is a rendering library on top of OpenGL that can be the Xegl backend
(it can also be used as a cairo rendering backend for acceleration, but in
the OMAP case it would make more sense to use the OpenVG backend for cairo
than glitz). The good news is that the glitz designer kept OGLES in mind when
writing glitz, but some people tried to test it on OGLES 1.1 and it
didn't work well due to the lack of pbuffers, FBOs and buffer-mapping
functions in the 1.1 standard. So maybe attempting to port glitz
to the OGLES 2.0 standard could be more successful.
Then you will have to do some funny hacking with the modular X server
libraries and you will finally get an OGLES accelerated Xserver, and
then you will have to get a composite manager running on top of it.
There are other options for taking advantage of the acceleration, like
using clutter (which already works with OMAP3), but you will be
lacking the ability to run standard widget sets like Gtk. Still,
clutter provides a very good ground (IMHO) for creating UIs, since it
provides rich scenography functions to create animations and widgets
with good effects like 2D physics (something you won't find in the
standard toolkits).
Then you also have the partial acceleration mode: use standard X
server with GTK, and a gtk-clutter widget for rendering accelerated
scenes inside a special window.
Sorry if I just confused things a bit more; the landscape here is not that
simple yet. If I missed something, or someone has some other ideas (or
corrections: it took me several weeks to get the picture and I may be
mistaken about something), they are welcome.
Could you elaborate on the clutter approach more? If I only want to run one
special application (no need for a window manager here), and I don't
have the burden of running traditional X-window based applications,
will clutter be a good choice? How far away is clutter from the metal?
Clutter -> OpenGL driver -> PowerVR 530 -> Display? No heavy
X protocol in the chain?
On Sep 25, 2008, at 10:58 AM, guo tang wrote:
> Could you elaborate on the clutter approach more? If I only want to run one
> special application (no need for a window manager here), and I don't
> have the burden of running traditional X-window based applications,
> will clutter be a good choice? How far away is clutter from the metal?
> Clutter -> OpenGL driver -> PowerVR 530 -> Display? No heavy
> X protocol in the chain?
Yes, clutter is a good choice here.
The layout will be:
Clutter -> OpenGL ES -> Video hardware.
Clutter provides text rendering using fontconfig and pango, so you
have all the same text-rendering goodies as normal Linux applications
(i18n support, bi-directional support, etc), and you can stack up some
more libraries on top of it, like cairo and gstreamer, to render
directly from them to actors in 3D space.
At this point clutter only provides input via tslib for
touchscreens when using the EGL backend, but I'm working on adding
generic Linux input-event support so you can use mice, keyboards or keypads.
What clutter doesn't provide is an elaborate toolkit for rich-text
boxes or tree views. However, you can use the webkit actor for complex
content.
Sorry, a little off topic here. If I want to get a very accurate 256
levels of black and white, and a very accurate 256 levels of color, is
that possible with the omap3530? I know for sure the 16-bit 5-6-5 format
won't do that. Is that a hardware limitation or a software driver issue?
Is there something like a 16-bit palette color mode?
>> Accelerated Xserver approaches used in current desktops are based on
>> standard OpenGL not OpenGL ES (OGLES from now on), all of them
>> requiring the GLX protocol for integration between the X server and
>> the GL library. But GLX doesn't support OGLES as far as I know.
> I think you are forgetting about Gallium3D here :) A Gallium SGX
> driver + failover pipe would give us a complete OpenGL implementation
> where the GLES portions are accelerated.
> Having said that, I have no idea what ImgTec is working on, but I
> suspect they are writing kernel 2.4 drivers for TinyX or something
> equally useless in this day and age. Intel binned the ImgTec drivers
> and hired TG to do drivers for their SGX chip....
I didn't know about Gallium3D, thanks for the info. It seems from
googling that TG is working on an OGLES state tracker (which will be
useful for bringing in the standard desktop acceleration), so this is another
option. The issue here is that either you wait for TG to release the
state tracker or you write it yourself. It seems to me that you need
to be a seasoned OpenGL guru to do it.
> Sorry a little off topic here.
just start a new thread instead of replying to an existing one and choose a meaningful subject ;-)
> If I want to get very accurate 256
> level of black and white, and
> very accurate 256 level of color. Is that possible with omap3530? I
> know for sure 16bit 5-6-5 format
> won't do that. Is that a hardware limitation or a software driver issue?
> Are there something like 16bit palette
> color mode?
the omap3530 supports up to 24-bit RGBA; should be enough, no?
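To see why 5-6-5 falls short: a neutral gray needs equal intensity on all three channels, so the narrowest channel caps the number of distinct gray levels a pixel format can show. A quick sanity check of the arithmetic:

```python
# Bits per channel for the two pixel formats under discussion.
rgb565 = {"r": 5, "g": 6, "b": 5}   # 16-bit 5-6-5
rgb888 = {"r": 8, "g": 8, "b": 8}   # 24-bit

def gray_levels(fmt):
    # Gray needs R == G == B, so the channel with the fewest bits
    # limits how many distinct grays are representable.
    return 2 ** min(fmt.values())

print(gray_levels(rgb565))  # 32 -- far short of 256 gray levels
print(gray_levels(rgb888))  # 256
```

So the 16-bit format is the limitation, not the SGX or the driver: at 5-6-5 you get at most 32 distinct grays, while a 24-bit mode gives the full 256 per channel.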
In addition to the NDA issue, another problem is that the pvr2d API
takes physical addresses of the memory you want to blit. Normally you
ask pvr2d to allocate the memory for you (in which case you don't need
the physical address). But to get a composition manager to work with
GL, you need to allocate memory in one process and use it in another
(i.e. allocate in the X server and use it in the client). I have no idea how
you're supposed to allocate a hunk of memory, pin it and then get the
physical address of it from user space. The idea I've been kicking
around is to "reserve" say 8MB of ram for graphics. I think this can
be done by passing a parameter to the kernel which tells it the
physical address of memory it can use. You just pass in the physical
address of the RAM, minus the top 8MB. Once you've made sure the
kernel's never going to touch the top 8MB of physical RAM, you can use
it for the SGX and even mmap it through /dev/ram. Anyone know if this
is likely to work?
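The address arithmetic for that reserve-the-top-of-RAM idea looks roughly like this. The 128MB size and 0x80000000 base below are assumptions for illustration only; check your board's actual memory map:

```python
# Sketch of the "reserve the top 8MB" idea: under-report RAM to the
# kernel via the mem= boot parameter, then treat the hidden tail as a
# private graphics pool. Base and size are assumed, not board facts.
MB = 1 << 20
ram_base = 0x8000_0000   # assumed physical base of RAM
ram_size = 128 * MB      # assumed total RAM on the board
reserve  = 8 * MB        # carve-out for the SGX

kernel_mem = ram_size - reserve      # amount the kernel may use
pool_phys  = ram_base + kernel_mem   # physical start of the hidden pool

print(f"boot with: mem={kernel_mem // MB}M")
print(f"graphics pool: {pool_phys:#010x}..{ram_base + ram_size - 1:#010x}")
```

Whether the kernel really leaves that region alone, and whether it can then be mapped from user space as suggested, is exactly the open question above.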
BTW: Gallium3D's SGX drivers are for the SGX in Intel Atoms. As far as
I know, TI have no plans to license the Gallium3D driver from TG -
Which means we're stuck with the ImgTec binary drivers with almost no
X integration. :-(
If anyone's interested in running QWS on the beagle, let me know -
however I guess all the BeagleBoard people are focussed on X11 & GTK+?
Even so, taking a look at the QWS integration might shed some light on
things. BTW: I also wrote a backend for EGL on X11 for Qt if anyone
wants to give it a try. If you're feeling _very_ adventurous you can
even try out my OpenGL ES 2.0 paint engine (Not for the faint of heart
- it's a bit buggy at the moment but should be stable for Qt 4.5.0,
whenever that gets released.)
Well, I don't have the skills for driver work, but I'll be using Qt or Qt
Embedded with my Beagle. My goal is to port my Qt based "MythTV like" media
setup to the Beagle and get it all running smoothly. But as audio and video
currently are in a somewhat mixed state, I guess 4.5.0 will be out before it all works.
But I'll try some snapshot on the Beagle and see what I can get to work, so
you're not the only Qt person on this list/forum. :)
Many an ancient lord's last words had been:
"You can't kill me because I've got magic aaargh...."
-- Terry Pratchett, Interesting Times
One of my objectives is to use the frame buffer with some hw acceleration
and to possibly mix 2D surfaces with 3D.
My first thought was to not use the X11 system and to use DirectFB instead,
then to try creating a plugin for hardware acceleration (fill rectangle, blit
copy, blit color keying, blit blend).
Your point is very interesting. I am interested in trying QWS and your
opengl es paint engine.
Maybe it is the solution i am looking for.
I assume there are still no OpenGL ES drivers for the beagleboard?
And without the drivers the new paint engine is mostly useless? AFAIK the
drivers are not publicly available anywhere.
Shadwell hated all southerners and, by inference, was standing at the North Pole.
-- Terry Pratchett & Neil Gaiman, Good Omens