
Framebuffer directly on CPU.


Skybuck Flying

Oct 1, 2022, 4:42:05 AM
Here is an idea for you:

Try implementing a framebuffer directly on the CPU.

This gives quick and easy access to pixel data.

It reduces complexity compared to DirectX/OpenGL/GDI.

Perhaps work together with Microsoft to implement yet another new graphics API that can "multi-task" multiple applications: each application gets its own piece of memory, which can then be swapped into this CPU framebuffer.

If that is too complex, then at least one application should be able to have access to this CPU framebuffer.

At the very least this would allow custom graphics, no longer bound by the limitations of DirectX/OpenGL/shaders/RTX/etc.

Bye,
Skybuck.

Paul

Oct 2, 2022, 3:45:03 AM
This idea is only "feasible" when the CPU runs
at 5 GHz or more. It takes a fair amount of CPU power
to run Mesa or the like and fake all the various
rendering or decoding features in software.

You can watch a movie format on a CPU, but it might
not be the best usage of electricity. Imagine for
example, the amount of CPU cycles required to run
a scaler for a 4K monitor output. On the P4, that
took 40% of one core, as an example. And the screen
in that example, was not exactly high resolution.
When the scaling function was added to the video card,
that became a "free lunch". Your P4 could be used
for other things then.

If you implemented this, the RAM for the frame buffer
would be system memory. When the frame buffer is read out
to drive the screen, that readout would be done by a DMA engine.

Paul

Skybuck Flying

Oct 4, 2022, 10:02:37 PM
On Sunday, October 2, 2022 at 9:45:03 AM UTC+2, Paul wrote:
> On 10/1/2022 4:42 AM, Skybuck Flying wrote:
> > Here is an idea for you:
> >
> > Try and implement a framebuffer directly on CPU.
> >
> > For quick and easy access to pixel data.
> >
> > Reduces complexity like directx/opengl/gdi.
> >
> > Perhaps work together with Microsoft to implement yet again a
> > new graphics API to be able to "multi-task" multi applications and
> > they all can have their own piece of memory which can then be
> > swapped to this CPU framebuffer.
> >
> > If too complex, then at least one application should be able to have access to this CPU framebuffer.
> >
> > At the very least this would allow custom graphics and no longer be bound by the limitations of directx/opengl/shaders/rtx/etc.
> >
> > Bye,
> > Skybuck.
> This idea is only "feasible" when the CPU runs
> at 5GHz. It takes a fair amount of CPU power
> to run MESA or the like, and fake all the various
> rendering or decoding features.

Multi-core and special pixel engines come to mind, plus ray tracing and distance-field rendering; forget OpenGL, triangles and all that.

>
> You can watch a movie format on a CPU, but it might
> not be the best usage of electricity. Imagine for
> example, the amount of CPU cycles required to run
> a scaler for a 4K monitor output. On the P4, that
> took 40% of one core, as an example. And the screen
> in that example, was not exactly high resolution.
> When the scaling function was added to the video card,
> that became a "free lunch". Your P4 could be used
> for other things then.

Somehow this laptop with a broken GPU can do it; I'm not sure if it's the CPU or the GPU doing it.

Next time I run a video I will take a look at CPU-Z and GPU-Z and try to figure it out! ;)

> If you implemented this, the RAM for the frame buffer
> would be system memory. When the frame buffer was reading-out
> to drive the screen, that would be a DMA engine.

It could also be the 3D V-Cache of AMD chips, or something similar on later Intel chips ("veberos" or something).

Or some part of the cache... CPUs have large caches nowadays.

Bye for now,
Skybuck.