Is it possible/suitable to design a GPU based on RISC-V


Manili

Jun 22, 2018, 5:24:16 PM
to RISC-V HW Dev

Hello,


I’m new to the RISC-V ecosystem…

1. I’d like to know if it is possible/suitable to design a GPU based on RISC-V (i.e. a kind of extension of RISC-V)? 

2. If it’s a good idea to implement, are there any projects currently working on it?

      If the answer is yes, would you mind mentioning the project’s name and website?

      If the answer is no, are there any special reasons why nobody has implemented it yet?


Best regards,

Manili

Tommy Murphy

Jun 22, 2018, 5:27:24 PM
to RISC-V HW Dev
Probably worth searching for info about what NVIDIA are doing with RISC-V.

Manili

Jun 22, 2018, 5:48:05 PM
to RISC-V HW Dev
Thanks a lot Tommy.

But AFAIU (and from what I have read before), FALCON is a controller (a general-purpose embedded processor) that is part of a GPU. What I'm talking about is the GPU itself (e.g. the CUDA cores and so on). Did I miss something or misunderstand?

Regards,
Manili


On Saturday, June 23, 2018 at 1:57:24 AM UTC+4:30, Tommy Murphy wrote:
Probably worth searching for info about what NVIDIA are doing with RISC-V.

Tommy Murphy

Jun 22, 2018, 6:00:35 PM
to RISC-V HW Dev
In that case you probably want to look for information about the RISC-V vector extensions work in progress (you'll have to search and browse the mailing lists) and maybe the Hwacha project (http://hwacha.org/)?

David Lanzendörfer

Jun 23, 2018, 12:17:59 AM
to hw-...@groups.riscv.org, Manili
Hi Manili
We're right now working on a free process in our #LibreSilicon project[1], and
my company[2] (where I'm the CEO) has a SoC on the roadmap for the next two
years.
We will require a GPU for said "Sau Mau Ping 1" (the codename of the SoC).
I've got a pretty powerful FPGA (an Artix-7) now and was already planning to do
some prototyping on the side anyway.
If you like, we can cooperate.

Cheers
David

[1] http://libresilicon.com
[2] http://lanceville.cn/en/home

Luke Kenneth Casson Leighton

Jun 23, 2018, 3:58:06 AM
to Manili, RISC-V HW Dev
On Fri, Jun 22, 2018 at 10:24 PM, Manili <manili....@gmail.com> wrote:

> Hello,
>
>
> I’m new to the RISC-V ecosystem…

hi manili, welcome.

> 1. I’d like to know if it is possible/suitable to design a GPU based on
> RISC-V (i.e. a kind of extension of RISC-V)?

manili i assume (below) you mean a royalty-free patent-unencumbered
public and published extension to RISC-V, as opposed to proprietary,
secret efforts. the question that you ask is i feel very important,
because if RISC-V were to be the basis of a commercial and libre GPU
it would not only greatly increase the perceived value of RISC-V but
also solve a long-standing, very annoying problem:
proprietary GPUs in the embedded space are *integrated* and you get
absolutely zero choice in the matter... aside from wasting vast
amounts of time on reverse-engineering [0]

at FOSDEM 2018 when Yunsup and the team announced the U540 there was
some discussion about this: it was one of the questions asked. one of
the possibilities raised there was that maddog was heading something:
i've looked for that effort, and have not been able to find it [jon is
getting quite old, now, bless him. he had to have an operation last
year. he's recovered well].

also at the Barcelona Conference i mentioned in the
very-very-very-rapid talk on the Libre RISC-V chip that i have been
tasked with, that if there is absolutely no other option,
it will use the Vivante GC800 (and, obviously, use etnaviv). what *that*
means is that there's a definite budget of USD $250,000 available
which the (anonymous) sponsor is definitely willing to spend... so if
anyone can come up with an alternative that is entirely libre and
open, i can put that initiative to the sponsor for evaluation.

basically i've been looking at this for several months, so have been
talking to various people (jeff bush from nyuzi [1] and chiselgpu [2],
frank from gplgpu [3], VRG for MIAOW [4]) to get a feel for what would
be involved.

* miaow is just an OpenCL engine that is compatible with a subset of
AMD/ATI's OpenCL assembly code. it is NOT a GPU. they have
preliminary plans to *make* one... however the development process is
not open. we'll hear about it if and when it succeeds, probably as
part of a published research paper.

* nyuzi is a *modern* "software shader / renderer" and is a
replication of the intel larrabee architecture. it explored the
concept of doing recursive software-driven rasterisation (as did
larrabee) where hardware rasterisation uses brute force and often
wastes time and power. jeff went to a lot of trouble to find out
*why* intel's researchers were um "not permitted" to actually put
performance numbers into their published papers. he found out why :)
one of the main facts that jeff's research reveals (and there are a
lot of them) is that most of the energy of a GPU is spent moving data
each way across the L2/L1 cache barrier; and secondly, much of the time
(if doing software-only rendering) you need several instruction cycles
where in a hardware design you issue one instruction and a separate
pipeline takes over (see videocore-iv below)

* chiselgpu was an additional effort by jeff to create the absolute
minimum required tile-based "triangle renderer" in hardware, for
comparative purposes in the nyuzi raster engine research. he pointed
out to me that synthesis of such a block would actually be *enormous*,
despite appearances from how little code there is in the chiselgpu
repository. in his paper he mentions that the majority of the time
when such hardware-renderers are deployed, the rest of the GPU really
struggles to keep up feeding the hardware-rasteriser, so you
have to put in multiple threads, and that brings its own problems.
it's all in the paper, it's fascinating stuff.

* gplgpu was done by one of the original developers of the "Number
Nine" GPU, and is based around a "fixed function" design and as such
is no longer considered suitable for use in the modern 3D developer
community (they hate having to code for it), and its performance would
be *really* hard to optimise and extend. however in speaking to jeff,
who analysed it quite comprehensively, he said that there were a large
number of features (4-tuple floating-point colour to 16/32-bit ARGB
fixed functions) that have retained a presence in modern designs, so
it's still useful for inspiration and analysis purposes. you can see
jeff's analysis here [7]

* an extremely useful resource has been the videocore-iv project [8]
which has collected documentation and part-implemented compiler tools.
the architecture is quite interesting, it's a hybrid of a
Software-driven Vector architecture similar to Nyuzi plus
fixed-functions on separate pipelines such as that "take 4-tuple FP,
turn it into fixed-point ARGB and overlay it into the tile"
instruction. that's done as a *single* instruction to cover i think 4
pixels, where Nyuzi requires an average of 4 cycles per pixel. the
other thing about videocore-iv is that there is a separate internal
"scratch" memory area of size 4x4 (x32-bit) which is the "tile" area,
and focussing on filling just that is one of the things that saves
power. jeff did a walkthrough, you can read it here [10] [11]
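to make that concrete, here's a rough C sketch of what that
fixed-function step does (illustrative only: the real videocore-iv
instruction semantics differ in the details, and it covers several
pixels per issue):

    #include <stdint.h>

    /* illustrative sketch: pack a 4-tuple float colour (0.0..1.0) into
     * 32-bit ARGB and write it into a 4x4 x32-bit "tile" scratch area.
     * videocore-iv does the equivalent of the loop body as a *single*
     * instruction; nyuzi needs several instructions per pixel. */
    static uint32_t pack_argb(const float c[4]) {
        uint32_t a = (uint32_t)(c[0] * 255.0f + 0.5f) & 0xff;
        uint32_t r = (uint32_t)(c[1] * 255.0f + 0.5f) & 0xff;
        uint32_t g = (uint32_t)(c[2] * 255.0f + 0.5f) & 0xff;
        uint32_t b = (uint32_t)(c[3] * 255.0f + 0.5f) & 0xff;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    void fill_tile(uint32_t tile[4][4], const float colour[4][4][4]) {
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                tile[y][x] = pack_argb(colour[y][x]);
    }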

so on this basis i have been investigating a couple of proposals for
RISC-V extensions: one is Simple-V [9] and the other is a *small*
general-purpose memory-scratch area extension, which would be
accessible only on the *other* side of the L1/L2 cache area and *ONLY*
accessible by an individual core [or its hyperthreads]. small would
be essential because if a context-switch occurs it would be necessary
to swap the scratch-area out to main memory (and back).
general-purpose so that it's useful and useable in other contexts and
situations.

whilst there are many additional reasons - justifications that make
it attractive for *general-purpose* usage (such as accidentally
providing LD.MULTI and ST.MULTI for context-switching and efficient
function call parameter stack storing, and an accidental
single-instruction "memcpy" and "memzero") - the primary driver behind
Simple-V has been as the basis for turning RISC-V into an
embedded-style (low-power) GPU (and also a VPU).
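purely as an illustration of that "accidental memcpy" (C standing in
for instructions; the names and the width here are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    /* illustrative sketch: with a vectorised LD into a block of registers
     * followed by a vectorised ST, the whole inner loop below collapses
     * to just two instructions per chunk. VL is a hypothetical width. */
    enum { VL = 8 };

    void memcpy_sketch(uint64_t *dst, const uint64_t *src, size_t nwords) {
        uint64_t regs[VL];            /* stands in for a register block */
        while (nwords >= VL) {
            for (int i = 0; i < VL; i++) regs[i] = src[i];  /* "LD.MULTI" */
            for (int i = 0; i < VL; i++) dst[i] = regs[i];  /* "ST.MULTI" */
            src += VL; dst += VL; nwords -= VL;
        }
        while (nwords--) *dst++ = *src++;   /* scalar tail */
    }

"memzero" is the same shape, with the load replaced by a vectorised
splat of zero.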

one of the things that's lacking from RVV is parallelisation of
Bit-Manipulation. RVV has been primarily designed based on input from
the Supercomputer community, and as such it's *incredible*.
absolutely amazing... but only desirable to implement if you need to
build a Supercomputer.

Simple-V is therefore designed to parallelise *everything*: custom
extensions, future extensions, current extensions, current
instructions, *everything*. RVV, once it's been implemented in gcc
for example, would require heavy customisation to support e.g.
Bit-Manipulation: special Bit-Manipulation Vector instructions would
have to be added *to RVV*... all of which would need to AGAIN
go through the Extension Proposal process... you can imagine how that
would go, and the subsequent cost of maintaining gcc, binutils and
so on as a long-term preliminary fork (or, if the extension to RVV is
not accepted after all the hard work, even a permanent hard-fork).

in other words once you've been through the "Extension Proposal
Process" with Simple-V, it need never be done again, not for one
single parallel / vector / SIMD instruction, ever again.

that would include for example creating a fixed-function 3D "FP to
ARGB" custom instruction. a custom extension with special 3D
pipelines would, with Simple-V, not need to also have to worry about
how those operations would be parallelised.

this is not a new concept: it's borrowed directly from videocore-iv
(which in turn probably borrowed it from somewhere else).
videocore-iv calls it "virtual parallelism". the Vector Unit
*actually* has a 4-wide FPU for certain heavily-used operations such
as ADD, and a ***ONE*** wide FPU for less-used operations such as
RECIPSQRT.

however at the *instruction* level each of those operations,
regardless of whether it's heavily-used or less-used, *appears*
to be 16 parallel operations all at once, as far as the compiler and
assembly writers are concerned. Simple-V just borrows this exact same
concept and lets implementors decide where to deploy it, to best
advantage.
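
a rough C model of the concept (not real hardware semantics; the
numbers are just the videocore-iv ones from above):

    /* illustrative: the ISA-visible vector length is 16, but the hardware
     * steps through it internally at whatever lane width the implementor
     * chose for that operation: 4 lanes for ADD, 1 for RECIPSQRT, etc.
     * (lanes is assumed to divide VL evenly.) */
    #define VL 16

    void virtual_vec_add(float dst[VL], const float a[VL],
                         const float b[VL], int lanes) {
        /* one "instruction" as seen by the compiler / assembly writer;
         * VL/lanes hidden internal cycles */
        for (int i = 0; i < VL; i += lanes)
            for (int l = 0; l < lanes; l++)
                dst[i + l] = a[i + l] + b[i + l];
    }

the compiler only ever sees "vector ADD of length 16"; whether that
takes 4 cycles or 16 underneath is the implementor's business.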


> 2. If it’s a good idea to implement, are there any projects currently
> working on it?

i haven't been able to find any: if you do please do let me know, i
would like to speak to them and find out how much time and money they
would need to complete the work.

> If the answer is yes, would you mind mentioning the project’s name and
> website?
>
> If the answer is no, are there any special reasons why nobody has
> implemented it yet?

it's damn hard, it requires a *lot* of resources, and if the idea is
to make it entirely libre-licensed and royalty-free there is an extra
step required which a proprietary GPU company would not normally do,
and that is to follow the example of the BBC when they created their
own Video CODEC called Dirac [5].

what the BBC did there was create the algorithm *exclusively* from
prior art and expired patents... they applied for their own patents...
and then *DELIBERATELY* let them lapse. the way that the patent
system works, the patents will *still be published*, there will be an
official priority filing date in the patent records with the full text
and details of the patents.

this strategy, where you MUST actually pay for the first filing
otherwise the records are REMOVED and never published, acts as a way
of preventing and prohibiting unscrupulous people from grabbing the
whitepapers and source code and trying to patent details of the
algorithm themselves, just like Google did very recently [6].

hope that helps.

l.

[0] https://www.youtube.com/watch?v=7z6xjIRXcp4
[1] https://github.com/jbush001/NyuziProcessor/wiki
[2] https://github.com/asicguy/gplgpu
[3] https://github.com/jbush001/ChiselGPU/
[4] http://miaowgpu.org/
[5] https://en.wikipedia.org/wiki/Dirac_(video_compression_format)
[6] https://yro.slashdot.org/story/18/06/11/2159218/inventor-says-google-is-patenting-his-public-domain-work
[7] https://jbush001.github.io/2016/07/24/gplgpu-walkthrough.html
[8] https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-Programmers-Manual
[9] https://libre-riscv.org/simple_v_extension/
[10] https://jbush001.github.io/2016/03/02/videocore-qpu-pipeline.html
[11] https://jbush001.github.io/2016/02/27/life-of-triangle.html

Manili

Jun 23, 2018, 11:52:27 AM
to RISC-V HW Dev, manili....@gmail.com
Dear Luke,

Thanks a lot for the detailed reply.
Well, I had seen the MIAOW, GPLGPU and Nyuzi projects before, but the other ones were new to me. That brings some new questions to my mind:
1. I did not understand what's wrong with ChiselGPU, and why didn't you choose that project for sponsorship?
2. I did not find any HDL implementation of "VideoCore IV" in the link you mentioned. Is it legally possible to implement "VideoCore IV" from the documents Broadcom released?
3. Also, what do you think about Hwacha? Does it have the potential to become a GPU one day (with modifications, of course)?
4. What specs does this integrated GPU require (from the hardware and software aspects)?

Best regards,
Manili

Manili

Jun 23, 2018, 12:06:37 PM
to RISC-V HW Dev, manili....@gmail.com, david.lan...@o2s.ch
Hello David,

Thanks a lot for your invitation to cooperate.
1. What exactly are we going to do? Is there already a GPU under development in your company, or do you want to hire/cooperate with someone to develop it from scratch?
2. What does this cooperation require from both sides? What do you expect from me, and what could I expect from you?

I mean, would you mind being more clear about "If you like, we can cooperate"?

Best regards,
Manili

David Lanzendörfer

Jun 23, 2018, 1:51:20 PM
to Manili, RISC-V HW Dev
Hi Manili
> Thanks a lot for your invitation to cooperate.
> 1. What exactly are we going to do? Is there already a GPU under development
> in your company, or do you want to hire/cooperate with someone to develop it
> from scratch?
I'm open to either of those two options; we're right now developing the
manufacturing process and will need a GPU in 1.5 years or so.
> 2. What does this cooperation require from both sides? What
> do you expect from me, and what could I expect from you?
>
> I mean, would you mind being more clear about "If you like, we can cooperate"?
You could join our Mumble conference tomorrow at 9pm Hong Kong time on
murmur.lanceville.hk and discuss the recent status of the process and the EDA,
for starters.
Also, we can test all the different GPU projects and see which one can be
turned into a standalone IP core that can be synthesized to gate level, and
already start developing a Linux kernel driver and a Mesa/OpenGL driver so
that it can be used with OpenCL and Wayland/X11.
Because I've got quite a bit of experience in writing Linux kernel drivers, you
might like my help :-)

Cheers
David

Luke Kenneth Casson Leighton

Jun 23, 2018, 1:55:05 PM
to Manili, RISC-V HW Dev
On Sat, Jun 23, 2018 at 4:52 PM, Manili <manili....@gmail.com> wrote:
> Dear Luke,
>
> Thanks a lot for the detailed reply.

no problem.

> Well, I had seen the MIAOW, GPLGPU and Nyuzi projects before, but the other
> ones were new to me. That brings some new questions to my mind:
> 1. I did not understand what's wrong with ChiselGPU

it's not a GPU, it's a triangle rasteriser... only. also i don't
believe it does shading (i.e. you can't give it an image to rotate and
display), it does a single colour and that's all. it probably
wouldn't be that hard to enhance to add that capability, by someone
who knew what they were doing.

> and why didn't you
> choose that project for sponsorship?

without associated software (i.e. a port of OpenGL or a gallium3d
port [1]) and accompanying hardware (i.e. a full processor with the
rest of the functionality needed such as a shader engine) it's
completely useless.

the project i am dealing with is not one that can take huge
commercial risks: the basis of the project is to leverage libre / open
technology in order to *reduce* risk and cost, not *increase* risk and
cost.

sponsoring a team to create a libre / open commercially-viable GPU
would require a really, *REALLY* competent team to step forward, and
even then they would be competing against the "$250,000 GC800 plus
etnaviv" baseline.

now, if that team bolstered themselves financially and so on by taking
the initiative on a crowd-funding campaign just like frank did with
gplgpu, *great*!


> 2. I did not find any HDL implementation of "VideoCore IV" in the link you
> mentioned.

nope... you won't. it's proprietary. the only reason i mention it is
because the documentation's been released (because broadcom is under
extreme pressure to do so) and it's an extremely good design.

> Is it legally possible to implement "VideoCore IV" from the
> documents Broadcom released?

of course. it's just an API. broadcom might get a bit pissy because
they've mined the field with patents... which is why i outlined the
strategy utilised to good effect by the BBC. it totally nullifies any
efforts by existing patent holders.

why do you think apple dropped PowerVR? (btw if you ever thought
that microsoft was hated, *man* you ain't seen nothing: *everyone* -
hardware engineers, users, software developers - HATES powervr with a
vengeance. the sooner that company goes under the better). apple
dropped PowerVR *literally* the day that a key imgtec patent on tiling
expired :)

> 3. Also, what do you think about Hwacha? Does it have the potential to become
> a GPU one day (with modifications, of course)?

hwacha i left out because it's an oddity. there's something going
on there. the only source code that i could find was some header
files. no HDL, no compiler toolchain, no simulators, *nothing*.
there are some published papers, some slides and... nothing.

so i read the papers, and i noticed something at the end, "This work
sponsored by NVIDIA".

... ah :)

> 4. What specs does this integrated GPU require (from the hardware and
> software aspects)?

caveat: i'm in no way a 3D software rendering expert, however the
likely easiest route - which would save vast amounts of time and
effort *not* having to write vast amounts of code - would be to target
gallium3d [1].

in other words if the hardware was deliberately designed to be easily
compatible with gallium3d, even to the extent of making a gallium3d
software library easy to write, then that saves having to
write an OpenGL library and so on.

the internals: i know now that it's not just about having a Vector
Unit, you need a z-buffer (or for the Vector Unit to efficiently do
something like it), you need a small local memory area ("tile
buffer"), you need an FP-to-int pixel converter instruction, and many
many other little things.
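
for illustration, the z-buffer part is conceptually just a per-pixel
depth comparison. a rough C sketch (the types and names here are mine,
not from any real GPU):

    #include <stdint.h>

    /* illustrative: classic z-buffer test. a fragment is written only if
     * it is nearer than whatever is already at that pixel. a GPU needs
     * this (or a vector-unit equivalent) at full pixel rate. */
    typedef struct {
        int w, h;
        float    *depth;   /* w*h entries, initialised to "far" (e.g. 1.0) */
        uint32_t *colour;  /* w*h ARGB pixels */
    } framebuffer_t;

    void plot(framebuffer_t *fb, int x, int y, float z, uint32_t argb) {
        int idx = y * fb->w + x;
        if (z < fb->depth[idx]) {      /* depth test: nearer wins */
            fb->depth[idx]  = z;
            fb->colour[idx] = argb;
        }
    }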

beyond that... really... i'm honestly not the best person to ask.
i've done a lot of research, talked to jeff bush a lot over the past
few months (he's a really nice guy, extremely knowledgeable... and
busy). i know enough to know that *wow* this requires some seriously,
seriously knowledgeable people with heavy-duty domain expertise.

oh, i just remembered another one: ORGFX [2]. however, again it's a
fixed-function design. you pass it a set of triangles, it blats them
onto a framebuffer. not optimised - or optimiseable - at all. no
shader capability. still very cool... just not nearly enough to base
a commercial product on.

l.

[1] https://www.freedesktop.org/wiki/Software/gallium/
[2] https://opencores.org/project/orsoc_graphics_accelerator

Manili

Jun 23, 2018, 9:28:30 PM
to RISC-V HW Dev, manili....@gmail.com
Dear Luke,

I know you spent a lot of time answering my questions, so thanks a lot for the detailed replies. They were so useful and informative.

Best regards,
Manili

P.S. It seems that Jeff Bush is the best candidate for the $250,000 (if you don't want to buy a GC800). Isn't he? :D

Manili

Jun 23, 2018, 9:48:34 PM
to RISC-V HW Dev, manili....@gmail.com, david.lan...@o2s.ch
Hello David,

Unfortunately I do not live in China or Hong Kong, so I can't join the conference.
However, there are also two other problems with this cooperation:
1. Unfortunately you did not mention what kind of person you need for this job/cooperation, and I guess the people involved should have a lot of expertise. I'm only a HW engineering master's student with little to no industrial experience.
2. As I mentioned before, I do not live in China or Hong Kong, and at this time I can't change my current location. So the only way to cooperate is remotely, which is not acceptable for either side, I think.
BTW, thanks again for the invitation. I wish you and your team all the best in the project.

Best regards,
Manili

Luke Kenneth Casson Leighton

Jun 24, 2018, 12:18:30 AM
to Manili, RISC-V HW Dev
On Sun, Jun 24, 2018 at 2:28 AM, Manili <manili....@gmail.com> wrote:
> Dear Luke,
>
> I know you spent a lot of time answering my questions, so thanks a lot for
> the detailed replies. They were so useful and informative.

no problem

> P.S. It seems that Jeff Bush is the best candidate for the $250,000 (if you
> don't want to buy a GC800). Isn't he? :D

:)

i asked... he's busy.

l.

David Lanzendörfer

Jun 24, 2018, 4:31:29 AM
to Manili, RISC-V HW Dev
Hi Manili
> Unfortunately I do not live in China or Hong Kong, so I can't join the
> conference. However, there are also two other problems with this cooperation:
No problem. One of the owners of the company (he recently got 8% of it) lives
in Germany, and we can only talk over Mumble.

> 1. Unfortunately you did not mention what kind of person you need for
> this job/cooperation, and I guess the people involved should have a lot of
> expertise. I'm only a HW engineering master's student with little to no
> industrial experience.
Right now we're... well... not exactly flush with cash. I live on half the
minimum salary per month (half the salary of a pizza delivery guy) here right
now, until we've manufactured and sold our first chip.
So hiring someone else is out of the question at the moment.
What I meant by cooperation is actually just using Mumble
(https://www.mumble.com/mumble-download.php) in order to work in tandem and get
things done quicker...

> 2. As I mentioned before, I do not live in China or Hong Kong, and at this
> time I can't change my current location. So the only way to cooperate is
> remotely, which is not acceptable for either side, I think. BTW, thanks again
> for the invitation. I wish you and your team all the best in the project.
As I mentioned before, the conference is remote, using Mumble:

We meet on murmur.lanceville.hk at 9pm Hong Kong time (you've got to look up
what time that is in your time zone).
Everyone who would like to contribute is welcome to join and discuss how to
collaborate and contribute.
Essential contributors can then get a job at our company (if they so wish, of
course) as soon as we generate revenue, but first we require proof of work
on GitHub :-)

Cheers
David

cobby...@gmail.com

Jul 5, 2018, 4:04:17 PM
to RISC-V HW Dev
We are currently working on a RISC-V based GPU, using our massively multithreaded multicore architecture.

The initial version is focused on running OpenCL / OpenMP for GPGPU and AI apps, but we are already adding additional custom instructions for graphics and coprocessors, and follow-up releases will support OpenGL ES too.

Check our progress at http://www.tensorfive.com

Thanks,

Cobby

Luke Kenneth Casson Leighton

Jul 5, 2018, 4:17:55 PM
to cobby...@gmail.com, RISC-V HW Dev
On Thu, Jul 5, 2018 at 9:04 PM, <cobby...@gmail.com> wrote:

> We are currently working on a RISC-V based GPU, using our massively
> multithreaded multicore architecture.

that's fantastic! is it libre-licensed? (the HDL for the main
core)? if not, how much money would your company like to ask for to
*make* it libre-licensed? do contact me privately, off-list.

> The initial version is focused on running OpenCL / OpenMP for GPGPU and AI
> apps, but we are already adding additional custom instructions for graphics
> and coprocessors, and follow-up releases will support OpenGL ES too.

one option to consider is not to put any actual "effort" into
developing any kind of special OpenGL / OpenGL ES software at all:
it'll take at least 2 years. i notice that the website says that you
have LLVM. well... if you have LLVM up and running, why not simply
compile up gallium3d-llvm alongside the absolutely standard libmesa3d
*software-only* library, and see what happens?

traditional designs of GPUs have a bit of a problem: the instruction
set is completely alien to the main processor. it may sound obvious
but duh, you can't execute GPU instructions on a totally different
CPU.

to work round that, traditional proprietary GPUs are forced to
communicate via shared memory. *EVERY* single frickin OpenGL
operation has to pack up data on the CPU, communicate in some form of
synchronous way to the GPU, tell it to "do work", wait for the GPU to
say "ok that's done" before the next work cycle can be completed.

but... what if the CPU *was* the GPU? now the data doesn't need to
be "transferred": threads just simply execute GPU instructions (custom
instructions) on the data *right there and then*.
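
in rough C pseudo-form the contrast looks like this (every helper and
type here is hypothetical, purely to show the shape of the two models):

    #include <stddef.h>

    /* hypothetical types and helpers, for illustration only */
    typedef struct { float x, y, z; } vert_t;
    typedef struct { vert_t v[3]; } triangle_t;
    typedef struct cmd_queue cmd_queue_t;
    typedef struct framebuffer framebuffer_t;

    void pack_into_shared_memory(cmd_queue_t *q, const triangle_t *t, size_t n);
    void ring_doorbell(cmd_queue_t *q);        /* tell the GPU "do work"   */
    void wait_for_completion(cmd_queue_t *q);  /* stall until it says done */
    void rasterise(framebuffer_t *fb, const triangle_t *t);

    /* traditional split CPU/GPU: every batch is a round-trip through
     * shared memory plus synchronisation overhead. */
    void draw_split(cmd_queue_t *q, const triangle_t *tris, size_t n) {
        pack_into_shared_memory(q, tris, n);
        ring_doorbell(q);
        wait_for_completion(q);
    }

    /* unified CPU-is-GPU: the same thread executes the (custom) GPU
     * instructions on the data right there and then: no packing, no
     * doorbell, no waiting. */
    void draw_unified(framebuffer_t *fb, const triangle_t *tris, size_t n) {
        for (size_t i = 0; i < n; i++)
            rasterise(fb, &tris[i]);
    }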

i presume that's the approach that you're taking, cobby? in which
case, you don't need to expend any effort at all. if the LLVM port for
the parallel architecture that your company has developed is mature
enough, you should literally be able to follow the instructions here
and have something working and ready to begin investigating
optimisation and profiling within a few hours:
https://mesa3d.org/llvmpipe.html

l.

Jacob Lifshay

Aug 1, 2018, 8:21:08 AM
to RISC-V HW Dev, cobby...@gmail.com

If anyone's interested, I'd be very interested in working on an open-source RISC-V based GPU for a stipend. I partially completed a software-rendered implementation of Vulkan for GSoC 2017: https://github.com/kazan-3d/kazan
I've also written a very simple RV32I implementation that I ran a 3D game on: https://github.com/programmerjake/rv32

I've been studying electrical and computer engineering at Walla Walla University and am currently taking time off due to lack of funds.

Jacob Lifshay

Luke Kenneth Casson Leighton

Aug 1, 2018, 9:37:24 AM
to Jacob Lifshay, RISC-V HW Dev, Cobby Mora
On Wed, Aug 1, 2018 at 1:21 PM, Jacob Lifshay <program...@gmail.com> wrote:

> If anyone's interested, I'd be very interested in working on an
> open-source RISC-V based GPU for a stipend.

i *might* be able to cover that, for a few months, in a few months
time. we can talk off-list.

> I partially completed a software-rendered implementation of Vulkan
> for GSoC 2017: https://github.com/kazan-3d/kazan I've also written
> a very simple RV32I implementation that I ran a 3D game on:
> https://github.com/programmerjake/rv32

nice!

Jecel Assumpção Jr

Aug 1, 2018, 7:15:27 PM
to RISC-V HW Dev


On Friday, June 22, 2018 at 6:24:16 PM UTC-3, Manili wrote:

Hello,


I’m new to the RISC-V ecosystem…

1. I’d like to know if it is possible/suitable to design a GPU based on RISC-V (i.e. a kind of extension of RISC-V)? 

2. If it’s a good idea to implement, are there any projects currently working on it?

      If the answer is yes, would you mind mentioning the project’s name and website?

Two things you might want to look at are the Simty project:

-- Jecel

Prof. Michael Taylor

Aug 1, 2018, 7:41:38 PM
to RISC-V HW Dev
Hi,

You can also check out Celerity, which is a parallel RISC-V fabric:



Our plan is not to replicate an NVIDIA or AMD GPU, but rather to make it relatively easy to move CUDA code to the architecture.

We have 16nm silicon and are interested in folks who are serious about building a software stack on top of it, and future generations of the chip.

M


Manili

Feb 7, 2023, 2:06:32 AM
to RISC-V HW Dev
Hi all,
I'm really wondering about any updates regarding RISC-V / libre GPUs since 2018.
Do we have any working GPU right now?
Recently I saw the following project; what do you think?
On Thursday, August 2, 2018 at 4:11:38 AM UTC+4:30 prof.taylor wrote:
Hi,

You can also check out Celerity, which is a parallel RISC-V fabric:



Our plan is not to replicate an NVIDIA or AMD GPU, but rather to make it relatively easy to move CUDA code to the architecture.

We have 16nm silicon and are interested in folks who are serious about building a software stack on top of it, and future generations of the chip.

M

On Wed, Aug 1, 2018 at 4:15 PM, Jecel Assumpção Jr <je...@merlintec.com> wrote:


On Friday, June 22, 2018 at 6:24:16 PM UTC-3, Manili wrote:

Hello,


I’m new to the RISC-V ecosystem…

1. I’d like to know if it is possible/suitable to design a GPU based on RISC-V (i.e. a kind of extension of RISC-V)? 

2. If it’s a good idea to implement, are there any projects currently working on it?

      If the answer is yes, would you mind mentioning the project’s name and website?


Two things you might want to look at are the Simty project:



-- Jecel
