Does RISC-V privilege some programming languages over others, or constrain future PL design?


Joe Duarte

Jul 19, 2018, 3:45:25 PM
to RISC-V ISA Dev
Hi all,

I'm a social scientist (formerly a QA manager in the software industry) working on various questions around programming language design with respect to learnability, user experience, and impact on choice of programming as a major/career. I have a strong interest in helping design new programming languages over the next 20 years or so, based on all manner of empirical research.

I've been aware of, and excited by, RISC-V for years, and was just struck by the question of whether its design, semantics, assumptions, or other characteristics favor – or implicitly assume – particular programming languages. The obvious candidates would be C and C++ (or perhaps given its academic roots, Haskell and Scala, or given some of the designers' HPC backgrounds, Fortran).

I forget the source, but one developer wrote a widely-circulated post arguing that our programming paradigms are based on ancient 1970s minicomputers, I think the PDP-10 or -11. He was talking about C and its roots in those architectures, and how x86 might in turn have been shaped by C.

So I ask you, do you see any particular languages as more RISC-V-friendly than others? Do you see anything in RISC-V that assumes certain programming paradigms or idioms specific to certain PLs?

My broad bias or concern here is that I don't want to see C in 2030, or any of its kin. It would be tragic, from my perspective, if the designers of RISC-V had C in their bones to such an extent that it implicitly shaped RISC-V's design in ways that constrained the development of radically new and innovative programming languages in the coming years. Do you think this is plausible? I don't have the expertise to look at an ISA and situate it in the broad constellation of all plausible current and future programming languages and see how it meshes with some but not others. There are a few levels of abstraction here, obviously. How an ISA can be said to favor a particular PL is an interesting and complicated question that travels through different levels of abstraction and different constructs.

So what do you all think? And what constraints or costs, if any, might RISC-V impose on future programming languages?

Cheers,

Joe Duarte
PhD -- Social Psychology

Iztok Jeras

Jul 19, 2018, 4:13:11 PM
to Joe Duarte, RISC-V ISA Dev

On a generic level each universal machine can emulate any other universal machine, so there are no hard limitations.

Regarding C being a kind of native language: this has been true for almost all processors in the last 60 years. It was the only successful approach; other approaches were more or less failures (we studied some at university). So almost all modern programming languages are based on the same paradigm.

Recent advances in AI and deep learning might lead to new programming paradigms and languages. Quantum computers could lead to a third approach.

If there are any high-level-language-specific optimizations in a CPU, they probably offer only minor advantages; C would generally still be the fastest solution to most problems.

Regards,
Iztok


--
You received this message because you are subscribed to the Google Groups "RISC-V ISA Dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to isa-dev+u...@groups.riscv.org.
To post to this group, send email to isa...@groups.riscv.org.
Visit this group at https://groups.google.com/a/groups.riscv.org/group/isa-dev/.
To view this discussion on the web visit https://groups.google.com/a/groups.riscv.org/d/msgid/isa-dev/2670310b-d27f-46a5-a724-6ad294607305%40groups.riscv.org.

Samuel Falvo II

Jul 19, 2018, 4:45:32 PM
to Joe Duarte, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 12:45 PM, Joe Duarte <songof...@gmail.com> wrote:
> I forget the source, but one developer wrote a widely-circulated post
> arguing that our programming paradigms are based on ancient 1970s
> minicomputers, I think the PDP-10 or -11. He was talking about C and its
> roots in those architectures, and how x86 might in turn have been shaped by
> C.

Oh, there is *no* *question* that C influenced the design of the x86
architecture. Intel was perfectly happy with a segmented architecture
until the 80386 rolled out supporting a flat address space. That, in
turn, was due to customer demand, who were mainly developers wishing
to program in Pascal or C. (For all intents and purposes, C and
Pascal can be considered close enough to be lumped together.)

Had a proper object-oriented programming language become popular,
one which mapped one object = one segment (cf. GeoWorks Ensemble, which
used object-oriented assembly language, largely for lack of a
suitable programming language), the evolution of the x86 family would
have been *very* different indeed.

> So I ask you, do you see any particular languages as more RISC-V-friendly
> than others?

C. ALL other languages are lacking in some capacity (including C++).
That RISC-I's instruction set was derived by careful analysis of
programs that were written in C is telling. Ever since then, all CPU
architectures have used C as their instruction use benchmark. Despite
this, at least with respect to RISC-V, even plain C doesn't provide a
perfect mapping.

For example, look at how function pointers are implemented in C,
particularly in position-independent code. You need to load two
pointers before invoking a function: the first is a global pointer
register of some kind, and the second is the actual function pointer
itself. This is because the code receiving control flow doesn't know
if it's receiving it from local code or from foreign-code, so it's
left up to the callER (rather than the callEE) to load whatever
pointers refer to the global data of the callEE's module. Throw in
the GOT and PLT, and it becomes a royal mess; GOT and PLT wouldn't be
necessary if the underlying hardware was properly aware of module
boundaries and made switching between them effortless.

Some have tried to make such an architecture, by the way (e.g.,
NS32032). In these architectures, modules have first-class support,
and there are dedicated machine-level instructions to handle
inter-module control transfers, which makes explicit global pointer
management almost invisible even to the machine language programmer.
These architectures tend to be favored not by C, but by the likes of
Modula-2 or Oberon, languages which also support first-class modules.

> Do you see anything in RISC-V that assumes certain programming
> paradigms or idioms specific to certain PLs?

Indirectly, yes. RISC-V, being essentially optimized to make running
C code efficient and fast (even if only as an accident of history),
necessarily favors any programming paradigms and idioms that are
common as a result of the limitations of the C language family.

This implies that it's well suited to running imperative programs,
with explicit memory management (this is taken to the extreme: observe
RISC-V has no automated stack management instructions), and with the
assumption that memory protection is provided externally to the
program itself (e.g., as with a page-based MMU maintained by the
operating system on your behalf).

> of its kin. It would be tragic, from my perspective, if the designers of
> RISC-V had C in their bones to such an extent that it implicitly shaped

I'm afraid this is the nature of any kind of data-driven analysis. C
was already quite popular in the academic community when RISC concepts
were first being studied, and its compilers were readily amenable
to student exploration. PL/I, COBOL, Fortran, et al. were all
equally popular, but were closed or required expensive licenses to
use. So pegging the continued success of C-like languages on RISC
research is somewhat unfair.

That said, the nature of RISC wasn't *consciously* intended to favor
C. That C favors RISC and vice versa was an organic evolution of the
technologies. RISC favored C programs because that's what compiler
technology they had access to. C programs favored RISC because
optimizer technology favored the simpler, more exposed
micro-architecture that RISC provided. This created a positive
feedback loop. SSA intermediate representation can be thought of as
the theoretically perfect RISC, which is why its use is important
even for compilers targeting CISC architectures.

So, it's more correct to say that RISC(-V) today favors whatever
language you can compile down to an SSA representation or its logical
equivalent (e.g., continuation-passing style). But, for now, that
means C-like languages. (I should note that many have reported
excellent results with the ML-family of languages, which these days
would also include Rust. So while language evolution is perhaps
slower than you'd like, it *is* happening.)

I would strongly suspect that as usage patterns change and evolve, the
available instructions that make up a RISC processor will change. For
example, x86 architecture eventually acquired a handful of
instructions to make operating system calls *much* faster than the
old-style INT instruction. It took several decades, but it came.
Similar evolution can be expected to happen with any RISC platform.

> levels of abstraction here, obviously. How an ISA can be said to favor a
> particular PL is an interesting and complicated question that travels
> through different levels of abstraction and different constructs.

Or not.

I am also a Forth programmer, and part of my personal hobby involves
making stack-architecture processors that run competitively against
most RISCs when running equivalent C code for a given program (e.g.,
https://github.com/sam-falvo/S64X7). These are small processor
designs, and they're tightly coupled to the underlying implementation
technology. Stack CPUs are *great* for FPGA implementation and, with
proper design, for ASICs as well (witness the GreenArrays GA144
chip).

However, while using but a tiny, tiny, tiny fraction of the logic that
even a minimal RISC-V would use, it's clear that a stack CPU would
behave very poorly when running C code (at least without extensions
to the instruction set that enable it to perform better).

There are two approaches to making Forth code run fast on RISC. The
first is to run your static program representation through a compiler
to emit native code. You can use the usual compile-to-SSA, optimize,
translate, and assemble approach in order to get high performance
output. I've done this with my toy compiler, BSPL. Even then,
though, it's still more limited because it's a static representation
of the most likely usage profile for that code. A real stack CPU
doesn't care, for example, how many layers of parameters are on the
evaluation stack -- it's designed to handle arbitrary depth by its
nature. A RISC-V equivalent, however, *MUST* always expect the top of
stack to be in a known register (say, A0) ahead of time. This
requires a ton of "stack manipulation" instructions in the RISC-V code
(to canonicalize the stack representation), which is exactly the
crime the RISC aficionados level against those in favor of
stack-architecture processors.

It turns out you can do the reverse as well: take C, compile it to
SSA, and then use a bottom-up synthesis of stack code to produce
hopefully efficient stack code that can run that program. The result
is a lot better than trying to shoe-horn everything onto the stack, to
be sure; but it'll never run as quickly as native Forth.

The second approach is to support running different work-loads
*dynamically* rather than statically, and this involves adding
dedicated instructions to support different workload profiles. Just as
a Forth CPU would need C-compatible ISA extensions to efficiently
support C code, so too a RISC-V processor would need dedicated stack
instructions to *really* support the best possible run-time
efficiencies when running Forth(-like) code.

So, yes, RISC definitely favors C. And stack CPUs definitely favor
Forth(-like) languages. That should be no surprise, and to me at
least, is anything but complex. It's actually rather patently
obvious. ;)

--
Samuel A. Falvo II

Allen Baum

Jul 19, 2018, 6:05:32 PM
to Samuel Falvo II, Joe Duarte, RISC-V ISA Dev
I thought the original x86 architecture was more influenced by Pascal. Steve Glanville (this is a 40-year-old memory, so I may have it wrong) gave a pitch to Apple for the Lisa project about how the 8086 was perfect for supporting Pascal. Woz decided to try building a bit-slice Pascal machine instead, and by the time they gave up, the 68000 was available.

-Allen

Ray Van De Walker

Jul 19, 2018, 7:18:48 PM
to RISC-V ISA Dev

Yes, it does. Re new approaches…

The biggest break I’ve seen with the “automate an abacus” approach to computing is delta-coded calculation.

See Zrilic’s “Circuits and Systems Based on Delta Modulation”

https://www.amazon.com/gp/product/3540237518/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

He has circuits that do all the math operations.  And, it’s extremely parallel, extremely low-power, digital and fits into modern CMOS logic.

It’s well outside the Von Neumann cycle/Harvard cycle/C language paradigms.

And, of course there is no von Neumann bottleneck.

(RISC is just a memory system with a small automated abacus attached on the side.)

Samuel Falvo II

Jul 19, 2018, 9:21:47 PM
to Ray Van De Walker, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 4:18 PM, Ray Van De Walker
<ray.van...@silergy.com> wrote:
> Yes, it does. Re new approaches…

Not sure to whom you are replying. I'm assuming you're replying not
to Iztok, but to Joe?

> The biggest break I’ve seen with the “automate an abacus” approach to
> computing is delta-coded calculation.

Are there any other references you can provide that don't require
me to purchase a large book?

I did some Google searching, and this appears to be a technique which
sits exclusively in the DSP domain of applications; put another way,
it does not look like it can be applied to general purpose computing
applications. But, if there's prior art to the contrary, I'd love to
see it!

To that end, I do find delta-coding fascinating. Thank you for
mentioning it, and I'll be researching this further.

> And, of course there is no von Neumann bottleneck.

Oh, but there is; it's just that FL and FP (and the litany of
functional languages that have come since) have been shown, perhaps
to Backus's chagrin, not to be viable workarounds. ;D

> (RISC is just a memory system with a small automated abacus attached on the
> side.)

Love this analogy. :-)

Allen Baum

Jul 19, 2018, 9:36:16 PM
to Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
If you want to look at a different approach - not RISC, not CISC - you should look into the Mill architecture.
millcomputing.com has links to many video talks, given in multiple venues over the years, describing aspects of the architecture. Its chief architect is a compiler writer, and he does not shirk the features of modern programming languages.


Bruce Hoult

Jul 19, 2018, 10:14:25 PM
to Allen Baum, Samuel Falvo II, Joe Duarte, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 3:05 PM, Allen Baum <allen...@esperantotech.com> wrote:
I thought the original x86 architecture was more influenced by Pascal. Steve Glanville (this is a 40-year-old memory, so I may have it wrong) gave a pitch to Apple for the Lisa project about how the 8086 was perfect for supporting Pascal. Woz decided to try building a bit-slice Pascal machine instead, and by the time they gave up, the 68000 was available.

I don't think either Pascal or C was well enough established to be an influence in 1976. However, Algol *was*, and was a definite influence on the direction of enhancements over the 8080 -- and Pascal is very, very close to Algol in what it demands of an implementation: any given function has a fixed-size argument list, a fixed-size stack frame, and a static link (or global "display") to update. This makes it possible to design a single instruction that cleans up the stack frame and arguments, restores the static link, and returns.

Of course Dave Patterson and others soon showed that this doesn't actually help speed (though it does help program size).

Perhaps ironically to the subject of this thread, early C was actually quite hostile to this! It wasn't until ANSI C that C was able to use the instructions and conventions designed for "Pascal" (really Algol). In K&R C all functions were effectively varargs. Remember having to push C arguments right to left so that things like printf format strings were in a known location relative to the stack pointer? And only the caller knew how many bytes of arguments had been pushed.

Having enough registers that (almost all) functions could pass arguments in registers instead of the stack was a huge advance that 8086 missed out on, as was optimising for leaf functions by saving the return address to a register instead of the stack, and having a set of registers the called function is entitled to stomp on without having to save and restore them. Missing all that is, in retrospect, another error of the VAX, which did have enough registers. (and PDP11 used an arbitrary general register as a link register, just like RISC-V)

Oops enough rambling ... back to the question...

One thing that RISC-V doesn't give any help for is supporting languages that want dynamic typing, with a variable that could contain an integer, a floating-point value, or a pointer. Such help might benefit languages such as Lisp/Scheme and Javascript, or languages that, while not dynamically typed, rely heavily on tagged "unions", which are effectively the same thing.

SPARC tried to provide some support for this, but it's not clear it was useful.

I'm sure someone will push for this as a RISC-V extension, so I'll give my opinion on it:

1) in an interpreted implementation, it helps but not by enough to make a difference.

2) in a compiled implementation, it's possible to know the actual type statically enough of the time (and in the important places) that the value can be stored and used unboxed. I know this from personal experience with the CMU "Gwydion" Dylan compiler, which I used heavily from 1998 through the early 2000s, winning prizes in the 2001, 2003, and 2005 ICFP programming contests with it. Gwydion provided full machine-word integers and pointers (*very* important for inter-operation with C), and when the type was not known it used a boxed form that took *two* machine words. On the face of it that seems hugely inefficient, but in practice it outperformed Harlequin Dylan, which used a tagged-word scheme.

Bruce Hoult

Jul 19, 2018, 10:24:05 PM
to Allen Baum, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 6:36 PM, Allen Baum <allen...@esperantotech.com> wrote:
If you want to look at a different approach - not RISC, not CISC - you should look into the Mill architecture.
millcomputing.com has links to many video talks, given in multiple venues over the years, describing aspects of the architecture. Its chief architect is a compiler writer, and he does not shirk the features of modern programming languages.

The Mill is extremely interesting, quite radical but it looks as if it should work, and Ivan has invited me to join them in the past.

1) The effort has been going for considerably longer than RISC-V has.
2) So far there is as far as I know not even one single FPGA implementation (even internally), let alone a tape-out.
3) It has been unable to attract (or has eschewed) funding. At least the offer that was made to me in 2014 was for pure sweat equity.

There are some very good people involved (Terje Mathisen, for example) but they seem to be treating it as a fun ongoing hobby project with no urgency whatsoever to actually ship anything.

Samuel Falvo II

Jul 19, 2018, 10:34:48 PM
to Bruce Hoult, Allen Baum, Joe Duarte, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 7:14 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> Perhaps ironically to the subject of this thread, early C was actually quite
> hostile to this! It wasn't until ANSI C that C was able to use the

I'm going to ask to see some citations for this, because the argument
order for Pascal vs C is entirely arbitrary for functions with finite
argument lists. This is evidenced by many C compilers which supported
Pascal calling conventions natively (e.g., MPW C for Macintosh, and
early Microsoft C compilers for Win16 application development). C's
predecessor, BCPL, also supported varargs, yet pushed arguments from
left to right, not right to left, so it is entirely possible to
support varargs that way as well (IIRC, BCPL also sent an implicit
parameter indicating how many arguments were actually pushed, since
that is always known at compile-time). When push comes to shove, the
C "VM" and the Algol/Pascal "VM" really are far more similar than
different. Feature-wise, the C "VM" is a strict and proper subset of
the Algol "VM".

Samuel Falvo II

Jul 19, 2018, 10:39:41 PM
to Bruce Hoult, Allen Baum, Ray Van De Walker, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 7:24 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> There are some very good people involved (Terje Mathisen for example) but
> they seems to be acting like a fun ongoing hobby project with no urgency
> whatsoever to actually ship anything.

To be fair, if you cannot attract funding AND you want to realize your
goal still, then this is all you can do. :(

It's sad that Mill cannot attract financing. I am fascinated by its
architecture, and would love to see a prototype some day.

However, I do have to admit, if I understand how the Belt works
correctly, that it would be very hostile to an FPGA implementation
(it seems to me that the dynamic addressing of registers holding belt
values would take up a lot of FPGA resources).

Allen Baum

Jul 20, 2018, 12:28:01 AM
to Samuel Falvo II, Bruce Hoult, Ray Van De Walker, RISC-V ISA Dev
They do have some investors, but not enough to field an entire development team.
The main issue is not wanting VCs to get involved.

Bruce Hoult

Jul 20, 2018, 12:30:50 AM
to Samuel Falvo II, Allen Baum, Ray Van De Walker, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 7:39 PM, Samuel Falvo II <sam....@gmail.com> wrote:
On Thu, Jul 19, 2018 at 7:24 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> There are some very good people involved (Terje Mathisen for example) but
> they seems to be acting like a fun ongoing hobby project with no urgency
> whatsoever to actually ship anything.

To be fair, if you cannot attract funding AND you want to realize your
goal still, then this is all you can do.  :(

It's sad that Mill cannot attract financing.  I am fascinated by its
architecture, and would love to see a prototype some day.

It seems surprising that *someone* like Google or Intel or Microsoft or DARPA isn't interested in throwing a few million dollars at it, as a punt. They all waste plenty of money on much less promising things. Or, maybe Ivan & crew haven't wanted to give up any ownership.
 
However, I do have to admit, if I understand how the Belt works
correctly, that it would be very hostile to an FPGA implementation
(it seems to me that the dynamic addressing of registers holding belt
values would take up a lot of FPGA resources).

The Belt isn't a literal data structure any more than the register file is on a conventional OOO. If anything, you can get away with a smaller belt than conventional registers AND it eliminates the need to actually write results back to architectural registers. If anything still needed is about to fall off the belt then the compiler generates explicit instructions to save it to the "scratchpad".

Samuel Falvo II

Jul 20, 2018, 12:45:43 AM
to Bruce Hoult, Allen Baum, Ray Van De Walker, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 9:30 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> The Belt isn't a literal data structure any more than the register file is
> on a conventional OOO. If anything, you can get away with a smaller belt
> than conventional registers AND it eliminates the need to actually write
> results back to architectural registers.

Yes, it's similar in nature to a transport-triggered architecture
processor, where you can feed results from one logical unit directly
into another without affecting storage registers. However, in a TTA,
the addresses of the various registers remain fixed. In a Mill, the
addresses change dynamically as computations proceed.

It's the dynamic address decoding and the interconnects between them
that I'm referring to. It seems like it'd require more resources than
a simple TTA.

Bruce Hoult

Jul 20, 2018, 12:48:05 AM
to Samuel Falvo II, Allen Baum, Joe Duarte, RISC-V ISA Dev
On Thu, Jul 19, 2018 at 7:34 PM, Samuel Falvo II <sam....@gmail.com> wrote:
On Thu, Jul 19, 2018 at 7:14 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> Perhaps ironically to the subject of this thread, early C was actually quite
> hostile to this! It wasn't until ANSI C that C was able to use the

I'm going to ask to see some citations for this, because the argument
order for Pascal vs C is entirely arbitrary for functions with finite
argument lists.  This is evidenced by many C compilers which supported
Pascal calling conventions natively (e.g., MPW C for Macintosh, and
early Microsoft C compilers for Win16 application development).

You can implement anything in multiple ways. I'm talking about the conventional implementations, which tended to be the lowest-overhead ways for typical cases. The existence of support for *non*-native calling conventions such as _stdcall (where "standard" means "Pascal") and _fastcall (registers), as opposed to _cdecl, which was still needed for varargs functions, supports my case.

 
C's predecessor, BCPL, also supported varargs, yet pushed arguments from
left to right, not right to left, so it is entirely possible to
support varargs that way as well (IIRC, BCPL also sent an implicit
parameter indicating how many arguments were actually pushed, since
that is always known at compile-time).

Yes. And the arg count had to be the *last* argument pushed, regardless. Adding it imposes a significant size and speed penalty on short functions with few arguments.
 

  When push comes to shove, the
C "VM" and the Algol/Pascal "VM" really are far more similar than
different.  Feature-wise, the C "VM" is a strict and proper subset of
the Algol "VM".

I agree with that. Pascal and C are the same language, with a very few exceptions, which were the things I mentioned previously: argument passing and nested environments.
 

Bruce Hoult

Jul 20, 2018, 12:55:04 AM
to Samuel Falvo II, Allen Baum, Ray Van De Walker, RISC-V ISA Dev
Those constantly changing belt positions only need to be present in instruction decode. The only thing that needs to be shifted is a table mapping current belt position to actual (fixed) result number. The execution engine doesn't need to know anything about the Belt.

Joe Duarte

Jul 20, 2018, 2:10:34 AM
to bruce...@sifive.com, sam....@gmail.com, allen...@esperantotech.com, ray.van...@silergy.com, isa...@groups.riscv.org
Yes, the Belt is a programming model, not a physical instantiation or design. One Mill engineer said that the hardware is basically a DSP. This is off topic, but it was my topic and I've enjoyed everything I've read so far. As for the broader speculation about Mill, the company: they took a patent pause – that is, they focused their efforts on writing patents, making sure they were in a defensible position moving forward. In 2013, America switched to a first-to-file system (which sucks) from first-to-invent. The Mill team seemed to be under external pressure at that point to get their patents in order, and I assume such pressure can only come from one place – funders.

I just noticed that they also have a completely new, better website, where in fact they list their patents along with all their other content, videos, etc (https://millcomputing.com/). To me, patents + new website = forward movement. Though yes, it has seemed slow.

The confusion I have re: Mill Computing and also Rex Computing (which is building a new CPU architecture, and presumably an ISA; http://www.rexcomputing.com/) is that they seem to be fairly open about their ideas and plans in a universe where Intel exists. And ARM. And AMD, etc., etc. I would assume that anything Mill or Rex can do, Intel and ARM can do better and faster, because these startups are just a few dozen people at most. Couldn't Intel punt, as someone said, and throw 50 veteran engineers at the Mill or Rex paradigms to see if they delivered (before Mill filed their patents, a span of several years)?

Back to the original topic, I'm surprised to learn that C and Pascal are viewed as almost the same language. Pascal seems so much cleaner in its syntax that it never occurred to me, but I'm only familiar with it superficially.

I have some questions:

1. Do functional languages require anything special from an ISA? Well, since functional languages presently exist and are compiled into binaries for mainstream ISAs, they must not require anything special. So, maybe a better question is would they benefit from consideration in an ISA's design? What sorts of ISA features would benefit functional languages (or their compilers)?

2. Does RISC-V have an orthogonal instruction set?

3. All present-day general purpose operating systems seem to be profoundly and permanently insecure. Does RISC-V make secure programming easier than extant ISAs? Agner Fog says that RISC-V doesn't include any "security features", as opposed to ForwardCom (http://www.forwardcom.info/comparison.html), but I'm not sure what role an ISA plays in security in general.

4. We seem to be heading toward a new computing paradigm of persistent memory née RAM (Intel's Optane and its successors and competitors, phase change memory, MRAM, etc.) How does this impact RISC-V, if at all? How should it impact programming language design, if at all?

5. Can RISC-V accommodate a logarithmic number system? (https://en.wikipedia.org/wiki/Logarithmic_number_system) They perform better than FP for some workloads, and might be easier for programmers to understand. Would it just be a coprocessor?

Cheers,

JD
 

Luke Kenneth Casson Leighton

Jul 20, 2018, 2:23:55 AM
to Joe Duarte, Bruce Hoult, Samuel Falvo II, Allen Baum, Ray Van De Walker, RISC-V ISA Dev
On Fri, Jul 20, 2018 at 7:10 AM, Joe Duarte <songof...@gmail.com> wrote:

> 3. All present-day general purpose operating systems seem to be profoundly
> and permanently insecure. Does RISC-V make secure programming easier than
> extant ISAs?

not as such. ok it's important to separate the fact that RISC-V has
a Foundation behind it, not a pathologically profit-maximising
corporation. thus the *Foundation* decided to do a... well it's not
open (it's a cartel / private club that requires giving up
unacceptable rights in order to join)... they decided to "publish the
results openly" [a simplification], which is a huge difference,
ultimately to most corporations it's the same thing *as* "open" but
it's really really psychologically important not to confuse the two.

so you can *get* the specification and the ISA etc. and thus it is
much easier and in fact actively encouraged to *do* security research
into secure programming: lowRISC are doing a security tagging model
(and associated Custom Extension), and i found out yesterday that the
RISE Group IIT Madras also have someone doing tagged memory that
allows hardware-level tracking of malloc and free.

so what the RISC-V Foundation have set up is by no means perfect, it
is however a massive leap and improvement over trying the same kind of
research involving the proprietary x86 or ARM instruction sets. even
OpenRISC1200, which is an *entirely* libre ISA, has not had as much
attention - or adoption - and consequently i would be very surprised
if a google search for "OR1200 secure programming research" came up
with any significant hits.

l.

Paolo Bonzini

unread,
Jul 20, 2018, 2:44:23 AM7/20/18
to Samuel Falvo II, Bruce Hoult, Allen Baum, Joe Duarte, RISC-V ISA Dev
On 20/07/2018 04:34, Samuel Falvo II wrote:
>> Perhaps ironically to the subject of this thread, early C was actually quite
>> hostile to this! It wasn't until ANSI C that C was able to use the
> I'm going to ask to see some citations for this, because the argument
> order for Pascal vs C is entirely arbitrary for functions with finite
> argument lists.

Pushing right to left is slightly less optimal indeed, especially for
old compilers that were just walking an abstract syntax tree. You have
to either:

- subtract the callee stack frame from the stack pointer, and then use
memory stores instead of pushes. These are bigger instructions on a
CISC architecture (and thus slower);

- invent rules like the C sequence points, so that you can visit the
abstract syntax tree right to left and blame the programmer if the
results are unexpected.

However, register-based calling conventions do not eliminate these
problems altogether; they only eliminate them for functions with a
small-enough number of arguments.

Paolo

Tommy Thorn

unread,
Jul 20, 2018, 1:10:58 PM7/20/18
to Joe Duarte, bruce...@sifive.com, sam....@gmail.com, allen...@esperantotech.com, ray.van...@silergy.com, isa...@groups.riscv.org
Back to the original topic, I'm surprised to learn that C and Pascal are viewed as almost the same language.

This is an ISA list so the lens is execution semantics.  From that point of view, all modern processors are machines designed to execute C and C-like semantics.  There's definitely a bias.  Different paradigms (inference [Prolog..], functional languages [O'Caml, SML..], lazy functional [Haskell, LML..], dynamic languages [Lisp, Smalltalk, Self, Javascript, ...]) can obviously be implemented [Turing Tarpit], but not as _efficiently_ as could be done with the same transistor/power budget. For an enlightening example of just how ASTOUNDINGLY much better you can do with a targeted architecture, look at the Reduceron papers [1,2].

All this aside, as the critical workloads shift, CPU architectures adapt.  Examples: SIMD arrived with multimedia (MMX, AVX, Neon, AltiVec, ...).  Better support for indirect branches has in part been forced by C++ virtual functions (and similar).  There was a brief attempt at supporting dynamic languages (e.g. SPARC's tagged TADD and register windows).  Today, everyone is scrambling to do better on machine learning.

Tommy

Allen Baum

unread,
Jul 20, 2018, 3:12:41 PM7/20/18
to Joe Duarte, Bruce Hoult, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
1. special functional language support: not a language expert, I'll let others give fact-based answers

2. Orthogonal instructions set: I think you need to define orthogonal very carefully - that's too ambiguous a term.

3. Security: there are ongoing efforts to make systems using it more secure. To say there are no built-in security features is a bit strange.
    Beyond the usual machine/supervisor/user and hypervisor modes and page-based permissions, there is an optional Physical Memory Protection feature, and even the debug infrastructure has an optional authentication mechanism. All of these are standard - but optional, so technically not always built in.
  But if you're asking about random number generators, crypto acceleration HW, etc., then the answer is still nuanced: either they are coming, or you can buy IP that does all that, designed for RISC-V, with root-of-trust support, code authentication, attestation, and all sorts of other buzzwords.
There is more than one vendor providing products like this.
  And that doesn't even include the tagged versions that are being worked on.

4. Interesting question. There are HW effects - instructions, cache hints, and access modes that take advantage of it - and there could be algorithm designs that take advantage of it (including OS-level services). But I don't see much in the way of programming-language impact. See #1 above and discount my opinion accordingly. Note that Intel did introduce support for persistent memories; some of it may be very specific to device characteristics, e.g. write vs. read times, read and write granularities, wear management (sort of like garbage collection...)
5. You are free to implement your own logarithmic functional unit extension - that's one of the architectural principles behind RISC-V.
  Getting it accepted as a standard extension and getting OS and compiler support is another question.
  It would have to provide pretty compelling advantages.

Rishiyur Nikhil

unread,
Jul 20, 2018, 3:32:37 PM7/20/18
to Allen Baum, Joe Duarte, Bruce Hoult, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
>    1. special functional language support: not a language expert, I'll let others give fact-based answers

The 1980s were the heyday of attempts to build special architectures
for functional programming languages (FPLs); none of them got too far.
The ACM conference FPCA (Functional Programming and Computer
Architecture) eventually morphed into today's ICFP (Intl. Conf. on
Functional Programming) as the computer architecture attempts ramped
down.

A general lesson was: never underestimate what can be achieved by good
compilers.  The early implementations of FPLs were quite naive (e.g.,
direct interpretation of combinator graph reduction).  Many of the proposed
architectures were direct HW implementations of these interpreters.
But over time, there were spectacular improvements in techniques to compile FPLs,
which underlie today's implementations of Haskell (ghc), SML, OCaml,
etc.  See for example papers on "Lazy ML (LML)" from Thomas Johnsson
and Lennart Augustsson at Chalmers Sweden, and "Spineless Tagless G Machine"
from Simon Peyton Jones and others at Glasgow (both of which addressed
efficient compilation of lazy FPLs); compilation using continuations in the Scheme
and ML communities, etc.

In the end, it's not clear that any of the proposed parallel
architectures of the 1980s would have been an improvement over such smart
compilation of FPLs for mainstream CPUs.  Nor that FPLs need any
special treatment compared to other managed languages.  This year's
ISCA has a paper by Maas et. al. about accelerating Garbage
Collection, which could benefit all managed languages, including FPLs.

Nikhil


--
You received this message because you are subscribed to the Google Groups "RISC-V ISA Dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to isa-dev+unsubscribe@groups.riscv.org.
To post to this group, send email to isa...@groups.riscv.org.
Visit this group at https://groups.google.com/a/groups.riscv.org/group/isa-dev/.

Tommy Thorn

unread,
Jul 20, 2018, 3:47:39 PM7/20/18
to Rishiyur Nikhil, Allen Baum, Joe Duarte, Bruce Hoult, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
[Disclaimer: I am so very much speaking on my own behalf, not representing any company].

Rishiyur,

your info is not up to date; see e.g. the Reduceron work from 2008-2010.

Generally all this work suffered from simple economics: no academic
has the resources afforded to the Intels of the world.  I needn't
list the heroic efforts that have gone into making sequential scalar code
run fast on a bleeding-edge process, but they far outpaced
academic efforts (except the Reduceron, which, on FPGA, came within a small
factor of a state-of-the-art Core 2).

CPUs are optimized to run existing workloads (benchmarks), and as long
as those are written in C and C-like languages they will not change.  Workloads
are in turn optimized to run well on existing hardware.

I find relief that in the last decade, especially the last five years, security
has finally become a business decision and pushed more code away
from the buffer-overflow-R-us world of C.  I see Rust, Go, and Swift as
some of the brighter lights at the end of the tunnel.  In particular, IIUC,
integer arithmetic in Swift traps on overflow.  This isn't cost-free in
RISC-V but should be.

Tommy



Rishiyur Nikhil

unread,
Jul 20, 2018, 4:41:21 PM7/20/18
to Tommy Thorn, Allen Baum, Joe Duarte, Bruce Hoult, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
Hi Tommy,

Yes, I'm aware of the Reduceron work, and know some of the principals.

Thanks,

Nikhil


Jacob Bachmeyer

unread,
Jul 20, 2018, 6:37:23 PM7/20/18
to Tommy Thorn, Rishiyur Nikhil, Allen Baum, Joe Duarte, Bruce Hoult, Samuel Falvo II, Ray Van De Walker, RISC-V ISA Dev
Tommy Thorn wrote:
> [...]
> I find relief that in the last decade, especially the last five years,
> security
> has finally become a business decision and pushed more code away
> for the buffer-overflow-R-us world of C.

Properly-written C does not have buffer overflows. Such problems *can*
be avoided, if the programmers put in the appropriate work and are
competent. (Neither of these are guaranteed in the typical modern
commercial environment, but note that the Free Software community seems
to have generally done better. Generally. These conditions are not
guaranteed there either. That hobbyists have a better security record
than commercial development is an indictment of management in the
well-known commercial development shops.)

> I see Rust, Go, and Swift as
> some of the brighter lights at the end of the tunnel. In particular,
> IIUC,
> integer arithmetic in Swift traps on overflow. This isn't cost-free in
> RISC-V but should be.

Trapping on integer overflow is not cost-free, period. A processor that
traps on any integer overflow (and there have been such designs) must
check every operation for an overflow result, and must be prepared to
trap on any arithmetic instruction. This may affect the processor's
critical path length and therefore increase the processor's cycle time.
In RISC-V, arithmetic instructions cannot trap under any circumstance
and this may allow for specific hardware optimizations, like avoiding
the overhead of tracking *epc for arithmetic instructions.
Further, some integer operations can be allowed to overflow or may even
be *expected* to overflow if the arithmetic is explicitly modulo
2^XLEN. Trapping on any integer overflow is a waste. Only some integer
operations are actually sensitive to overflow, such as determining sizes
of memory buffers or addresses, which a C compiler can detect as
arithmetic on size_t or any pointer type. Inserting additional
instructions to check for overflow only when needed makes sense.


-- Jacob

Samuel Falvo II

unread,
Jul 21, 2018, 1:12:30 AM7/21/18
to Jacob Bachmeyer, Tommy Thorn, Rishiyur Nikhil, Allen Baum, Joe Duarte, Bruce Hoult, Ray Van De Walker, RISC-V ISA Dev
On Fri, Jul 20, 2018 at 3:37 PM, Jacob Bachmeyer <jcb6...@gmail.com> wrote:
> Properly-written C does not have buffer overflows. Such problems *can* be
> avoided, if the programmers put in the appropriate work and are competent.

Sorry; nerve triggered.

This variation of the No True Scotsman fallacy doesn't contribute to any
solutions. Properly-written BASIC or COBOL with GOTOs can be
perfectly readable and maintainable too, if only people bothered to
exercise the discipline to learn the relevant design patterns and to
resist the temptation to write spaghetti code. Yet, today, structured
programming has dominated the more free-form GOTO-oriented languages.
Why? Because programmers couldn't be bothered to learn how to write
well-structured GOTO-oriented code. It got so bad that the computer
industry invented a term, the "software crisis", to describe one of
the first major problems in applied programming disciplines: a barrier
to software complexity that was seemingly impenetrable while
maintaining the ability to deliver software on-time and within budget.
And this was from the mainframe community, which revels in documented
processes and CMMI certifications!! If anyone was up to the task of
following rigorous disciplines and coding standards, it was the
mainframe community!

YES, what you say is correct; HOWEVER, the fact is, what you ask
requires discipline and people don't like to receive disciplinary
training. We'd have 1/10th as many programmers in the industry if
people were obligated to apply Hoare Triple analysis to their code to
make sure every precondition is met before executing some statements
which touches a buffer. From a quality vs quantity perspective, this
might not be a bad thing. From a purely economic stand-point,
however, it would be a total disaster.

Finally, history has shown that a properly selected set of
abstractions allows a person to think about bigger, more sophisticated
problems. This is because those abstractions offload and automate
the intellectual minutiae that go into keeping a standard of rigor.
Assemblers have displaced toggling switches at the front-panel for a
reason. Compilers have displaced assemblers for a reason. If a
programmer could be made to properly pay attention to details in C,
then that same programmer can be trained to write code in assembler
with a good macro package (after all, C is essentially just an infix
PDP-11 high-level assembler anyway). So why don't we all just write
in assembly language and just demand programmers provide kick-butt
macro packages instead?

> (Neither of these are guaranteed in the typical modern commercial
> environment, but note that the Free Software community seems to have
> generally done better. Generally. These conditions are not guaranteed
> there either. That hobbyists have a better security record than commercial
> development is an indictment of management in the well-known commercial
> development shops.)

I'd like to see some empirical data to support this. My experience in
both commercial and free software from reputable sources is that
buffer overflow bugs come at roughly identical rates.

I think the *perception* that free software is somehow better at this
persists because free software has faster *feedback loops* than
commercial industry does. E.g., Microsoft doesn't generally patch
Windows within days of discovering a crucial bug, as can sometimes
happen with Linux. On the other hand, when they do offer a patch, it
usually covers a larger set of issues than the more finely focused
Linux patches. Meanwhile, if you look at the issue tracker for
Mozilla Firefox, you'll find bugs which have remained open for
*years*, sometimes *decades*. Didn't OpenSSL have some critical
vulnerabilities not too long ago which have existed for years? I seem
to recall that being a thing.

> Trapping on integer overflow is not cost-free, period. A processor that
> traps on any integer overflow (and there have been such designs) must check
> every operation for an overflow result

Unless the type system can statically prove that overflow is impossible.

> length and therefore increase the processor's cycle time. In RISC-V,
> arithmetic instructions cannot trap under any circumstance and this may
> allow for specific hardware optimizations, like avoiding the overhead of
> tracking *epc for arithmetic instructions.

A brilliant example of where RISC-V favors C over newer languages like Swift.

> Trapping on any integer overflow is a waste. Only some integer operations

That's your opinion, which thankfully, not everyone shares.

> are actually sensitive to overflow, such as determining sizes of memory
> buffers or addresses, which a C compiler can detect as arithmetic on size_t
> or any pointer type.

Which in most C programs I either maintain or write, occurs at least
50% of the time. On a project I'm working on right now, a program
which uses SDL2 library to draw graphics, pointer arithmetic occurs at
least as frequently as non-pointer arithmetic. I've already run into
cases where I've gotten it wrong and have segfaulted many times.
Having some tooling in place to automate overflow detection would have
saved me a lot of time.

Madhu

unread,
Jul 21, 2018, 1:26:28 AM7/21/18
to Samuel Falvo II, Joe Duarte, RISC-V ISA Dev
Glad Samuel brought it up. The lack of module boundaries and recognition of
modules as first-class objects is a major problem, at least for us. This
stems from C's indifference to modules, and RISC-V reflects that. So we are
doing the obvious: enhance the RISC-V ISA to support languages that treat
modules as first-class objects.

We are trying to figure out whether Rust can be enhanced for this purpose,
since we need a language to write OSs. We use tagged ISAs and micro-VM-style
compartmentalisation a lot in our research, and naturally formal object
boundaries make life a lot easier.

Hope to release a very usable proto before Christmas. We may be able to
avoid ISA extensions via a CSR-type scheme. No definite conclusions yet,
but we do plan to ship this support in production parts, so this is not a
hobby project.



--
Regards,
Madhu

Joe Duarte

unread,
Jul 23, 2018, 1:36:00 AM7/23/18
to ma...@macaque.in, Samuel Falvo II, isa...@groups.riscv.org
Hi Madhu, can you define what you mean by "modules" in this context? I've seen "modules" pop up almost everywhere recently: my initial discovery of the D language, JavaScript 2016 or 2017, Java 8 or 9, a proposal for C++ that I believe did not make it into C++17, but it may be marked for a future version. I don't know if these are the same sorts of modules. In D, they seemed to be precompiled headers if I recall correctly.

Unrelatedly, I forgot to mention earlier that I really like the question posed by John Regehr in 2012. He asked what instructions we would have or create if we cleared the table and could start over constrained by nothing but the physical engineering constraints of microprocessors: https://blog.regehr.org/archives/669

JD

Madhu

unread,
Jul 23, 2018, 11:32:54 AM7/23/18
to Joe Duarte, Samuel Falvo II, RISC-V ISA Dev
There are as many definitions of modules floating around as there are languages!

Our definition of module comes from a need to define security
characteristics for a body of code. This in turn implies formally defined
entry and exit criteria. We are in the process of more rigidly defining
what we term a module. Things that are problematic in a language like C include:
- the lack of a mechanism to specify pointer bounds formally
- the paraphernalia used to construct a function, like stack frames, being
a little too ad hoc

So the question is how strict an entry/exit mechanism for a module is
desirable. If I define security capabilities for a module via a tag, how do
I enforce those capabilities if the module is too amorphous? This in turn
is tied to the precise capabilities desired: we cannot devise a scheme if
it cannot be enforced.

It is partly tied to our single-address-space OS work, where module-level
security will be the only protection available.

A good compromise is probably normal functions that do not entail any
penalty, and specialised functions that have stricter semantics in terms of
isolation. We are trying to see whether we can put something together in Rust.
--
Regards,
Madhu

Michael Chapman

unread,
Jul 23, 2018, 11:43:35 AM7/23/18
to Madhu, Joe Duarte, Samuel Falvo II, RISC-V ISA Dev

How is a module different from a singleton object?

Samuel Falvo II

unread,
Jul 23, 2018, 12:01:50 PM7/23/18
to Michael Chapman, Madhu, Joe Duarte, RISC-V ISA Dev
On Mon, Jul 23, 2018 at 8:43 AM, Michael Chapman
<michael.c...@gmail.com> wrote:
>
> How is a module different from a singleton object?

This is perhaps best answered by this paper:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.53.2290&rep=rep1&type=pdf

Jecel Assumpção Jr

unread,
Jul 23, 2018, 2:29:54 PM7/23/18
to RISC-V ISA Dev
About Pascal and C being essentially the same language: while both do a lot of pointer chasing, in Pascal/Algol it is implicit due to lexical scoping, which allows certain hardware optimizations. In C it all depends on what the programmer is doing, so there isn't much the hardware can help with, except to optimize the first level either with lots of registers or a stack cache like in CRISP.

The 8086 was optimized for a mix of Pascal and assembly with its fancy address modes allowing very compact representations of accessing stuff in non local stack frames. On the other hand, it really couldn't support C programs with more than 64KB (or 128KB with separate code and data segments). We had to deal with a C-like language with far and near pointers which required a total rewrite when moving programs to and from a VAX, for example.

The 286 borrowed features from the Ada+objects optimized iAPX432 while the 386 fully embraced C and Unix in a clever way while being backwards compatible.

RISC I and II were specifically designed after looking at the output of C compilers.

RISC III was known as SOAR - Smalltalk On A RISC. It added many interesting features, most of which turned out to not help as much as initially hoped. For example, all instructions could use either tagged or raw data but only the tagged ADD instruction made any difference in performance (which is why SPARC had only that). Complicated hardware to fill stack frames with NILs actually hurt performance.

RISC IV was known as Spur and was optimized for multiprocessing Lisp machines.

RISC V builds on all this experience and reflects the disappointment with the extra stuff in RISC III and IV. On the other hand, we do have the RISC V J extension working group that is trying to see what could help with languages like Java, Javascript and Smalltalk. ARM had a certain level of success with its two generations of Jazelle technology.

-- Jecel

Bruce Hoult

unread,
Jul 23, 2018, 10:28:43 PM7/23/18
to Joe Duarte, ma...@macaque.in, Samuel Falvo II, RISC-V ISA Dev
I think what a lot of people mean by modules, at a low level, is essentially shared libraries with simplified access to functions and library-global data from other things in the library.

On PowerPC machines (or at least AIX and MacOS) there is a TOC (Table of Contents) register that is akin to the RISC-V GP (Global Pointer) register and points to the global data for the current library.


Looks similar in Linux.


There is no direct way to access global data or functions from another library. Calls to functions in other libraries go via thunks in the current library's TOC that load the TOC for the called library before calling the function. References to data in other libraries also have an entry in the referring library's TOC.

I didn't pay attention to what happened at this level in the transition to OS X, and pre-OS X was 20 years ago so be gentle with me :-)

A scheme like this would actually be even more useful on RISC-V than on PowerPC, with only 4 KB directly accessible from GP instead of 64 KB.



Samuel Falvo II

unread,
Jul 23, 2018, 10:51:55 PM7/23/18
to Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Mon, Jul 23, 2018 at 7:28 PM, Bruce Hoult <bruce...@sifive.com> wrote:
> I think what a lot of people mean by modules, at a low level, is essentially
> shared libraries with simplified access to functions and library-global data
> from other things in the library.

That's one method of implementing them, to be sure. Because modules
are a syntactic construct, however, they can be significantly finer
grained than your typical shared object library. I can easily foresee
the case where a shared library, such as SDL for instance, can itself
be comprised of many modules (event-handling, low-level blitter
functionality, etc).

Bruce Hoult

unread,
Jul 23, 2018, 10:54:27 PM7/23/18
to Samuel Falvo II, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
In fact the Apple system agrees with that .. it's the "Code Fragment Manager", not the "Shared Library Manager" and a library can contain many Fragments.



Madhu

unread,
Jul 23, 2018, 10:56:48 PM7/23/18
to Samuel Falvo II, Bruce Hoult, Joe Duarte, RISC-V ISA Dev
If security is a concern, as is the case for us, a security model has to be defined first and then a module mechanism devised to conform to the model.

Currently we do the reverse, and the resultant module mechanism leaks like a sieve.

Samuel Falvo II

unread,
Jul 23, 2018, 11:06:53 PM7/23/18
to Madhu, Bruce Hoult, Joe Duarte, RISC-V ISA Dev
On Mon, Jul 23, 2018 at 7:56 PM, Madhu <ma...@macaque.in> wrote:
> If security is a concern, as is the csse for us, a security model has to be
> defined first and then a module mechanism devised to conform to the model.
>
> currently we do the reverse and the resultant module mechanism leaks like a
> sieve.

Regardless, I think we can agree that Niklaus Wirth's view of a
module, a syntactic unit of encapsulation and namespace support,
represents the bare minimum of what constitutes a module.

I would like to know how the semantics of a Modula-2 module or
Oberon(-2) module compares with what you have in mind, security-wise.
If you're talking about casual introspection and violation of a module
(a la using dir() on a Python module and abusing that information to
gain access to internal procedures or data), I know that Oberon System
lacks any capability for that to happen at all. If you're talking
about something else (e.g., making sure a module clears RAM of any
passwords it might use), that doesn't seem like the kind of thing
RISC-V ISA extensions could help with. So, before going forward, I
agree that we need a more precise definition.

However, I referenced a paper that gives a formal definition of what a
module is, one which I'm in agreement with. What I'd like to know is
what you have in mind? Maybe we can meet in the middle, or find that
we're already in agreement.

Madhu

unread,
Jul 23, 2018, 11:46:12 PM7/23/18
to Samuel Falvo II, Bruce Hoult, Joe Duarte, RISC-V ISA Dev
I think we are basically in agreement; it is just a matter of sweating
out the details.
1. Wirth's basic definition is a good place to start; the doc you sent is
a good model for a module.
2. Oberon-2 will probably do the job security-wise. Part of the problem
is that a C-based OS env is not exactly Oberon-2!

3. I am not concerned about memory wipe and similar techniques.
Well, I am! But I do not expect any ISA support for that.

4. My primary concern is control flow, access restrictions/privilege control,
and restriction on introspection.

As I said, I need a target language (at least a proposal) and well-defined
semantics before any ISA extensions can be contemplated. Having said that,
a low-granularity tag scheme for modules and a watertight call/exit scheme
should do the job. For the latter, the current pointer-distance control
may well do the job, provided we can define a security model for "far"
pointer operations.

Fortunately, we have a faculty member working full time on these issues, so
I hope to have some clarity from our end in a few weeks.
--
Regards,
Madhu

Luke Kenneth Casson Leighton

unread,
Jul 24, 2018, 12:09:07 AM7/24/18
to Samuel Falvo II, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
the cambridge capability system subdivided access down even to the
function call level (providing a hardware variant of SE/Linux aka the
"FLASK" Security Model). in that context the function may be defined
to be a "module". the objects that are passed around in OO languages
may also be thought of as "modules". the word's kiiinda meaningless
in its generality.

i spoke to one of the students here at the RISE Lab, about the
memory-tracking that they're implementing. malloc and free will be
tracked by a double-width pointer, which, when free'd, the hidden tag
is set to zero, preventing further use.

i mentioned to them that OO languages (and webkit) have the concept
of objects, with automatic ref-counting. webkit's c++ templates even
track the simplest of things like pointer de-referencing, increasing
the ref-counter for the duration of the access to the pointed-to
object. when the pointer-to-object variable goes out of scope in a
local function, the destruction of that pointer-to-object variable
causes the object that it is being pointed to to have its ref-counter
decreased.

only when the ref-count reaches zero can the system automatically
(and immediately) call the destructor and free up that memory.
providing hardware-level support for memory ref-counting is very very
different from simply tracking malloc and free.

hardware-level support for this concept would be *really* useful, as
it would allow both catching - and tracking - of some of the most
insane and obscure OO memory bugs in computer science. webkit, with
complex HTML5 pages, can have hundreds of thousands of objects to
track (pieces of DOM, referenced by both the rendering engine and the
javascript engine). software to do real-time analysis of these kinds
of errors is both intrusive (which does a heisenberg number on the
program) and horribly expensive.

l.

Samuel Falvo II

unread,
Jul 24, 2018, 12:36:53 AM7/24/18
to Luke Kenneth Casson Leighton, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Mon, Jul 23, 2018 at 9:08 PM, Luke Kenneth Casson Leighton
<lk...@lkcl.net> wrote:
> the objects that are passed around in OO languages
> may also be thought of as "modules". the word's kiiinda meaningless
> in its generality.

I disagree, for reasons outlined in the paper I linked to in a previous message.

> hardware-level support for this concept would be *really* useful, as
> it would allow both catching - and tracking - of some of the most
> insane and obscure OO memory bugs in computer science. webkit, with
> complex HTML5 pages, can have hundreds of thousands of objects to
> track (pieces of DOM, referenced by both the rendering engine and the
> javascript engine). software to do real-time analysis of these kinds
> of errors is both intrusive (which does a heisenberg number on the
> program) and horribly expensive.

Wouldn't RAII solve this though? C++ and Rust both seem to be making
excellent use of RAII for things like automatic reference counting and
such these days, and a lot of people, broadly speaking, are quite
happy with the results.

It's not clear to me how hardware support for reference counting would
work, *unless* you used something like segmentation to implement it.
To borrow some tricks from POWER architecture (and, to a lesser degree
z/Architecture), the top five bits of a register would refer to one of
32 segment registers (it's arbitrary; I picked 32 because there are 31
general purpose registers). These segments, in turn, would point to
the actual objects desired, but would also maintain a reference count
of some kind. You'd need to introduce special instructions to
manipulate and check this reference count, to alter what the segment
register refers to, and so on, in a safe way. Not so much Intel-style,
but certainly at *least* Burroughs-style or MULTICS-style.

I am a general fan of segmentation; but even I am having a hard time
seeing how it'd help here.

Luke Kenneth Casson Leighton

Jul 24, 2018, 12:55:19 AM
to Samuel Falvo II, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Tue, Jul 24, 2018 at 5:36 AM, Samuel Falvo II <sam....@gmail.com> wrote:

> Wouldn't RAII solve this though?

https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization

if you trust the hardware (and the OS, and the design of the
language, and the programmer) on which the RAII-enabled application is
running.... yes [1]

l.

[1] (translation: no)

Samuel Falvo II

Jul 24, 2018, 1:14:31 AM
to Luke Kenneth Casson Leighton, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Mon, Jul 23, 2018 at 9:54 PM, Luke Kenneth Casson Leighton
<lk...@lkcl.net> wrote:
> if you trust the hardware

Which I do.


> (and the OS

which has no say in the matter, since it's all user-level code.

> and the design of the
> language

Which I do.

> and the programmer

Which one? The programmer of the library that implemented
auto-ref-counting? Or the programmer actually using it?

If the former, then yes, especially after testing the software with
simple examples to prove to myself how the API even works in the first
place.

If the latter, well, isn't that why the ARC library exists to begin with?

A lack of trust in any one of these items, save for the application
developer using this stack, indicates a **vastly** more significant
problem on your hands than keeping track of memory allocations.

Luke Kenneth Casson Leighton

Jul 24, 2018, 1:41:03 AM
to Samuel Falvo II, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Tue, Jul 24, 2018 at 6:14 AM, Samuel Falvo II <sam....@gmail.com> wrote:
> On Mon, Jul 23, 2018 at 9:54 PM, Luke Kenneth Casson Leighton
> <lk...@lkcl.net> wrote:
>> if you trust the hardware
>
> Which I do.

you may: there are however very specific circumstances
(security-conscious scenarios) where that is not acceptable.

>
>> (and the OS
>
> which has no say in the matter, since it's all user-level code.

... which may be compromised via many different vectors.

>> and the design of the
>> language
>
> Which I do.

you may: there are however very specific circumstances
(security-conscious scenarios) where it is extremely dangerous to
trust the implementation (or the design). rules (conventions) have to
be set for safe c++ programming, for example: as far back as 1993 when
working for Pi Technology we were told in no uncertain terms that the
use of malloc and free was absolutely flat-out prohibited.

... but the *design* of c++ *permits* malloc and free, doesn't it?


>> and the programmer
>
> Which one? The programmer of the library that implemented
> auto-ref-counting? Or the programmer actually using it?

now you're starting to get it. both of them contribute to the problem.

> If the former, then yes, especially after testing the software with
> simple examples to prove to myself how the API even works in the first
> place.
>
> If the latter, well, isn't that why the ARC library exists to begin with?
>
> A lack of trust in any one of these items, save for the application
> developer using this stack, indicates a **vastly** more significant
> problem on your hands than keeping track of memory allocations.

yyup. hence the reason for the research into system-wide /
application-wide memory-tracking.

i sent this (below) to someone in a private reply. RAII is a pattern
that's obeyed by convention: it's not something that is formally
required or enforced. when creating language bindings, it was
*required* that the RAII convention be broken, manually, because there
is no other choice but to do so. or, there is a choice (which is no
choice at all), which in the case below boiled down to "reimplement
the entirety of python in c++... oh and maintain it as a private
hard-fork".



-----

when developing the python-webkit bindings (and the python-gobject
bindings), the python code was in c, and so was gobject, but it was
still required to map to the polymorphic "dynamically-typed" objects.
the "expectation" that the refcounting of c++ could be obeyed was
flat-out impossible to meet.

so i had to *manually bypass* the entire c++ ref-counting
safety-system, type-casting the c++ object down to a void* (in order
to get round the c++ type-checking), then cast *back* to one and only
one type of c++ object, call the ref (or unref) function *MANUALLY*...
and i had to create a c struct which was passed to the python (or
gobject) code which *contained* that void* pointer.

it was absolutely dreadful, and nearly impossible to debug. the
only way to gain confidence in the program's correctness was that the
auto-code-generator produced hundreds of identically-formatted
functions which were literally called millions of times, so even the
slightest mis-step was amplified to such a level that even a blind
person would be able to tell that there was a memory leak / problem.

basically it was not possible under these circumstances to stick
within the RAII paradigm because a SWIG-like auto-generated interface
to a completely foreign programming language has to be done at the
level of the *implementation* of that language. in python that meant
c... *not* c++.

Samuel Falvo II

Jul 24, 2018, 1:52:59 AM
to Luke Kenneth Casson Leighton, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Mon, Jul 23, 2018 at 10:40 PM, Luke Kenneth Casson Leighton
<lk...@lkcl.net> wrote:
> when developing the python-webkit bindings (and the python-gobject
> bindings), the python code was in c, and so was gobject, but it was
> ...
> basically it was not possible under these circumstances to stick
> within the RAII paradigm because a SWIG-like auto-generated interface
> to a completely foreign programming language has to be done at the
> level of the *implementation* of that language. in python that meant
> c... *not* c++.

OK, but I feel you're now moving the goal posts of the discussion.
The original question was whether or not RISC-V favored one class of
language over another, and from that, the discussion of whether or not
hardware-augmented memory management extensions were a useful feature.
I didn't see how it could be used in a real-world situation, and even
with this new information, I *still* don't. I feel that ARC *could*
have been used to track these things, using the exact same method by which
Python manages to bind to COM objects (another reference-counted
technology), had SWIG bothered to be so intelligent about how to do
it. Regardless, these are all software conventions, and I still don't
see how dedicated hardware support helps.

Luke Kenneth Casson Leighton

Jul 24, 2018, 2:06:41 AM
to Samuel Falvo II, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Tue, Jul 24, 2018 at 6:52 AM, Samuel Falvo II <sam....@gmail.com> wrote:

> it. Regardless, these are all software conventions, and I still don't
> see how dedicated hardware support helps.

that's why, as i understand it, the research is being carried out.
to find out how dedicated hardware support can help.

the reason i mentioned the refcounting is because that is an
important area to research, too.

l.

Samuel Falvo II

Jul 24, 2018, 2:09:33 AM
to Luke Kenneth Casson Leighton, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Mon, Jul 23, 2018 at 11:06 PM, Luke Kenneth Casson Leighton
<lk...@lkcl.net> wrote:
> that's why, as i understand it, the research is being carried out.
> to find out how dedicated hardware support can help.

OOOHHH, My apologies!! I thought you had something specific already
in mind, but just wasn't detailing specifics! OK, it's bed-time for
me.

Yes, I can agree that R&D in that area would be valuable. Perhaps
that would fall under the purview of the J extension?

Luke Kenneth Casson Leighton

Jul 24, 2018, 2:16:25 AM
to Samuel Falvo II, Bruce Hoult, Joe Duarte, Madhu (Macaque Labs), RISC-V ISA Dev
On Tue, Jul 24, 2018 at 7:09 AM, Samuel Falvo II <sam....@gmail.com> wrote:
> On Mon, Jul 23, 2018 at 11:06 PM, Luke Kenneth Casson Leighton
> <lk...@lkcl.net> wrote:
>> that's why, as i understand it, the research is being carried out.
>> to find out how dedicated hardware support can help.
>
> OOOHHH, My apologies!! I thought you had something specific already
> in mind, but just wasn't detailing specifics! OK, it's bed-time for
> me.

:)

> Yes, I can agree that R&D in that area would be valuable. Perhaps
> that would fall under the purview of the J extension?

someone mentioned that idea, earlier. unfortunately that approach
means that only members of the J Extension WG will have their input,
ideas, needs and requirements go into the J-Extension.

l.

Jecel Assumpção Jr

Jul 25, 2018, 1:04:49 PM
to RISC-V ISA Dev
Luke,

On Tue, Jul 24, 2018 at 7:09 AM, Samuel Falvo II wrote:
> Yes, I can agree that R&D in that area would be valuable. Perhaps
> that would fall under the purview of the J extension?

and Luke Kenneth Casson Leighton replied:
> someone mentioned that idea, earlier. unfortunately that approach
> means that only members of the J Extension WG will have their input,
> ideas, needs and requirements go into the J-Extension.

Any foundation member can join the work group. If someone prefers not
to do that, they can post their ideas here (or on other relevant lists) and
ask the WG members to take them into account.

Do you have a better alternative?

-- Jecel

Luke Kenneth Casson Leighton

Jul 26, 2018, 12:32:33 AM
to Jecel Assumpção Jr, RISC-V ISA Dev
On Wed, Jul 25, 2018 at 6:04 PM, Jecel Assumpção Jr <je...@merlintec.com> wrote:

> Luke,
> On Tue, Jul 24, 2018 at 7:09 AM, Samuel Falvo II wrote:
>>
>> > Yes, I can agree that R&D in that area would be valuable. Perhaps
>> > that would fall under the purview of the J extension?
>>
>> someone mentioned that idea, earlier. unfortunately that approach
>> means that only members of the J Extension WG will have their input,
>> ideas, needs and requirements go into the J-Extension.
>
>
> Any foundation member can join the work group.

... which requires joining the Foundation, which in turn requires
giving up sovereignty and rights that are completely unacceptable (and
completely unnecessary to give up, as Trademark / Certification
Mark law covers things perfectly well).

> If someone prefers not
> to do that, they can post their ideas here (or on other relevant lists) and
> ask the WG members take them into account.

right. so anyone "outside" of the cartel is treated as a {insert
appropriate derogatory adjective} slave / second-rate citizen, yes?
"oh we'll get to it when we feel like it", yes? or, as has happened
in the past, multiple times, "we're too busy to give you that
information", or "that was discussed and agreed already [based on the
limited scenarios and use-cases that ONLY WE could envisage as being
relevant] several months ago, sorry we can't give you access to the
notes so that you can verify our claims or help add extra scenarios
and use-cases, we're now far too busy"

... can you see how that *already* is not working? that there are
people being prevented and prohibited from *HELPING* - not hindering
or sponging off of the time and expertise of the members - prevented
from *HELPING* because they DON'T KNOW THE CURRENT STATUS.

people on the "outside" who may have extremely valuable insights,
resources and knowledge, are left *completely* out of the picture.
they're left to duplicate work and effort, because they're prevented
and prohibited from accessing information, prevented and prohibited
from seeing the working notes - everything.

the members of the RISC-V Foundation are not the absolute peak and
fount of all knowledge and expertise, basically. it takes months if
not *years* to get some of these extensions done properly, yet some
organisations are being forced to *literally* duplicate all of the
research that's going on.


> Do you have a better alternative?

yes, and it's a really simple one. follow pre-existing
well-established practices set by for example the Apache Software
Foundation.

that means: remove the requirement that access to WG discussions and
documents require Foundation membership. it's completely unnecessary,
counter-productive, and is unethical (causing harm to the RISC-V
ecosystem) as it is creating a "two-tier" privileged / cartel system.

at the *absolute minimum* the notes, documentation and mailing list
archives need, just like with any ASF Member Project, to be entirely
public.

claims that the cartel-like / private "club" membership "protects" the
RISC-V ecosystem are bogus: Trademark and Certification Mark law
provides perfectly sufficient protection.

claims that there may be SPAM on the WG discussions, or that there may
be people who join who "waste time", are also nonsense. SPAM can be
dealt with in the usual way, and anyone who really seriously wants to
contribute to something as complex as an ISA has a *damn* good reason
for sticking it out. it's extremely specialist.

l.

Jecel Assumpção Jr

Jul 27, 2018, 3:55:43 PM
to RISC-V ISA Dev
Luke,

I can understand not agreeing with a group's bylaws, but choosing
not to join them has consequences. I don't agree with the Open
Cores policy of hosting all code exclusively on their site (they made
a few exceptions to join forces with projects like JOP) so I didn't
join them, though I participate in the mailing lists. It wouldn't be
reasonable for me to expect to participate in the decision making
process for OpenRISC, though I am of course free to make
suggestions on the list.

Some groups like Apache, the Linux kernel, or the FreedomCPU
do everything in the open. That isn't the case for the RISC-V
Foundation, but things are much better in practice than you seem
to imply. Take the V Extension, for example. I don't have access
to their meeting or external documents, but they do a very good
job of keeping me informed about what they are doing in the form
of videos and slides in the various workshops. The J Extension
group has not produced any information so far that you don't have.

About outside suggestions, even participating in a workgroup
doesn't mean that your ideas will be adopted. Each group has at
least a few people with different opinions.

-- Jecel

Madhu

Jul 27, 2018, 9:31:34 PM
to Jecel Assumpção Jr, RISC-V ISA Dev
The point Luke is making here is that for an open standards organisation
the idea of non-open discussions and documents is a contradiction in terms.
I tend to agree and frankly I find this aspect of RISC-V extremely disappointing
and counter-productive to the evolution of the standard.
But it is what it is and we chose to live with it.
We protest privately; Luke chooses rather colourful language to do
so publicly! In that respect I do wish he were a trifle more polite
and brief in his postings. It takes away from the message.

For the Shakti Consortium that we are forming to standardise the SoCs
built around our RISC-V cores and our IP blocks / extensions, we
naturally will be putting our money where our mouth is!



--
Regards,
Madhu

Bruce Hoult

Jul 27, 2018, 9:59:39 PM
to Madhu, Jecel Assumpção Jr, RISC-V ISA Dev
On Fri, Jul 27, 2018 at 6:31 PM, Madhu <ma...@macaque.in> wrote:
> The point Luke is making here is that for an open standards organisation
> the idea of non-open discussions and documents is a contradiction in terms.
> I tend to agree and frankly I find this aspect of RISC-V extremely disappointing
> and counter-productive to the evolution of the standard.

There are very real benefits for RISC-V's future in having as many organisations as possible join the foundation. Those marketing slides with all the logos are useful, as are the declarations that RISC-V doesn't infringe an organisation's patents.

There aren't a lot of ways the foundation can provide incentives for organisations to join. Using the RISC-V name and logo on products is one, and influence over future directions via participation in working groups is another.

Personal opinion only.
 

Madhu

Jul 27, 2018, 10:20:42 PM
to Bruce Hoult, Jecel Assumpção Jr, RISC-V ISA Dev
Bruce,
nobody denies any of that. We are founding members of the Foundation
and are well aware of its benefits.

I am simply pointing out that nobody has come up with a convincing
argument as to why the public cannot view work in progress, or why we
need confidentiality in our technical discussions. I represent IIT-M
in more standards efforts than I can count, including standards for
data privacy! Apart from some rare cases, I have never heard a need
for confidentiality convincingly espoused. In all our research with
our industry partners on RISC-V enhancements / exploration, we never
agree to a confidentiality clause unless there is a very valid reason.
We always insist that all aspects of the research done by us will be
public, including work in progress. This helps the cause of RISC-V, a
cause we actively advocate. But even in this discussion, nobody has
made a case for confidentiality!

I will stop here, since I do not want to hijack the thread.
Please feel free to write to me privately or start a thread
in an appropriate forum.
--
Regards,
Madhu

Luke Kenneth Casson Leighton

Jul 27, 2018, 10:47:23 PM
to Jecel Assumpção Jr, RISC-V ISA Dev


On Saturday, July 28, 2018, Jecel Assumpção Jr <je...@merlintec.com> wrote:
> Luke,
>
> I can understand not agreeing with a group's bylaws, but choosing
> not to join them has consequences.

Indeed. I have some simple and non-optional criteria by which I evaluate decisions, and am happy with those consequences.

The criterion is that of an ethical act, which has a very strict, clear and formal definition: an ethical act increases truth, awareness, love or creativity for one or more people including yourself, without decreasing any of those same qualities for anyone.
 
Against this simple criterion it is by definition clear that joining the RISC-V Foundation is unethical, as it would subsume and restrict my right to distribute truth and awareness, and would stifle my creativity.

Now it turns out that a logical consequence of the definition of an ethical act is that to support unethical acts is itself unethical.

Consequently, giving money to the RISC-V Foundation is clearly unethical.

I just wanted to say that to illustrate to you that I am keenly aware (word used very deliberately) of the consequences of my decisions, and accept them entirely.


> I don't agree with the Open
> Cores policy of hosting all code exclusively on their site (they made
> a few exceptions to join forces with projects like JOP) so I didn't
> join them, though I participate in the mailing lists. It wouldn't be
> reasonable for me to expect to participate in the decision making
> process for OpenRISC, though I am of course free to make
> suggestions on the list.
>
> Some groups like Apache, the Linux kernel, or the FreedomCPU
> do everything in the open. That isn't the case for the RISC-V
> Foundation, but things are much better in practice than you seem
> to imply. Take the V Extension, for example. I don't have access
> to their meeting or external documents, but they do a very good
> job of keeping me informed about what they are doing in the form
> of videos and slides in the various workshops.



Have you tried contributing or actively participating in the development?  Have you tried making your needs clear? 

Try it. I did. Several times. Several other people have as well. All of them have given up hope that their constructive input, their needs and their willingness to take responsibility will be considered and respected as equal and peer to those of the members of the Foundation.


> The J Extension
> group has not produced any information so far that you don't have.
>
> About outside suggestions, even participating in a workgroup
> doesn't mean that your ideas will be adopted. Each group has at
> least a few people with different opinions.

Indeed. Thoughts and insights on that will have to wait for another time / message; there are structures that are effective for groups, and it's quite involved, as we know and have witnessed many times.

Key is communication and feedback. Restrict or deny either and an organisation is guaranteed to get into trouble, sooner or later. I can't even find a way on the RISC-V website to give them constructive feedback. Can you? And we've already established that the WG lists are restricted access....


> -- Jecel



--
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
