
MISD example?


Mark Smotherman

Sep 2, 1991, 11:43:51 AM
Is there an example of an MISD machine
(i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
or is it generally agreed that such a beast doesn't make sense?
--
Mark Smotherman, CS Dept., Clemson University, Clemson, SC 29634-1906
(803) 656-5878, ma...@cs.clemson.edu or ma...@hubcap.clemson.edu

vu0...@bingvaxu.cc.binghamton.edu

Sep 2, 1991, 8:19:53 PM

It is generally believed that MISDs don't exist; however, one can
claim that pipelined vector processors are MISD, since multiple
instructions are issued to the various segments of the pipelined
processor.

Examples: CDC STAR100
TI-ASC
CRAY-1 (its vector processor)!!!


dholl...@imecom.imec.be

Sep 2, 1991, 6:11:45 PM
In article <1991Sep2.1...@hubcap.clemson.edu>, (Mark Smotherman) writes:
> Is there an example of an MISD machine
> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
> or is it generally agreed that such a beast doesn't make sense?
> --

What about a pipeline architecture:
Multiple Instruction stages operate upon a Single Data stream.

Gregory T. Byrd

Sep 3, 1991, 7:48:20 AM
In article <1991Sep2.1...@hubcap.clemson.edu>,

ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>Is there an example of an MISD machine
> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
>or is it generally agreed that such a beast doesn't make sense?

systolic arrays (e.g., Warp)

...Greg Byrd North Carolina Supercomputing Center
gb...@ncsc.org P.O. Box 12889
(919)248-1439 Research Triangle Park, NC 27709-2889

Robert Hyatt

Sep 3, 1991, 9:31:41 AM


wrong concept.... MISD = Multiple Instruction Single Datum.... not data
stream.... i.e., something like a systolic array of sorts where each element
executes a single operation on a piece of data that is passed down the
assembly line....... however, all such machines are really MIMD as they don't
pass a single datum down the assembly line, but a stream of data (usually in
the form of a vector)

loosely speaking, you might make a case for a Cray here (at times) where multiple
instructions can issue in successive clock periods but have to wait for the
operand (result from previous instruction) to complete before they can actually
execute... I don't like the comparison, but have heard it and can accept it
with a grain or two of salt.... otherwise, I, too, have not seen an example
of MISD... (perhaps just like there is base-9, base-7, etc. arithmetic??? that
nobody uses...)

Bob Hyatt

William R. Michalson

Sep 3, 1991, 9:16:46 AM
While the examples given are arguably MISD, I usually interpret
MISD as multiple instruction streams operating on a single data
stream at the same time. This interpretation leads to the seemingly
obvious conclusion that no such systems exist. Nicely settled, until
a guy in the back of the room says "what about fault-tolerant systems
containing self-checking pairs of processors?" Oh well.........

Bill Michalson, w...@pluto.wpi.edu

Martin Golding

Sep 3, 1991, 2:08:42 PM
In <1991Sep3.0...@newserve.cc.binghamton.edu> vu0...@bingvaxu.cc.binghamton.edu writes:

>In article <1991Sep2.1...@hubcap.clemson.edu> ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>>Is there an example of an MISD machine
>> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
>>or is it generally agreed that such a beast doesn't make sense?
>>--
>>Mark Smotherman, CS Dept., Clemson University, Clemson, SC 29634-1906
>> (803) 656-5878, ma...@cs.clemson.edu or ma...@hubcap.clemson.edu

>It is generally believed that MISDs don't exist,

How about the old card-programmed accounting machines? 80 "byte" "words",
with some 50-100 (depending on program) parallel "instructions" operating
simultaneously on each "word". You could make a blazer of a business
machine, ZOOM but you'd have to write your programs in RPG.

Imagine a vectorising RPG compiler- no, don't. :-P


Martin Golding | sync, sync, sync, sank ... sunk:
DoD #0236 | He who steals my code steals trash.
HOG #still pending | (Twas mine, tis his, and will be slave to thousands.)
A poor old decrepit Pick programmer. Sympathize at:
mcspdx!adpplz!martin or mar...@adpplz.uucp

Jeff Carroll

Sep 3, 1991, 8:41:24 PM
In article <1991Sep3.0...@newserve.cc.binghamton.edu> vu0...@bingvaxu.cc.binghamton.edu writes:
>In article <1991Sep2.1...@hubcap.clemson.edu> ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>>Is there an example of an MISD machine
>> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
>>or is it generally agreed that such a beast doesn't make sense?

>Examples: CDC STAR100


> TI-ASC
> CRAY-1 (it's vector processor)!!!

I recently saw (but didn't read) an article in which the author claimed that
the IBM System/88 was a MISD.

Now, I'm so clueless about IBM machines that I don't even know what a
system/88 is...

--
Jeff Carroll car...@ssc-vax.boeing.com

Jeff Carroll

Sep 4, 1991, 2:11:04 AM
In article <31...@speedy.mcnc.org> gb...@ncsc.org (Gregory T. Byrd) writes:
>In article <1991Sep2.1...@hubcap.clemson.edu>,
>ma...@hubcap.clemson.edu (Mark Smotherman) writes:
> >Is there an example of an MISD machine
> > (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
> >or is it generally agreed that such a beast doesn't make sense?
>
>systolic arrays (e.g., Warp)

Unless I grossly misunderstand what I've read about Warp, it's MIMD.

Distributed memory MIMD.

I suppose that if you were to build a hardwired systolic array, you could
argue that it was MISD, but since Flynn's taxonomy usually comes up when
we talk about how to program machine X, I don't see the point.

--
Jeff Carroll car...@ssc-vax.boeing.com

Edward N. Kittlitz

Sep 4, 1991, 8:30:59 AM
In article <45...@ssc-bee.ssc-vax.boeing.com> car...@ssc-vax.UUCP (Jeff Carroll) writes:
>I recently saw (but didn't read) an article in which the author claimed that
>the IBM System/88 was a MISD.
>
>Now, I'm so clueless about IBM machines that I don't even know what a
>system/88 is...

The IBM system/88 is the Stratus fault tolerant system as remarketed by IBM.
Stratus achieves CPU fault tolerance by running quadruplicated CPUs. Each
"logical" CPU is implemented as two self-checking pairs of CPUs. If there
is disagreement between one pair, that pair drops out and the other pair
continues to operate.

I would not call this MISD: the same analysis which says the data is "S"
also requires that you think of the instruction as "S". All 4 CPUs are
executing exactly the same instruction on exactly the same data at any
given time, i.e. SISD.

The fact that Stratus makes multiprocessor systems is orthogonal to this
discussion. A particular system may have 6 "logical" processors, and be
capable of executing 6 independent instruction streams. However, there are
actually 6 groups of 4 processors physically present. Each such fault-tolerant
group is SISD by the above definitions.

-----
E. N. Kittlitz kitt...@world.std.com / kitt...@granite.ma30.bull.com
Contracting at Bull, but not alleging any representation of their philosophy.

Al Crawford

Sep 4, 1991, 11:55:22 AM
In article <1991Sep2.1...@hubcap.clemson.edu> ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>Is there an example of an MISD machine
> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
>or is it generally agreed that such a beast doesn't make sense?

My personal raison d'etre at the moment - semantically driven architectures
:-) Processing is split between three processors. There's a binding
processor that generates addresses, a transient processor that does
calculations etc and a control processor that traverses the program graph
and issues basic blocks of straight line code to the other two. In this
type of architecture it's possible for accessing a single datum to involve
all three processors. Quite definitely an MISD machine but as yet just a
paper architecture.

I have a paper on this type of architecture but, since the copy I have says
"Submitted to Supercomputer 91" and I've no idea if it's been accepted or
published I won't give any further details until I've contacted the author.

--
Al Crawford - aw...@dcs.ed.ac.uk
"Such a digital lifetime, it's been by numbers all the while"

Eugene N. Miya

Sep 4, 1991, 7:15:13 PM
Just checking for "spillage" from another news group.

Oh, on this example. Why don't you ask Flynn? He's still around.
I've seen other abuses of his taxonomy. Ask him. You might learn something,
then again, you might not. 8^)


--eugene miya, NASA Ames Research Center, eug...@orville.nas.nasa.gov
Resident Cynic, Rock of Ages Home for Retired Hackers
{uunet,mailrus,other gateways}!ames!eugene

Martin Golding

Sep 5, 1991, 3:12:44 PM

I'd argue that two mutually checking processors is not usefully different from
one self checking processor. And that there's only one instruction stream,
even though there are two units executing it.


As for really MISD machines, I already proposed the 409 (?) card programmed
accounting machine; it did multiple simultaneous add and move operations,
and I think there was one that did multiplies.

Now that I've had time to check my reference ("Computer Structures: Principles
and Examples", Siewiorek, Bell, Newell), I also suggest dataflow machines,
where an individual datum is broadcast, with every processor unit that wants
the data executing simultaneously.

For a concrete example of the dataflow architecture, the 360/91 floating point
unit could do a simultaneous add, multiply, and store on a load or result
of a previous operation. The instruction decoder assigned tags to expected
future results, then pre-issued commands with the tags they should execute
on. (This mechanism is the source of what are politely termed "imprecise
interrupts".)

Scoreboarding machines (e.g. CDC 6600) also allow execution in parallel
as each pending operand is made available, but the transport mechanism
may not be sufficiently SD if the functional units have multiple paths
to the registers.


My references are old and graying (10 years is forever in this industry).
Is anybody doing serious work on dataflow systems, or have they gone the
way of massively interleaved memory and drums?

Alex Colvin

Sep 6, 1991, 3:49:45 PM
ma...@hubcap.clemson.edu (Mark Smotherman) writes:

>Is there an example of an MISD machine

sounds like a shared memory multiprocessor, where all processors are
pounding on the same data structure. this is tricky, but there are some
algorithms that work this way.

for example, if you consider a large matrix to be a single datum, it's
possible for unsynchronized processors to converge on a factorization.
you still need a few atomic operations to inhibit some shared
accesses.

some of the combining networks that do test-&-add, etc. might also be
considered misd.

Gregory T. Byrd

Sep 7, 1991, 8:47:44 AM
In article <1991Sep6.1...@dartvax.dartmouth.edu>,

m...@eleazar.dartmouth.edu (Alex Colvin) writes:
>ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>
>>Is there an example of an MISD machine
>
>sounds like a shared memory multiprocessor, where all processors are
>pounding on the same data structure. this is tricky, but there are some
>algorithms that work this way.

Nope. It has to do with how many independent data streams
are *available* in the architecture. Just because all the
processors in a shared memory machine might read the same
location doesn't mean they're restricted to do so.

In a MISD machine, there would be only one component that
generates data addresses.

(In the same vein as your example, you can certainly make
a MIMD machine act like it's SIMD, but that doesn't take
away its MIMD-ness.)

Andy Glew

Sep 7, 1991, 3:53:17 AM
>ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>
>Is there an example of an MISD machine

I can imagine, e.g., a real-time signal processing system, where multiple
processors apply different algorithms to the same incoming data stream.

They might use different algorithms designed to perform the same
computation, but with different adaptabilities - e.g. different threat
estimation algorithms, one which assumes a noisy battlefield, another
which assumes that sensor data is reliable.

Or you might use different algorithms for different processors for
different purposes: one a collision avoidance algorithm, that plots
the aircraft course so as to avoid collisions with other objects, the
other a targeting algorithm, that constantly calculates cannon
position to hit the other objects in the sky (so that you don't have
to wait for targeting calculations before firing).
--

Andy Glew, gl...@ichips.intel.com
Intel Corp., M/S JF1-19, 5200 NE Elam Young Pkwy,
Hillsboro, Oregon 97124-6497

This is a private posting; it does not indicate opinions or positions
of Intel Corp.

Intel Inside (tm)

Joe Buck

Sep 7, 1991, 2:40:22 PM
In article <GLEW.91S...@pdx007.intel.com> gl...@pdx007.intel.com (Andy Glew) writes:
>>ma...@hubcap.clemson.edu (Mark Smotherman) writes:
>>
>>Is there an example of an MISD machine
>
>I can imagine, e.g., a real-time signal processing system, where multiple
>processors apply different algorithms to the same incoming data stream.

Here's one: consider a neural-network model of the auditory cortex of
a one-eared person. There's at least conceptually only a single data
stream: the incoming sound (here we'd consider the cochlea part of the
processing -- a series of filter banks).

For that matter, any neural network that processes a single data stream
could be considered MISD in a sense (though if you consider the signals
on all the connections data, you're back to MIMD).

--
Joe Buck
jb...@galileo.berkeley.edu {uunet,ucbvax}!galileo.berkeley.edu!jbuck

Donald Lindsay

Sep 7, 1991, 7:21:51 PM

In article <9...@adpplz.UUCP> mar...@adpplz.UUCP (Martin Golding) writes:
>Is anybody doing serious work on dataflow systems, or have they gone the
>way of massively interleaved memory and drums?

Arvind's group at MIT is doing the "Monsoon" machine, with help from
Motorola.

Bill Dally's group at MIT has Id running on their Message Based
Processor machine, and claim to be getting performance comparable to
Monsoon's. Note that that's not a dataflow machine - just a MIMD
machine that can timeshare with an unusually fine grain.
--
Don D.C.Lindsay Carnegie Mellon Robotics Institute

Thorsten von Eicken

Sep 7, 1991, 10:01:08 PM

And we have Id running on RISC machines at about the same speeds (see last
ASPLOS). The bottom line is that dataflow *architectures* are fading out
but that dataflow *compilation techniques* are here to stay.
Thorsten von Eicken (t...@cs.berkeley.edu)

Bradley C. Kuszmaul

Sep 23, 1991, 1:29:58 AM
In article <1991Sep07.2...@cs.cmu.edu> lind...@cs.cmu.edu (Donald Lindsay) writes:

In article <9...@adpplz.UUCP> mar...@adpplz.UUCP (Martin Golding) writes:
>Is anybody doing serious work on dataflow systems, or have they gone the
>way of massively interleaved memory and drums?

Arvind's group at MIT is doing the "Monsoon" machine, with help from
Motorola.

I am a graduate student in Arvind's research group. There is a lot of
active work building Monsoon and there is a lot of research going on to try
to understand how to build START, the next generation dataflow machine.

Bill Dally's group at MIT has Id running on their Message Based
Processor machine, and claim to be getting performance comparable to
Monsoon's. Note that that's not a dataflow machine - just a MIMD
machine that can timeshare with an unusually fine grain.

When I saw Ellen Spertus's talk on this work, I got the impression that
they were not running Id on the J-machine. I believe that Ms. Spertus hand
coded various small dataflow programs. She had quite a few clever ideas
about how to map dataflow computation onto the J-machine. I don't remember
the performance figures. Maybe she will elaborate...
-Bradley

Ellen R. Spertus

Sep 23, 1991, 5:11:11 PM

Bradley, thanks for calling this to my attention. You are correct
that there is not a full Id compiler on the J-Machine and that we have
not proven that we can match Monsoon's performance, although there is
reason to believe we can on some sorts of programs.

Specifically, we have made Id code run on the J-Machine through two
methods:

1. Hand-compiling dataflow graphs according to templates
(i.e., no fancy optimization is going on).
2. Taking Robert Iannucci's compiler which originally produced
code for his hybrid architecture and adding a MDP back end.
Only a subset of Id was supported.

For factorial, method 1 produced code which ran a little faster than
the corresponding Monsoon code on existing hardware. Method 2
performed worse, because it did too much run-time simulation of
Iannucci's architecture. Our experiences with these systems are
written up in ICPP '91. If anybody wants a copy of the paper (or a
longer version), just let me know.

I am currently working on an MDP back end for the Id Threaded Abstract
Machine (TAM) compiler at Berkeley. See ASPLOS '91 and FPCA '91 for
more details on TAM. In about six months, I'll be able to say how
this system compares to Monsoon's performance.

Ellen Spertus

bakhtia...@gmail.com

Sep 10, 2018, 4:33:29 AM
Please can you explain the MISD in detail

EricP

Sep 10, 2018, 9:22:08 AM
bakhtia...@gmail.com wrote:
> Please can you explain the MISD in detail

Ah the new school year...

MISD is one of the newest areas of investigation for Quantum Computers.

You may have heard that Q.C. hold their data in a state of
superposition while they work on it one instruction at a time.
Since they calculate multiple data states at once,
that is called Single Instruction Multiple Data or SIMD.

There is also a whole new area of research whereby
multiple quantum instructions are held in a state of
superposition while acting on a single data item.
That is called Multiple Instruction Single Data or MISD.
If it pans out (and that is really the big question) it will
allow Quantum Computers to execute whole programs in 1 clock,
applying all the instructions at once to a single data item.



MitchAlsup

Sep 10, 2018, 12:17:48 PM
On Monday, September 10, 2018 at 8:22:08 AM UTC-5, EricP wrote:
> bakhtia...@gmail.com wrote:
> > Please can you explain the MISD in detail
>
> Ah the new school year...

Imagine a multiplier where the multiplication array is held in
a superposition. Now apply a double width result and a single
width operand, and have the multiplier array determine what
the other operand had to be.

Ivan Godard

Sep 10, 2018, 12:30:10 PM
Without involving quantum weirdness, MISD can be understood as any
situation in which multiple actions are applied to a single datum. I
don't know any MISD at the scale of ordinary instructions, but
commercial examples exist both at larger scale (data plane parse,
recognize, and mutate a single packet) and smaller (multiple x86
decoders concurrently parsing a single fetched bit-block; certain kinds
of adders).

Quadibloc

Sep 10, 2018, 3:22:16 PM
On Monday, September 2, 1991 at 9:43:51 AM UTC-6, Mark Smotherman wrote:
> Is there an example of an MISD machine
> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
> or is it generally agreed that such a beast doesn't make sense?

I think it clearly does not make sense.

A SISD machine will, depending on the program, sometimes operate on one piece
of data, and then perform a different operation on the result. But that's
different data after the previous operation changed it.

So calling dataflow machines, pipelined machines, or accounting equipment "MISD"
doesn't capture the meaning of the classification as applied to the other cases.

In the case of SIMD and MIMD, the "multiple" element is, even if not
simultaneous, essentially in parallel. So MISD would mean multiple instructions
applied to the same original piece of data, not in succession. And that would
mean more results than there were inputs.

John Savard

Ivan Godard

Sep 10, 2018, 3:45:22 PM
Yes, but only insofar as there are results. However, not all operations
have results, store and compare-and-branch for example. If one imagines
an accumulator machine with a VLIW encoding and mem-op-accum operations,
one could express:
if ((x = y) < z) {...}
in one instruction and four operations, which is clearly MISD even if
you don't count the initial load.

I do not claim that this would be particularly good design, but it is
meaningful.

MitchAlsup

Sep 10, 2018, 5:51:48 PM
On Monday, September 10, 2018 at 2:22:16 PM UTC-5, Quadibloc wrote:
> On Monday, September 2, 1991 at 9:43:51 AM UTC-6, Mark Smotherman wrote:
> > Is there an example of an MISD machine
> > (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
> > or is it generally agreed that such a beast doesn't make sense?
>
> I think it clearly does not make sense.

I can remember an example from Whetstone::

x := sqrt(exp(ln(x)/t1));

This looks pretty much like it could be tackled by MISD.

Quadibloc

Sep 10, 2018, 9:45:45 PM
On Monday, September 10, 2018 at 3:51:48 PM UTC-6, MitchAlsup wrote:

> I can remember an example from Whetstone::
>
> x := sqrt(exp(ln(x)/t1));
>
> This looks pretty much like it could be tackled by MISD.

That is SISD. MISD would be:

y := ln(x)/t1 ;
z := exp(x)+t1 ;
a := sqrt(x)*t1 ;

Taking the _same data_ and running off doing different things with it in
parallel.

John Savard

EricP

Sep 11, 2018, 1:31:43 AM
Now imagine we could charge certain governments and companies
to _not_ sell such a crypto factoring machine to the general public.
We could just sit home and laugh and laugh and collect our dough.

Sigh... it probably only works in one of those alternate universes.

Chris M. Thomasson

Sep 11, 2018, 1:33:38 AM
I have a little crypto for ya:

Fwiw, my cipher actually encrypts n-bytes of random TRNG bytes into each
message. Nothing is sent in the clear like a traditional IV.

So, I agree with you that there is no need to have random bytes per
packet. Per message is fine.

Here is some more info:

http://funwithfractals.atspace.cc/ct_cipher/

And a C implementation:

https://groups.google.com/d/topic/comp.lang.c/a53VxN8cwkY/discussion
(read all if interested...)

https://pastebin.com/raw/feUnA3kP


Funny times...

Terje Mathisen

Sep 12, 2018, 2:23:37 AM
Would it be legal to recognize that expression and replace it with

x = exp(ln(x)/(t1*2));

or even

x = pow(x, -t1*2);

I.e. you are trying to calculate the (t1*2)'th root of x here, which
should be OK at least as long as x is positive.

Using a good pow() implementation you should be able to calculate this
in a few tens of clock cycles, right?

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

mac

Sep 12, 2018, 8:07:55 AM

> Please can you explain the MISD in detail

Consider atomic test-and-set or fetch-and-add, where multiple instruction
streams converge on a single datum

--
mac the naïf

MitchAlsup

Sep 12, 2018, 10:58:06 AM
On Wednesday, September 12, 2018 at 1:23:37 AM UTC-5, Terje Mathisen wrote:
> MitchAlsup wrote:
> > On Monday, September 10, 2018 at 2:22:16 PM UTC-5, Quadibloc wrote:
> >> On Monday, September 2, 1991 at 9:43:51 AM UTC-6, Mark Smotherman
> >> wrote:
> >>> Is there an example of an MISD machine (i.e., from Flynn
> >>> notation: {Single|Multiple}Inst.stream{S|M}Data.stream), or is it
> >>> generally agreed that such a beast doesn't make sense?
> >>
> >> I think it clearly does not make sense.
> >
> > I can remember an example from Whetstone::
> >
> > x := sqrt(exp(ln(x)/t1));
> >
> > This looks pretty much like it could be tackled by MISD.
> >
>
> Would it be legal to recognize that expression and replace it with
>
> x = exp(ln(x)/(t1*2));
>
> or even
>
> x = pow(x, -t1*2);
>
> I.e. you are trying to calculate the (t1*2)'th root of x here, which
> should be OK at least as long as x is positive.

Done properly, POW(x,-t1*2) is likely more accurate than EXP(ln(x)/(t1*2))
since the intermediate product can be separated into the integer part of
ln(x) and the fraction part of ln(x) allowing fairly easy access to 58-62-bits
of intermediate; both of which can be multiplied by -2*t1.

> Using a good pow() implementation you should be able to calculate this
> in a few tens of clock cycles, right?

My transcendental evaluator can do this in 38 cycles DP.

MitchAlsup

Sep 12, 2018, 12:34:49 PM
I am sorry, the correct number is 34 cycles.

Terje Mathisen

Sep 12, 2018, 2:36:42 PM
Very sorry indeed. :-)

From what you wrote above it sounds like your pow is in fact using ln()
and exp() as the building blocks: Is there a good way to calculate it
directly?

I would at least try log2 and 2^x instead of ln and exp, particularly
the second half should be faster/more accurate.

MitchAlsup

Sep 12, 2018, 3:31:46 PM
There is a whole page of tests for special conditions, but once these
are performed, one decomposes the FP operand and the performs Ln2
on the fraction part, while extracting the exponent.

s = SIGN( y );
y = ABS( y );
expon = EXPON( y ); // Ln2 from the exponent
fract = FRACT( y );
ln2F = ln2( fract ); // Ln2 from the fraction
// expon<12> + ln2F<64> is the ln2 of y to 78 bits

HW can do the above in ZERO cycles.
Then one performs a high precision multiplication

// perform x*ln2(y)
expnX = (double)expon * x;
expnY = ln2F * x;

HW does not have to regroup the integer and fractional parts but any SW
implementation would. HW also does not have to (double) the integer either

// regroup integer exponent and FP fraction
expon = NINT( expnX + expnY );
fract = ( expnX - (double)expon ) + expnY;
// The only point of error insertion is exp2

Then one simply stitches it all back together.

// perform exp2( x*ln2(y) )
if( s ) return EADD( -expon, exp2( -fract ) );
else return EADD( expon, exp2( fract ) );

The only error is in the ln2( fract ) and in the exp2( fract ). In HW, ln2
generates a 64-bit result which would have been rounded into 53 bits, but
it remains 64-bits in the high precision multiply. exp2, also, takes a
58-bit argument and generates a 64-bit result (pre rounding). It just takes
a bit of logic to keep all of the binary point in line so the arithmetic can
all be done in fixed point.

MitchAlsup

Sep 12, 2018, 3:52:29 PM
Except for the ln2(fract) part

ashish....@gmail.com

Apr 1, 2020, 2:43:13 AM
I think an example of MISD would be an encryption process: they break a single instruction into multiple instructions that operate on the same data.

joshua.l...@gmail.com

Apr 4, 2020, 12:33:29 PM
On Monday, 2 September 1991 16:43:51 UTC+1, Mark Smotherman wrote:
> Is there an example of an MISD machine
> (i.e., from Flynn notation: {Single|Multiple}Inst.stream{S|M}Data.stream),
> or is it generally agreed that such a beast doesn't make sense?

An atomic integer operated on by multiple processors looks MISD,
as do some other lock-free parallel data structures. Each processor
is operating on the same data, each doing different things.

I'd argue my Restricted Asynchronous Dataflow idea that chains
a bunch of operations to a waiting load might count as MISD.