CPU Design HOW-TO

杨毅涛

Mar 29, 2012, 1:39:41 PM
to sh...@googlegroups.com

SHLUG feels like a place where technology can be discussed purely on its merits. I came across this article a few days ago, so I would like to discuss CPU design and open-source CPU cores with everyone here, and I hope those of you who have played with CPU design or open-source CPUs can offer some pointers. The article seems a bit dated, and many of the links in it no longer work, but there is still plenty worth discussing. I first saw this article in what seemed to be an SGI technical thread; I could never track down its original source afterwards, but a copy of the document survives.
In short, having read through this article, I am considering the feasibility of building a CPU myself, or of designing one. IT history has shown more than once that many excellent technologies came from hobbyists. Would anyone like to study this together?

CPU Design HOW-TO
Al Dev (Alavoor Vasudevan)
v12.0, 14 June 2001
The CPU is the "brain" of the computer, a vital component of the computer system, and something of a "close cousin" of the operating system (Linux or Unix). This document helps companies, businesses, universities and research institutes design, build and manufacture CPUs. The information will also be useful to university students of the U.S.A. and Canada who are studying computer science/engineering. The document contains URL links that help students understand how a CPU is designed and manufactured. Perhaps in the near future there will be a GNU/GPLed CPU running the Linux, Unix, Microsoft Windows, Apple Mac and BeOS operating systems!
1. Introduction
2. What is IP?
· 2.1 Free CPU List
· 2.2 Commercial CPU List
3. CPU Museum and Silicon Zoo
· 3.1 CPU Museum
· 3.2 How Transistors work
· 3.3 How a Transistor handles information
· 3.4 Displaying binary information
· 3.5 What is a Semiconductor?
4. CPU Design and Architecture
· 4.1 CPU Design
· 4.2 Online Textbooks on CPU Architecture
· 4.3 University Lecture notes on CPU Architecture
· 4.4 CPU Architecture
· 4.5 Usenet Newsgroups for CPU design
5. Fabrication, Manufacturing CPUs
· 5.1 Foundry Business is in Billions of dollars!!
· 5.2 Fabrication of CPU
6. Super Computer Architecture
· 6.1 Main Architectural Classes
· 6.2 SISD machines
· 6.3 SIMD machines
· 6.4 MISD machines
· 6.5 MIMD machines
· 6.6 Distributed Processing Systems
· 6.7 ccNUMA machines
7. Neural Network Processors
8. Related URLs
9. Other Formats of this Document
· 9.1 Acrobat PDF format
· 9.2 Convert Linuxdoc to Docbook format
· 9.3 Convert to MS WinHelp format
· 9.4 Reading various formats
10. Copyright
1. Introduction
This document provides a comprehensive list of URLs for CPU design and fabrication. Using this information, students, companies, universities or businesses can build new CPUs that run the Linux/Unix operating systems.
In olden days, chip vendors were also the IP developers and the EDA tools developers. Nowadays, we have
specialized fab companies (TSMC http://www.tsmc.com), IP companies (ARM http://www.arm.com, MIPS
http://www.mips.com, Gray Research LLC http://cnets.sourceforge.net/grllc.html ), and tools companies (
Mentor http://www.mentor.com, Cadence http://www.cadence.com, etc.), and combinations of these (Intel).
You can buy IP bundled with hardware (Intel), bundled with your tools (EDA companies), or separately (IP
providers).
Enter the FPGA vendors (Xilinx http://www.xilinx.com, Altera http://www.altera.com). They have an
opportunity to seize upon a unique business model.
VA Linux systems http://www.valinux.com builds the entire system and perhaps in future will design and
build CPUs for Linux.
Visit the following CPU design sites:
· FPGA Main site http://www.fpgacpu.org
· OpenRISC 1000 free open-source 32-bit RISC processor IP core competing with proprietary ARM
· Open IP org http://www.openip.org
· Free IP org - ASIC and FPGA cores for the masses http://www.free-ip.com
2. What is IP?
What is IP? IP is short for Intellectual Property. More specifically, it is a block of logic that can be used in making ASICs and FPGAs. Examples of "IP cores" are UARTs, CPUs, Ethernet controllers, PCI interfaces, etc. In the past, quality cores of this nature could cost anywhere from US$5,000 to more than US$350,000. This is far too high for the average company or individual to even contemplate using -- hence, the Free-IP project.
Initially the Free-IP project will focus on the more complex cores, like CPUs and Ethernet controllers. Less complex cores might follow.
The Free-IP project is an effort to make quality IP available to anyone.
Visit the following sites for IP cores −
· Open IP org http://www.openip.org
· Free IP org - ASIC and FPGA cores for the masses http://www.free-ip.com
· FPGA Main site http://www.fpgacpu.org
2.1 Free CPU List
Here is the list of free CPUs available or currently under development −
· F-CPU 64-bit Freedom CPU http://www.f-cpu.org, mirror site at http://www.f-cpu.de
· European Space Agency - SPARC architecture LEON CPU http://www.estec.esa.nl/wsmwww/leon
· European Space Agency - ERC32 SPARC V7 CPU http://www.estec.esa.nl/wsmwww/erc32
· Atmel ERC32 SPARC part # TSC695E http://www.atmel-wm.com/products, click on Aerospace=>Space=>Processors
· Sayuri at http://www.morphyplanning.co.jp/Products/FreeCPU/freecpu-e.html, manufactured by Morphy Planning Ltd at http://www.morphyone.org
· OpenRISC 1000 free 32-bit processor IP core competing with proprietary ARM and MIPS
· OpenRISC 2000 at http://www.opencores.org
· STM 32-bit, 2-way superscalar RISC CPU http://www.asahi-net.or.jp/~uf8e-itu
· Green Mountain - GM HC11 CPU core at http://www.gmvhdl.com/hc11core.html
· Open-source CPU site - Google search "Computers>Hardware>Open Source"
· Free microprocessor and DSP IP cores written in Verilog or VHDL http://www.cmosexod.com
· Free hardware cores to speed development http://www.scrap.de/html/opencore.htm
· Linux open hardware and free EDA systems http://opencollector.org
2.2 Commercial CPU List
· Russian E2K 64-bit CPU (very fast CPU!) website: http://www.elbrus.ru/roadmap/e2k.html. ELBRUS is now partnered (alliance) with Sun Microsystems of the USA.
· Korean CPU from Samsung, a 64-bit CPU originally from DEC Alpha http://www.samsungsemi.com; the Alpha 64-bit CPU is at http://www.alpha-processor.com. There is now collaboration between Samsung and Compaq of the USA on the Alpha CPU.
· Transmeta Crusoe CPU, and in the near future Transmeta's 64-bit CPU http://www.transmeta.com
· Sun UltraSPARC 64-bit CPU http://www.sun.com or http://www.sunmicrosystems.com
· MIPS RISC CPUs http://www.mips.com
· Silicon Graphics MIPS architecture CPUs http://www.sgi.com/processors
· IDT MIPS architecture CPUs http://www.idt.com
· Motorola embedded processors. SPS processors based on PowerPC, M-CORE, ColdFire, M68k, or M68HC cores http://www.mot-sps.com
· Hitachi SuperH 64-bit RISC processor SH7750 http://www.hitachi.com, sold at $40 per CPU in quantities of 10,000. Hitachi SH4, SH3, SH2, SH1 CPUs http://semiconductor.hitachi.com/superh
· Fujitsu 64-bit processor http://www.fujitsu.com
· HAL-Fujitsu (California) SuperSPARC 64-bit processor http://www.hal.com, also compatible with Sun's SPARC architecture
· Siemens Pyramid CPU from Pyramid Technologies
· Intel X86 series 32-bit CPUs: Pentium, Celeron, etc.
· AMD's X86 series 32-bit CPUs: K6, Athlon, etc.
· National's Cyrix X86 series 32-bit CPUs
· QED RISC 64-bit and MIPS CPUs: http://www.qedinc.com/about.htm
· Origin 2000 CPU - http://techpubs.sgi.com/library/manuals/3000/007-3511-001/html/O2000Tuning.1.html
· Univ. of Michigan high-performance GaAs microprocessor project http://www.eecs.umich.edu/UMichMP
· Hyperstone E1-32 RISC/DSP processor http://bwrc.eecs.berkeley.edu/CIC/tech/hyperstone
· PSC1000 32-bit RISC processor http://www.ptsc.com/psc1000/index.html
· IDT R/RV4640 and R/RV4650 64-bit CPUs with DSP capability
· Cogent CPUs http://www.cogcomp.com
· CPU Info Center - list of CPUs (SPARC, ARM, etc.) http://bwrc.eecs.berkeley.edu/CIC/tech
· Main CPU site: Google search "Computers>Hardware>Components>Microprocessors"
Other important CPU sites are at −
Other important CPU sites are at −
· World-wide 24-hour news on CPUs
· The computer architecture site is at http://www.cs.wisc.edu/~arch/www
· Microdesign resources http://www.mdronline.com
3. CPU Museum and Silicon Zoo
This chapter gives the very basics of CPU technology. If you have a good technical background you can skip this entire chapter.
3.1 CPU Museum
Visit the following CPU museum sites −
· Intel − History of Microprocessors http://www.intel.com/intel/museum/25anniv
· Virtual Museum of Computing http://www.museums.reading.ac.uk/vmoc
· Intel − How the Microprocessors work http://www.intel.com/education/mpuworks
· Simple course in Microprocessors http://www.hkrmicro.com/course/micro.html
3.2 How Transistors work
Microprocessors are essential to many of the products we use every day such as TVs, cars, radios, home
appliances and of course, computers. Transistors are the main components of microprocessors. At their most
basic level, transistors may seem simple. But their development actually required many years of painstaking
research. Before transistors, computers relied on slow, inefficient vacuum tubes and mechanical switches to
process information. In 1958, engineers (one of them Intel founder Robert Noyce) managed to put two
transistors onto a silicon crystal and create the first integrated circuit that led to the microprocessor.
Transistors are miniature electronic switches. They are the building blocks of the microprocessor which is the
brain of the computer. Similar to a basic light switch, transistors have two operating positions, on and off.
This on/off, or binary functionality of transistors enables the processing of information in a computer.
How a simple electronic switch works:
The only information computers understand are electrical signals that are switched on and off. To
comprehend transistors, it is necessary to have an understanding of how a switched electronic circuit works.
Switched electronic circuits consist of several parts. One is the circuit pathway where the electrical current
flows − typically through a wire. Another is the switch, a device that starts and stops the flow of electrical
current by either completing or breaking the circuit's pathway. Transistors have no moving parts and are
turned on and off by electrical signals. The on/off switching of transistors facilitates the work performed by
microprocessors.
3.3 How a Transistor handles information
Something that has only two states, like a transistor, can be referred to as binary. The transistor's on state is represented by a 1 and the off state is represented by a 0. Specific sequences and patterns of 1's and 0's generated by multiple transistors can represent letters, numbers, colors and graphics. This is known as binary notation.
3.4 Displaying binary information
Spell your name in Binary:
Each character of the alphabet has a binary equivalent. Below is the name JOHN and its equivalent in binary.
J 0100 1010
O 0100 1111
H 0100 1000
N 0100 1110
More complex information can be created such as graphics, audio and video using the binary, or on/off action
of transistors.
Scroll down to the Binary Chart below to see the complete alphabet in binary.
Character Binary Character Binary
A 0100 0001 N 0100 1110
B 0100 0010 O 0100 1111
C 0100 0011 P 0101 0000
D 0100 0100 Q 0101 0001
E 0100 0101 R 0101 0010
F 0100 0110 S 0101 0011
G 0100 0111 T 0101 0100
H 0100 1000 U 0101 0101
I 0100 1001 V 0101 0110
J 0100 1010 W 0101 0111
K 0100 1011 X 0101 1000
L 0100 1100 Y 0101 1001
M 0100 1101 Z 0101 1010
Binary chart for the alphabet
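The chart above is the ASCII encoding of the upper-case alphabet, so a few lines of Python can reproduce any entry (the helper name to_binary is invented for this sketch):

```python
# Spell a name in binary using ASCII codes, as in the chart above.
def to_binary(name: str) -> list[str]:
    """Return the 8-bit binary pattern for each character of the name."""
    return [format(ord(ch), "08b") for ch in name]

for ch, bits in zip("JOHN", to_binary("JOHN")):
    # Print in the chart's "nibble nibble" layout, e.g. J 0100 1010
    print(ch, bits[:4], bits[4:])
```

Running this for "JOHN" reproduces exactly the four rows shown in the example above.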
3.5 What is a Semiconductor?
Conductors and insulators :
Many materials, such as most metals, allow electrical current to flow through them. These are known as
conductors. Materials that do not allow electrical current to flow through them are called insulators. Pure
silicon, the base material of most transistors, is considered a semiconductor because its conductivity can be
modulated by the introduction of impurities.
Anatomy of a Transistor
Semiconductors and the flow of electricity
Adding certain types of impurities to the silicon in a transistor changes its crystalline structure and enhances its ability to conduct electricity. Silicon containing boron impurities is called p-type silicon - p for positive, or lacking electrons. Silicon containing phosphorus impurities is called n-type silicon - n for negative, or having a majority of free electrons.
A Working Transistor
A Working transistor − The On/Off state of Transistor
Transistors consist of three terminals: the source, the gate and the drain.
In the n-type transistor, both the source and the drain are negatively charged and sit on a positively charged well of p-silicon.
When positive voltage is applied to the gate, electrons in the p−silicon are attracted to the area under the gate
forming an electron channel between the source and the drain.
When positive voltage is applied to the drain, the electrons are pulled from the source to the drain. In this
state the transistor is on.
If the voltage at the gate is removed, electrons aren't attracted to the area between the source and drain. The
pathway is broken and the transistor is turned off.
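The on/off behaviour described above can be caricatured as a boolean function (a deliberately simplified model written for this document; real transistors are analogue devices with threshold voltages and leakage):

```python
def nmos_conducts(gate_voltage_high: bool) -> bool:
    """Simplified n-type transistor: a positive gate voltage forms the
    electron channel, so current can flow from source to drain."""
    return gate_voltage_high

def series_path_on(gate_a: bool, gate_b: bool) -> bool:
    """Two transistors in series conduct only if both gates are high -
    the logical AND that underlies a CMOS NAND gate's pull-down network."""
    return nmos_conducts(gate_a) and nmos_conducts(gate_b)

print(series_path_on(True, True))   # both gates high -> path conducts
print(series_path_on(True, False))  # one gate low -> path broken
```

Composing such switches in series and in parallel is how gates, and ultimately whole CPUs, are built from transistors.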
Impact of Transistors
The Impact of Transistors − How microprocessors affect our lives.
The binary function of transistors gives microprocessors the ability to perform many tasks, from simple word processing to video editing. Microprocessors have evolved to a point where transistors can execute hundreds of millions of instructions per second on a single chip. Automobiles, medical devices, televisions, computers and even the Space Shuttle use microprocessors. They all rely on the flow of binary information made possible by the transistor.
4. CPU Design and Architecture
4.1 CPU Design
Visit the following links for information on CPU Design.
· Hamburg University VHDL archive http://tech-www.informatik.uni-hamburg.de/vhdl
· List of FPGA−based Computing Machines http://www.io.com/~guccione/HW_list.html
· SPARC International http://www.sparc.com
· Design your own processor http://www.spacetimepro.com
· Teaching Computer Design with FPGAs http://www.fpgacpu.org
· Technical Committee on Computer Architecture http://www.computer.org/tab/tcca
· Frequently Asked Questions (FAQ) on VHDL http://www.vhdl.org/vi/comp.lang.vhdl
· Comp arch FAQ ftp://rtfm.mit.edu/pub/usenet-by-hierarchy/comp/arch
· Linux benchmarks http://www.silkroad.com/linux-bm.html
4.2 Online Textbooks on CPU Architecture
· Univ of Texas comp arch
· Number systems and logic circuits: http://www.tpub.com/neets/book13/index.htm
· Instruction execution cycle: http://cq-pan.cqu.edu.au/students/timp1/exec.html
· Overview of shared memory: http://www.sics.se/cna/mp_overview.html
· Simultaneous multi-threading in processors: http://www.cs.washington.edu/research/smt
· Advice: An Adaptable and Extensible Distributed Virtual Memory Architecture
· Univ of Utah Avalanche Scalable Parallel Processor Project
· PISMA memory architecture: http://aiolos.cti.gr/en/pisma/pisma.html
· Comp arch conferences and journals http://www.handshake.de/user/kroening/conferences.html
· WWW comp arch page http://www.cs.wisc.edu/~arch/www
4.3 University Lecture notes on CPU Architecture
· Computer architecture - course level 415 http://www.diku.dk/teaching/2000f/f00.415
· Rutgers Univ - Principles of Comp Arch
· Univ of Sydney - Intro Digital Systems: http://www.eelab.usyd.edu.au/digital_tutorial/part3
· Bournemouth Univ, UK - Principles of Computer Systems
· Parallel virtual machine: http://www.netlib.org/pvm3/book/node1.html
· Examples of working VLSI circuits (in Greek) http://students.ceid.upatras.gr/~gef/projects/vlsi
4.4 CPU Architecture
Visit the following links for information on CPU architecture −
· Beyond RISC - The Post-RISC Architecture http://www.cps.msu.edu/~crs/cps920
· CPU Info Center - list of CPUs (SPARC, ARM, etc.) http://bwrc.eecs.berkeley.edu/CIC/tech
· Intel IA-64 CPU architecture http://developer.intel.com/design/ia-64
· Intel 386 CPU architecture http://www.delorie.com/djgpp/doc/ug/asm/about-386.html
· Freedom CPU architecture http://f-cpu.tux.org/original/Freedom.php3
· CRIMSEN OS and teaching-aid CPU http://www.dcs.gla.ac.uk/~ian/project3/node1.html
· TRON CPU architecture http://tronweb.super-nova.co.jp/tronvlsicpu.html
4.5 Usenet Newsgroups for CPU design
· Newsgroup computer architecture news:comp.arch
· Newsgroup FPGA news:comp.arch.fpga
· Newsgroup Arithmetic news:comp.arch.arithmetic
· Newsgroup Bus news:comp.arch.bus
· Newsgroup VME Bus news:comp.arch.vmebus
· Newsgroup embedded news:comp.arch.embedded
· Newsgroup embedded piclist news:comp.arch.embedded.piclist
· Newsgroup storage news:comp.arch.storage
· Newsgroup VHDL news:comp.lang.vhdl
· Newsgroup Computer Benchmarks news:comp.benchmarks
5. Fabrication, Manufacturing CPUs
After designing and testing your CPU, your company may want to mass-produce it. There are many "semiconductor foundries" in the world that will do that for you for a nominal, competitive cost. There are companies in the USA, Germany, the UK, Japan, Taiwan, Korea and China.
TSMC (Taiwan) is the "largest independent foundry" in the world. You may want to shop around; you will get the best rate for very high-volume production (greater than 100,000 CPU units).
5.1 Foundry Business is in Billions of dollars!!
Foundry companies have invested very heavily in infrastructure, and building a plant runs to several millions of dollars! The silicon foundry business will grow from $7 billion to $36 billion by 2004 (a 414% increase!). More integrated device manufacturers (IDMs) are opting to outsource chip production versus adding wafer-processing capacity.
Independent foundries currently produce about 12% of the semiconductors in the world, and by 2004 that share will more than double to 26%.
The "Big Three" pure-play foundries -- Taiwan Semiconductor Manufacturing Co. (TSMC), United Microelectronics Corp. (UMC), and Chartered Semiconductor Manufacturing Ltd. Pte. -- collectively account for 69% of today's silicon foundry volume, and their share is expected to grow to 88% by 2004.
5.2 Fabrication of CPU
There are hundreds of foundries in the world (too numerous to list). Some of them are −
· Fabless Semiconductor Association http://www.fsa.org
· TSMC (Taiwan Semiconductor Manufacturing Co) http://www.tsmc.com
· Chartered Semiconductor Manufacturing, Singapore http://www.csminc.com
· United Microelectronics Corp. (UMC) http://www.umc.com/index.html
· Advanced BGA Packing http://www.abpac.com
· Amkor, Arizona http://www.amkor.com
· Elume, USA http://www.elume.com
· X-FAB Gesellschaft zur Fertigung von Wafern mbH, Erfurt, Germany http://www.xfab.com
· IBM Corporation (semiconductor foundry div.) http://www.ibm.com
· National Semiconductor Co, Santa Clara, USA http://www.national.com
· Intel Corporation (semiconductor foundries), USA http://www.intel.com
· Hitachi Semiconductor Co, Japan http://www.hitachi.com
· Fujitsu Limited, Japan, has wafer foundry services
· Mitsubishi Semiconductor Co, Japan
· Hyundai Semiconductor, Korea http://www.hea.com
· Samsung Semiconductor, Korea
· Atmel, France http://www.atmel-wm.com
If you know of any major foundries not listed here, let me know and I will add them to the list.
List of CHIP foundry companies
6. Super Computer Architecture
For building supercomputers, the trend that seems to emerge is that most new systems look like minor variations on the same theme: clusters of RISC-based Symmetric Multi-Processing (SMP) nodes, which in turn are connected by a fast network. Consider this a natural architectural evolution. The availability of relatively low-cost (RISC) processors and network products to connect these processors, together with standardised communication software, has stimulated the building of home-brew cluster computers as an alternative to the complete systems offered by vendors.
Visit the following sites for Super Computers −
· Top 500 supercomputers http://www.top500.org/ORSC/2000
· National Computing Facilities Foundation http://www.nwo.nl/ncf/indexeng.htm
· Linux supercomputer Beowulf cluster http://www.linuxdoc.org/HOWTO/Beowulf-HOWTO.html
· Extreme machines - Beowulf cluster http://www.xtreme-machines.com
· System architecture description of the Hitachi SR2201
· Personal parallel supercomputers http://www.checs.net/checs_98/papers/super
6.1 Main Architectural Classes
Before going on to the descriptions of the machines themselves, it is important to consider some mechanisms that are or have been used to increase performance. The hardware structure or architecture determines to a large extent what the possibilities and impossibilities are in speeding up a computer system beyond the performance of a single CPU. Another important factor that is considered in combination with the hardware is the capability of compilers to generate efficient code to be executed on the given hardware platform. In many cases it is hard to distinguish between hardware and software influences, and one has to be careful in the interpretation of results when ascribing certain effects to hardware or software peculiarities, or both. In this chapter we will give the most emphasis to the hardware architecture of machines that can be classified as "high-performance".
For many years the taxonomy of Flynn has proven to be useful for the classification of high-performance computers. This classification is based on the way instruction and data streams are manipulated and comprises four main architectural classes. We will first briefly sketch these classes and afterwards fill in some details when each of the classes is described.
6.2 SISD machines
These are the conventional systems that contain one CPU and hence can accommodate one instruction stream that is executed serially. Nowadays many large mainframes may have more than one CPU, but each of these executes instruction streams that are unrelated. Therefore, such systems should still be regarded as (a couple of) SISD machines acting on different data spaces. Examples of SISD machines are most workstations, like those of DEC, Hewlett-Packard, and Sun Microsystems. The definition of SISD machines is given here for completeness' sake; we will not discuss this type of machine in this report.
6.3 SIMD machines
Such systems often have a large number of processing units, ranging from 1,024 to 16,384, that all may execute the same instruction on different data in lock-step. So, a single instruction manipulates many data items in parallel. Examples of SIMD machines in this class are the CPP DAP Gamma II and the Alenia Quadrics.
Another subclass of the SIMD systems are the vector processors. Vector processors act on arrays of similar data rather than on single data items, using specially structured CPUs. When data can be manipulated by these vector units, results can be delivered at a rate of one, two and --- in special cases --- three per clock cycle (a clock cycle being defined as the basic internal unit of time for the system). So, vector processors execute on their data in an almost parallel way, but only when executing in vector mode. In this case they are several times faster than when executing in conventional scalar mode. For practical purposes vector processors are therefore mostly regarded as SIMD machines. An example of such a system is the Hitachi S3600.
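The lock-step behaviour described above can be sketched in a few lines of Python (purely illustrative; the function name simd_step and the list-based "lanes" are inventions for this sketch, not a real SIMD instruction set, and a real machine applies the instruction to all lanes in hardware within a single cycle):

```python
def simd_step(instruction, data_lanes):
    """Apply one broadcast instruction to every data lane, the way a
    SIMD machine drives all its processing elements in lock-step."""
    return [instruction(x) for x in data_lanes]

lanes = [1, 2, 3, 4]
# A single "multiply by 2" instruction operates on all lanes at once.
doubled = simd_step(lambda x: x * 2, lanes)
print(doubled)  # [2, 4, 6, 8]
```

The key property is that there is exactly one instruction stream; only the data differs per lane.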
6.4 MISD machines
Theoretically, in this type of machine multiple instructions should act on a single stream of data. As yet no practical machine in this class has been constructed, nor are such systems easy to conceive. We will disregard them in the following discussions.
6.5 MIMD machines
These machines execute several instruction streams in parallel on different data. The difference from the multi-processor SISD machines mentioned above lies in the fact that the instructions and data are related, because they represent different parts of the same task to be executed. So, MIMD systems may run many sub-tasks in parallel in order to shorten the time-to-solution for the main task to be executed. There is a large variety of MIMD systems, and especially in this class the Flynn taxonomy proves not to be fully adequate for the classification of systems. Systems that behave very differently, like a four-processor NEC SX-5 and a thousand-processor SGI/Cray T3E, both fall into this class. In the following we will make another important distinction between classes of systems and treat them accordingly.
Shared memory systems
Shared memory systems have multiple CPUs, all of which share the same address space. This means that the knowledge of where data is stored is of no concern to the user, as there is only one memory, accessed by all CPUs on an equal basis. Shared memory systems can be either SIMD or MIMD. Single-CPU vector processors can be regarded as an example of the former, while the multi-CPU models of these machines are examples of the latter. We will sometimes use the abbreviations SM-SIMD and SM-MIMD for the two subclasses.
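The single address space described above can be mimicked with a threaded Python sketch (the threads stand in for CPUs and the lock stands in for the machine's arbitration hardware; this is a programming-level model, not how an SMP works internally):

```python
import threading

counter = 0                 # one memory location, visible to all "CPUs"
lock = threading.Lock()

def cpu_work(increments: int) -> None:
    """Each thread plays the role of one CPU updating shared memory."""
    global counter
    for _ in range(increments):
        with lock:          # serialize access to the shared location
            counter += 1

threads = [threading.Thread(target=cpu_work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every "CPU" saw and updated the same variable
```

No thread needs to know "where" counter lives; that is exactly the convenience of the shared-memory model.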
Distributed memory systems
In this case each CPU has its own associated memory. The CPUs are connected by some network and may
exchange data between their respective memories when required. In contrast to shared memory machines the
user must be aware of the location of the data in the local memories and will have to move or distribute these
data explicitly when needed. Again, distributed memory systems may be either SIMD or MIMD. The first
class of SIMD systems mentioned which operate in lock step, all have distributed memories associated to the
processors. As we will see, distributed−memory MIMD systems exhibit a large variety in the topology of
their connecting network. The details of this topology are largely hidden from the user which is quite helpful
with respect to portability of applications. For the distributed−memory systems we will sometimes use
DM-SIMD and DM-MIMD to indicate the two subclasses. Although the difference between shared- and distributed-memory machines seems clear-cut, this is not always entirely the case from the user's point of view.
For instance, the late Kendall Square Research systems employed the idea of "virtual shared memory" on a
hardware level. Virtual shared memory can also be simulated at the programming level: A specification of
High Performance Fortran (HPF) was published in 1993 which by means of compiler directives distributes
the data over the available processors. Therefore, the system on which HPF is implemented in this case will
look like a shared memory machine to the user. Other vendors of Massively Parallel Processing systems
(sometimes called MPP systems), like HP and SGI/Cray, also are able to support proprietary virtual
shared−memory programming models due to the fact that these physically distributed memory systems are
able to address the whole collective address space. So, for the user such systems have one global address
space spanning all of the memory in the system. We will say a little more about the structure of such systems
in the ccNUMA section. In addition, packages like TreadMarks provide a virtual shared memory
environment for networks of workstations.
6.6 Distributed Processing Systems
Another trend that has come up in the last few years is distributed processing. This takes the DM-MIMD concept one step further: instead of many integrated processors in one or several boxes, workstations, mainframes, etc., are connected by (Gigabit) Ethernet, FDDI, or otherwise, and set to work concurrently on tasks in the same program. Conceptually, this is no different from DM-MIMD computing, but the communication between processors is often orders of magnitude slower. Many packages to realise distributed computing are available. Examples of these are PVM (standing for Parallel Virtual Machine) and MPI (Message Passing Interface). This style of programming, called the "message passing" model, has become so widely accepted that PVM and MPI have been adopted by virtually all major vendors of distributed-memory MIMD systems, and even on shared-memory MIMD systems for compatibility reasons. In addition there is a tendency to cluster shared-memory systems, for instance by HiPPI channels, to obtain systems with a very high computational power. E.g., the NEC SX-5 and the SGI/Cray SV1 have this structure. So, within the clustered nodes a shared-memory programming style can be used, while between clusters message passing should be used.
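The message-passing style of PVM and MPI can be imitated with Python's standard multiprocessing module (a stand-in for illustration only; real PVM/MPI programs use those libraries' own APIs, and the worker/rank naming here merely echoes MPI convention):

```python
from multiprocessing import Process, Queue

def worker(rank: int, chunk: list, results: Queue) -> None:
    """Each process owns its chunk of data (distributed memory) and
    sends its partial result back as an explicit message."""
    results.put((rank, sum(chunk)))

if __name__ == "__main__":
    data = list(range(100))
    results: Queue = Queue()
    # Distribute the data explicitly - the defining chore of the model.
    procs = [Process(target=worker, args=(r, data[r::4], results))
             for r in range(4)]
    for p in procs:
        p.start()
    total = sum(results.get()[1] for _ in procs)
    for p in procs:
        p.join()
    print(total)  # 4950, assembled from four partial sums
```

Note how, unlike the shared-memory sketch earlier, the programmer must distribute the data and collect the results by hand; that is the price of the distributed-memory model.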
6.7 ccNUMA machines
As already mentioned in the introduction, a trend can be observed to build systems that have a rather small
(up to 16) number of RISC processors that are tightly integrated in a cluster, a Symmetric Multi−Processing
(SMP) node. The processors in such a node are virtually always connected by a 1−stage crossbar while these
clusters are connected by a less costly network.
This is similar to the approach mentioned above for large vector processor ensembles, but with the important difference that all of the processors can access all of the address space. Therefore, such systems can be considered as SM-MIMD machines. On the other hand, because the memory is physically distributed, it cannot be guaranteed that a data access operation will always be satisfied within the same time. Therefore such machines are called ccNUMA systems, where ccNUMA stands for Cache Coherent Non-Uniform Memory Access. The term "Cache Coherent" refers to the fact that for all CPUs any variable that is to be used must have a consistent value. Therefore, it must be assured that the caches that provide these variables are also consistent in this respect. There are various ways to ensure that the caches of the CPUs are coherent. One is the snoopy bus protocol, in which the caches listen in on transport of variables to any of the CPUs and update their own copies of these variables if they have them. Another way is the directory memory, a special part of memory which makes it possible to keep track of all copies of variables and of their validity.
For all practical purposes we can classify these systems as being SM−MIMD machines also because special
assisting hardware/software (such as a directory memory) has been incorporated to establish a single system
image although the memory is physically distributed.
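The snoopy bus idea can be modelled in miniature (a toy write-invalidate scheme invented for this sketch; it does not correspond to any real protocol specification such as MESI, and real snooping happens in cache-controller hardware):

```python
class SnoopyCache:
    """Toy write-invalidate cache: every cache listens on a shared bus
    and drops its copy when another CPU writes the same address."""
    def __init__(self, bus):
        self.lines = {}
        bus.append(self)                    # join the shared bus

    def read(self, memory, addr):
        if addr not in self.lines:          # miss: fetch from memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def write(self, memory, bus, addr, value):
        memory[addr] = value
        self.lines[addr] = value
        for cache in bus:                   # broadcast on the bus
            if cache is not self:
                cache.lines.pop(addr, None) # snoopers invalidate stale copies

memory = {0x10: 1}
bus = []
cpu0, cpu1 = SnoopyCache(bus), SnoopyCache(bus)
cpu1.read(memory, 0x10)            # cpu1 caches the old value
cpu0.write(memory, bus, 0x10, 2)   # cpu0's write invalidates cpu1's copy
print(cpu1.read(memory, 0x10))     # 2: cpu1 re-fetches the fresh value
```

The invalidation step is what keeps every CPU's view of the variable consistent, which is precisely the "cache coherent" property the section describes.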
7. Neural Network Processors
Some NNs are models of biological neural networks and some are not, but historically, much of the inspiration for the field of NNs came from the desire to produce artificial systems capable of sophisticated, perhaps "intelligent", computations similar to those that the human brain routinely performs, and thereby possibly to enhance our understanding of the human brain.
Most NNs have some sort of "training" rule whereby the weights of connections are adjusted on the basis of
data. In other words, NNs "learn" from examples (as children learn to recognize dogs from examples of dogs)
and exhibit some capability for generalization beyond the training data.
NNs normally have great potential for parallelism, since the computations of the components are largely
independent of each other. Some people regard massive parallelism and high connectivity to be defining
characteristics of NNs, but such requirements rule out various simple models, such as simple linear regression
(a minimal feedforward net with only two units plus bias), which are usefully regarded as special cases of
NNs.
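The "training rule" idea can be made concrete with the classic perceptron update, one of the simplest neural-network learning rules (a minimal sketch; the learning rate and epoch count are arbitrary choices for this example):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Adjust connection weights from labelled examples - the simplest
    instance of the weight-adjusting rule described above."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out          # learn only from mistakes
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

# Learn the logical AND function from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in data])  # [0, 0, 0, 1]
```

Each weight update is independent per connection, which is exactly the per-element locality that makes NNs attractive targets for the parallel processors this chapter discusses.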
Some definitions of Neural Network (NN) are as follows:
· According to the DARPA Neural Network Study: A neural network is a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes.
· According to Haykin: A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
¨ Knowledge is acquired by the network through a learning process.
¨ Interneuron connection strengths known as synaptic weights are used to store the knowledge.
· According to Nigrin: A neural network is a circuit composed of a very large number of simple processing elements that are neurally based. Each element operates only on local information. Furthermore, each element operates asynchronously; thus there is no overall system clock.
· According to Zurada: Artificial neural systems, or neural networks, are physical cellular systems which can acquire, store, and utilize experiential knowledge.
Visit the following sites for more info on Neural Network Processors
· Omers Neural Network pointers http://www.cs.cf.ac.uk/User/O.F.Rana/neural.html
· Automation corp Neural Network Processor hardware
8. Related URLs
Visit the following related URLs:
· Color Vim editor http://metalab.unc.edu/LDP/HOWTO/Vim−HOWTO.html
· Source code control system http://metalab.unc.edu/LDP/HOWTO/CVS−HOWTO.html
· Linux goodies main site http://www.aldev.8m.com and mirrors at http://aldev0.webjump.com,
angelfire, geocities, virtualave, 50megs, theglobe, NBCi, Terrashare, Fortunecity, Freewebsites,
Tripod, Spree, Escalix, Httpcity, Freeservers.
9. Other Formats of this Document
This document is published in 14 different formats namely − DVI, Postscript, Latex, Adobe Acrobat PDF,
LyX, GNU−info, HTML, RTF(Rich Text Format), Plain−text, Unix man pages, single HTML file, SGML
(Linuxdoc format), SGML (Docbook format), MS WinHelp format.
· This HOWTO document is located at http://www.linuxdoc.org − click on HOWTOs and search for
the document name using CTRL+f or ALT+f within the web browser.
· You can also find this document at the following mirror sites. Other mirror sites near you
(network−address−wise) can be found at http://www.linuxdoc.org/mirrors.html − select a site and go
to the directory /LDP/HOWTO/xxxxx−HOWTO.html
· You can get this HOWTO document as a single−file tar ball in HTML, DVI, Postscript or SGML.
· Plain text format is in: ftp://www.linuxdoc.org/pub/Linux/docs/HOWTO
· Single HTML file format is in: http://www.linuxdoc.org/docs.html#howto The single HTML file
can be created with the command (see man sgml2html): sgml2html −split 0 xxxxhowto.sgml
· Translations to other languages like French, German, Spanish, Chinese and Japanese are in
http://www.linuxdoc.org/docs.html#howto Any help from you to translate to other languages is
welcome.
The document is written using a tool called "SGML−Tools" which can be obtained from
http://www.sgmltools.org Compiling the source gives you the following commands:
· sgml2html xxxxhowto.sgml (to generate an html file)
· sgml2html −split 0 xxxxhowto.sgml (to generate a single−page html file)
· sgml2rtf xxxxhowto.sgml (to generate an RTF file)
· sgml2latex xxxxhowto.sgml (to generate a latex file)
9.1 Acrobat PDF format
A PDF file can be generated from a PostScript file using either Acrobat Distiller or Ghostscript, and the
PostScript file is generated from DVI, which in turn is generated from the LaTeX file. You can download the
Distiller software from http://www.adobe.com. Given below is a sample session:
bash$ man sgml2latex
bash$ sgml2latex filename.sgml
bash$ man dvips
bash$ dvips −o filename.ps filename.dvi
bash$ distill filename.ps
bash$ man ghostscript
bash$ man ps2pdf
bash$ ps2pdf input.ps output.pdf
bash$ acroread output.pdf &
Or you can use the Ghostscript command ps2pdf. ps2pdf is a work−alike for nearly all the functionality of
Adobe's Acrobat Distiller product: it converts PostScript files to Portable Document Format (PDF) files.
ps2pdf is implemented as a very small command script (batch file) that invokes Ghostscript, selecting a
special "output device" called pdfwrite. In order to use ps2pdf, the pdfwrite device must have been included
in the makefile when Ghostscript was compiled; see the documentation on building Ghostscript for details.
9.2 Convert Linuxdoc to Docbook format
This document is written in the linuxdoc SGML format. The Docbook SGML format supersedes the linuxdoc
format and has many more features, though linuxdoc is very simple and easy to use. To convert a
linuxdoc SGML file to Docbook SGML, use the program ld2db.sh and some perl scripts. The ld2db output is
not 100% clean, so you need to use the clean_ld2db.pl perl script, and you may need to manually correct a few
lines in the document.
· Download the ld2db program from http://www.dcs.gla.ac.uk/~rrt/docbook.html or from the Al Dev site
· Download the cleanup_ld2db.pl perl script from the Al Dev site
The ld2db.sh output is not 100% clean; you will get lots of errors when you run:
bash$ ld2db.sh file−linuxdoc.sgml db.sgml
bash$ cleanup.pl db.sgml > db_clean.sgml
bash$ gvim db_clean.sgml
bash$ docbook2html db.sgml
You may also have to manually edit some of the minor errors after running the perl script. For example, you
may need to put a closing tag </Para> for each <Listitem>.
9.3 Convert to MS WinHelp format
To convert the SGML HOWTO document to a Microsoft Windows Help file, first convert the SGML to
HTML using:
bash$ sgml2html xxxxhowto.sgml (to generate html file)
bash$ sgml2html −split 0 xxxxhowto.sgml (to generate a single page html file)
Then use the tool HtmlToHlp. You can also use sgml2rtf and then use the RTF files to generate the WinHelp
files.
9.4 Reading various formats
In order to view the document in DVI format, use the xdvi program. The xdvi program is located in the
tetex−xdvi*.rpm package in Redhat Linux, which can be located through the ControlPanel | Applications |
Publishing | TeX menu buttons. To read a DVI document, give the commands:
xdvi −geometry 80x90 howto.dvi
man xdvi
Resize the window with the mouse. To navigate, use the Arrow keys and the Page Up and Page Down keys;
you can also use the 'f', 'd', 'u', 'c', 'l', 'r', 'p', 'n' letter keys to move up, down, center, to the next page, to
the previous page, etc. To turn off the expert menu, press 'x'.
You can read a postscript file using the program 'gv' (ghostview) or 'ghostscript'. The ghostscript program is
in the ghostscript*.rpm package and the gv program is in the gv*.rpm package in Redhat Linux, which can be
located through the ControlPanel | Applications | Graphics menu buttons. The gv program is much more
user−friendly than ghostscript. Also, ghostscript and gv are available on other platforms like OS/2,
Windows 95 and NT, so you can view this document even on those platforms.
· Get ghostscript for Windows 95, OS/2, and for all OSes from http://www.cs.wisc.edu/~ghost
To read postscript document give the command −
ghostscript howto.ps
You can read the HTML format document using Netscape Navigator, Microsoft Internet Explorer, Redhat
Baron Web browser or any of the 10 other web browsers.
You can read the latex and LyX output using LyX, an X Window System front end to latex.
10. Copyright
The copyright policy is GNU/GPL as per the LDP (Linux Documentation Project). LDP is a GNU/GPL
project. Additional restrictions are: you must retain the author's name, email address and this copyright
notice on all copies. If you make any changes or additions to this document, you should notify all the
authors of this document.

ghosTM55

Mar 29, 2012, 8:22:51 PM
to sh...@googlegroups.com
On Fri, Mar 30, 2012 at 1:39 AM, 杨毅涛 <yangy...@gmail.com> wrote:
> CPU Design HOW-TO

For a large block of text like this, I'd suggest just posting a link to the original article.

--
Thomas
Shanghai Linux User Group
GitCafe - Share a cup of open source

http://ghosTunix.org
Twitter: @ghosTM55

Qian Hong

Mar 29, 2012, 9:01:22 PM
to sh...@googlegroups.com

Let me take this thread as a chance to ask a few questions:
1. Does an open-source hardware toolchain capable of self-hosting exist today? The notion of "toolchain" is hard to pin down here: some people would say refining sand into monocrystalline silicon is the bottom layer of the toolchain, others that the equipment for making semiconductor chips counts as the bottom layer. I haven't settled on a view myself; I'm just thinking out loud with everyone. The OP is mainly interested in CPU design; my question is really about CPU fabrication.
2. Does anyone know the history of the open-source software toolchain well? How many years, and how much manpower and money, did it take to complete a self-hosting open-source software toolchain for the first time? By "open-source software toolchain" I mean an open-source kernel, an open-source shell and an open-source set of compilation tools; I don't know whether any toolchain predates the GNU toolchain. What I really care about is this: with the open-source hardware resources available to us today, how much manpower, money and time would it take to build a self-hosting open-source hardware toolchain? If open-source hardware could bootstrap itself, then perhaps one day we would no longer be plagued by NIC-driver and GPU-driver problems when using open-source software.

Neither question is rigorously defined; feel free to improvise :)

杨毅涛

Mar 29, 2012, 9:38:04 PM
to sh...@googlegroups.com
It seems this article really is just a big collection of links, and the original source genuinely can't be found any more.
As for the toolchain question, I believe this month's technical meetup will be discussing it.

杨毅涛

Mar 29, 2012, 9:47:37 PM
to sh...@googlegroups.com
To be honest, though, refining sand into polysilicon and the equipment for making chips are already fairly easy to get hold of; it mostly comes down to familiarity with the production process. If we get a little crazy about it, building that equipment ourselves is no longer much of a bottleneck. It's a matter of industrialization: go to a few industry expos and electronics shows, bring back some show catalogues, and look up on the web what the companies in them boast about. It's no longer a complicated business, and there are plenty of similar contract fabs in China.
Actually, I feel CPU design is the really core technology; the HOW-TO matters a lot. Fabrication is no longer something that can't be done in China.

Wizard

Mar 29, 2012, 10:11:50 PM
to sh...@googlegroups.com
On 30 March 2012 at 9:47 AM, 杨毅涛 <yangy...@gmail.com> wrote:
> To be honest, though, refining sand into polysilicon and the equipment for making chips are already fairly easy to get hold of; it mostly comes down to familiarity with the production process. If we get a little crazy about it, building that equipment ourselves is no longer much of a bottleneck. It's a matter of industrialization: go to a few industry expos and electronics shows, bring back some show catalogues, and look up on the web what the companies in them boast about. It's no longer a complicated business, and there are plenty of similar contract fabs in China.
> Actually, I feel CPU design is the really core technology; the HOW-TO matters a lot. Fabrication is no longer something that can't be done in China.
>
Just passing through; I know nothing about chips.
--
Wizard

小马xiaoma

Mar 29, 2012, 10:38:44 PM
to sh...@googlegroups.com
If you want to build your own CPU, first go and read the article below.

http://www.xiaohui.com/weekly/20050915.htm


On 30 March 2012 at 9:47 AM, 杨毅涛 <yangy...@gmail.com> wrote:

> Actually, I feel CPU design is the really core technology; the HOW-TO matters a lot. Fabrication is no longer something that can't be done in China.

Chaos Eternal

Mar 30, 2012, 12:07:27 AM
to sh...@googlegroups.com
I support you doing it, but what new thing do you plan to produce?

2012/3/30 杨毅涛 <yangy...@gmail.com>:

杨毅涛

Mar 30, 2012, 12:18:32 AM
to sh...@googlegroups.com
The technique in the link Xiaoma gave is what was popular 50 years ago. It's fine for fun; take, for example, the rat's-nest wiring board in one of the pictures. Overall, with that process you would struggle to build even an 8051.
What's popular now is open-source CPU cores on FPGAs. There is a movement called open-source hardware, and this is part of it. There are open-source GPUs too. But there is far too much to learn about it, so I hope to study it together with everyone: the characteristics of a given open core, how to build it, how to plan and modify the instruction set, and so on.
The most widely used open core used to be Sun's OpenSPARC, but I chatted with some Fujitsu people a while ago, and apparently since it went to Oracle it costs money, a lot of it.
As for PowerPC, Freescale reportedly wants to make it open, but the documentation is very expensive, and if you try to fabricate it, big brother IBM will supposedly fight you to the death as well.

杨毅涛

Mar 30, 2012, 12:35:14 AM
to sh...@googlegroups.com
As for actually making something: right now we are still at the stage of studying and exploring the technology. One hurdle we hit is money. At the moment a reasonably good FPGA development board starts at over 10,000 RMB, and if you go for a tape-out the cost is measured in millions of RMB. Then there is all the technical documentation and experience to acquire. We are still a long way from really making something.

liyaoshi

Mar 30, 2012, 12:36:08 AM
to sh...@googlegroups.com
Transistors have only existed since 1951.

On 30 March 2012 at 12:18 PM, 杨毅涛 <yangy...@gmail.com> wrote:

liyaoshi

Mar 30, 2012, 12:37:27 AM
to sh...@googlegroups.com
Even Panasonic has its own CPU.

杨毅涛

Mar 30, 2012, 12:38:22 AM
to sh...@googlegroups.com
On 30 March 2012 at 12:36 PM, liyaoshi <liya...@gmail.com> wrote:
Transistors have only existed since 1951.

That adds up to about 60 years by now. And remember, that picture still shows rat's-nest wiring. I looked at some rat's-nest wiring tools a while ago; a set of tools costs quite a bit of money, and much of it is consumables.

杨毅涛

Mar 30, 2012, 12:39:35 AM
to sh...@googlegroups.com
Plenty of companies have their own CPU: Apple has one, IBM has one, and HP has one too.

Bill Hsu

Mar 30, 2012, 1:58:49 AM
to sh...@googlegroups.com
I took a CPU design course once and implemented a simple CPU with Verilog on an FPGA; it was quite interesting.
The layout can be drawn with Electric, which is open-source software.



2012/3/30 杨毅涛 <yangy...@gmail.com>

Xuangui Huang

Mar 30, 2012, 2:50:37 AM
to sh...@googlegroups.com
I'm right in the middle of some computer organization lab course... it's exactly about implementing a CPU's various control signals in Verilog...

liyaoshi

Mar 30, 2012, 5:52:08 AM
to sh...@googlegroups.com
Take careful lecture notes; scan them and upload them here when you're done.

青月

Mar 30, 2012, 6:27:33 AM
to sh...@googlegroups.com
Hardware inherently costs money.
It's not like software, where even at the start it's only code, and once it's done you just copy and paste.

On 30 March 2012 at 12:35 PM, 杨毅涛 <yangy...@gmail.com> wrote:

Xuangui Huang

Mar 30, 2012, 9:21:20 AM
to sh...@googlegroups.com
The problem is that the software used in class is closed-source. And, unless I've misunderstood, the language it uses looks like a cross between C and Pascal... it doesn't feel like what a hardware description language should look like. (For example, for addition, in Verilog you can just use the + operator, but both the digital logic and computer organization courses said you should build the logic circuit out of PLDs or the like)...
I honestly don't really understand this.
Also, there aren't really any lecture notes, only lab manuals: the simple labs walk you through step by step, and the complex ones aren't clearly written either.
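The point above about Verilog's + operator versus gate-level design can be made concrete. The sketch below, written in Python purely for illustration, chains 1-bit full adders built from nothing but boolean equations into a ripple-carry adder, which is roughly the gate-level structure that a + on narrow operands abstracts away; the function names are my own.

```python
# A 1-bit full adder from its boolean equations, chained into a
# ripple-carry adder: the gate-level view hidden behind '+'.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                    # sum bit (two XOR gates)
    cout = (a & b) | (cin & (a ^ b))   # carry out (AND/OR gates)
    return s, cout

def ripple_add(x, y, width=8):
    carry, result = 0, 0
    for i in range(width):             # one full adder per bit position
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result                      # wraps modulo 2**width, like hardware

print(ripple_add(100, 55))  # → 155
```

Synthesis tools may map + to faster structures (e.g. carry-lookahead) rather than this literal ripple chain, so the sketch shows the principle, not necessarily what a given tool generates.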

ic crazycode

Mar 29, 2012, 8:31:29 PM
to sh...@googlegroups.com

I remember someone soldered a whole CPU together directly from discrete transistors.

Qian Hong

Mar 31, 2012, 12:31:06 AM
to sh...@googlegroups.com
2012/3/30 杨毅涛 <yangy...@gmail.com>:

> As for actually making something: right now we are still at the stage of studying and exploring the technology. One hurdle we hit is money. At the moment a reasonably good FPGA development board starts at over 10,000 RMB, and if you go for a tape-out the cost is measured in millions of RMB. Then there is all the technical documentation and experience to acquire. We are still a long way from really making something.

If we had an open-source hardware toolchain, would it help reduce costs and solve the money problem?
For example: start a company that produces one link of the open-source hardware toolchain, sell that link's tools to the downstream vendors who depend on the toolchain, and use the profits to fund your own research?


--
Regards,
Qian Hong

-
Sent from Ubuntu
http://www.ubuntu.com/

Ben Luo

Mar 31, 2012, 9:37:39 AM
to sh...@googlegroups.com
Fabrication really isn't that simple. It's just that TSMC is so good that people feel fabrication is no longer a problem. But these are things that start at 10 billion US dollars, so we needn't even think about them. CPU design, though, might really be something a few people or a small team can play with. Of the big fabs in Shanghai that I know of, the one that can do reasonably good logic is probably SMIC, but whether they take small orders I don't know. I'm not sure which other logic fabs are reliable, nor whether TSMC's mainland fab is in operation yet.

Also, the open power project has been around for many years; I don't know its current status. I think if we really want to play, we should stand on the shoulders of giants.

2012/3/30 杨毅涛 <yangy...@gmail.com>

Ben Luo

Mar 31, 2012, 9:40:25 AM
to sh...@googlegroups.com
Since it's open source, won't people simply copy it? A merchant's goal is to make money.

2012/3/31 Qian Hong <frac...@gmail.com>

liyaoshi

Mar 31, 2012, 9:49:03 AM
to sh...@googlegroups.com
On 31 March 2012 at 9:37 PM, Ben Luo <ben...@gmail.com> wrote:
Fabrication really isn't that simple. It's just that TSMC is so good that people feel fabrication is no longer a problem. But these are things that start at 10 billion US dollars, so we needn't even think about them. CPU design, though, might really be something a few people or a small team can play with. Of the big fabs in Shanghai that I know of, the one that can do reasonably good logic is probably SMIC, but whether they take small orders I don't know. I'm not sure which other logic fabs are reliable, nor whether TSMC's mainland fab is in operation yet.

TSMC opened a fab in Shanghai a long, long time ago. Back then the 8-inch lines went to the mainland and the 12-inch stayed in Taiwan, and it's probably still like that. In any case they will never put the most advanced process on the mainland, and the president wouldn't let them anyway.

Ben Luo

Mar 31, 2012, 9:55:53 AM
to sh...@googlegroups.com
Time flies; it's been 10 years since I served the semiconductor industry.

I don't know whether the chemicals for chip making are still imported. Without them, you can't make any chip at all.

2012/3/31 liyaoshi <liya...@gmail.com>

杨毅涛

Mar 31, 2012, 10:50:07 AM
to sh...@googlegroups.com
Today I heard from Xiaoma about something interesting: apparently chips that model the human brain can be made.

Ma Xiaojun

Apr 2, 2012, 8:46:36 AM
to sh...@googlegroups.com
There has long been a website called OpenCores
http://opencores.org/

Actually, getting capable open-source hardware takes just a few steps (daydreaming here):
1. Find a rich person, or become one yourself.
2. Find some hardware companies with good technology but poor sales, and buy them.
3. Open all the hardware design materials under the GPL, publish the source code of the original Windows drivers, and hand maintenance over to the "community".
4. The original employees keep maintaining the hardware, build solutions around it, and do kernel development...

Note that this open-sourcing must be under the GPL, so that nobody can use your code to build a proprietary product that overtakes you.

One possible difficulty: how do you persuade someone (who could profit from selling it) to manufacture GPL hardware?

Qian Hong

Apr 2, 2012, 8:53:03 AM
to sh...@googlegroups.com
2012/4/2 Ma Xiaojun <damag...@gmail.com>:

> Actually, getting capable open-source hardware takes just a few steps (daydreaming here):
> 1. Find a rich person, or become one yourself.
> 2. Find some hardware companies with good technology but poor sales, and buy them.
> 3. Open all the hardware design materials under the GPL, publish the source code of the original Windows drivers, and hand maintenance over to the "community".
> 4. The original employees keep maintaining the hardware, build solutions around it, and do kernel development...
>
> Note that this open-sourcing must be under the GPL, so that nobody can use your code to build a proprietary product that overtakes you.
>
> One possible difficulty: how do you persuade someone (who could profit from selling it) to manufacture GPL hardware?

I've daydreamed about this for a long time. The question is: is persuading someone to manufacture GPL hardware really any harder than becoming rich yourself?

Ma Xiaojun

Apr 2, 2012, 8:55:31 AM
to sh...@googlegroups.com
Another possibility is that hardware in the future moves from ASICs to FPGAs. Then there is no need to find someone for a tape-out; you just buy an off-the-shelf FPGA.

Roughly speaking, with an FPGA the hardware itself becomes programmable: you can load an ARM core into it, or a MIPS core. But at present ordinary people are unlikely to have licenses for those cores. Even among heavy Linux users, how many have heard of, let alone played with, OpenRISC? I haven't played with it either.

So either throw money at the problem as I suggested earlier, or do what Linus did and brilliantly start writing from scratch, until there is an ecosystem like the one Linux has today. From then on the way humans use computers might change completely, with CPU designs swapped out at will...

Ma Xiaojun

Apr 2, 2012, 8:56:57 AM
to sh...@googlegroups.com
You could become rich first; then every bug you've ever cared about would get fixed quickly, LOL.

Qian Hong

Apr 2, 2012, 9:02:45 AM
to sh...@googlegroups.com
2012/4/2 Qian Hong <frac...@gmail.com>:

> 2012/4/2 Ma Xiaojun <damag...@gmail.com>:
>> Actually, getting capable open-source hardware takes just a few steps (daydreaming here):
>> 1. Find a rich person, or become one yourself.
>> 2. Find some hardware companies with good technology but poor sales, and buy them.
>> 3. Open all the hardware design materials under the GPL, publish the source code of the original Windows drivers, and hand maintenance over to the "community".
>> 4. The original employees keep maintaining the hardware, build solutions around it, and do kernel development...
>>
>> Note that this open-sourcing must be under the GPL, so that nobody can use your code to build a proprietary product that overtakes you.
>>
>> One possible difficulty: how do you persuade someone (who could profit from selling it) to manufacture GPL hardware?
>
> I've daydreamed about this for a long time. The question is: is persuading someone to manufacture GPL hardware really any harder than becoming rich yourself?

Another problem is that once you actually become rich, you will think much more carefully about how to use your money sensibly.
For example: supporting people with a particular disease, funding medical research in some field, tackling environmental pollution, and so on.
However rich you are, there isn't enough money to solve all of the above; at that point, does open-source hardware still rank so high?
Actually, I think that as long as the capital keeps circulating, the company provides jobs, and the environment isn't damaged, that is already a rare contribution to society.
As for money put into public good, it should be spent even more carefully than a commercial investment, to avoid burning money on meaningless things, or even doing harm with good intentions.

Qian Hong

Apr 2, 2012, 9:36:59 AM
to sh...@googlegroups.com
2012/4/2 Ma Xiaojun <damag...@gmail.com>:

> Actually, getting capable open-source hardware takes just a few steps (daydreaming here):
> 1. Find a rich person, or become one yourself.
> 2. Find some hardware companies with good technology but poor sales, and buy them.
> 3. Open all the hardware design materials under the GPL, publish the source code of the original Windows drivers, and hand maintenance over to the "community".
> 4. The original employees keep maintaining the hardware, build solutions around it, and do kernel development...
>
> Note that this open-sourcing must be under the GPL, so that nobody can use your code to build a proprietary product that overtakes you.

I have imagined a kind of DGPL, that is, a Delay-able GPL.
"Delay-able" is a word I made up; DGPL means "an open-source license under which publication of the source code may be delayed".
The aim of such a license is to further reconcile open source with business and fill more of the gaps between them.

The core of the DGPL is that a vendor doing derivative development on DGPL code may publish the modified source only some fixed period after releasing the new product.
How to set this delay is a hard problem; it could be a set number of months after release, or a set number of version-number bumps.

A startup that builds on someone's existing DGPL code can cut costs while keeping its work from being trivially copied in the short term,
and the DGPL delay can be used to develop the next generation of the product.

For companies willing to actively contribute to open source, like the open-source hardware company Xiaojun and I have been daydreaming about, choosing the DGPL also lets them use the delay to keep their products ahead.

Still daydreaming; I hope that in ten years this DGPL is no longer a fantasy. If anyone thinks of a better name for the license, suggestions are welcome.

YunQiang Su

Apr 2, 2012, 9:41:00 AM
to sh...@googlegroups.com
Designing the CPU should not be the hardest part; the hardest part is maintaining the ecosystem,
and then there's the fab.

2012/4/2 Qian Hong <frac...@gmail.com>:

--
YunQiang Su

杨毅涛

Apr 2, 2012, 10:09:42 AM
to sh...@googlegroups.com
If we're doing this, why set the license bar so high? I like the BSD license.
Information should be free.
The precondition is that I find a way to build the thing in the first place.

杨毅涛

Apr 2, 2012, 10:14:23 AM
to sh...@googlegroups.com
Honestly, the CPU is only one part; there is much more besides the CPU: firmware, supporting circuitry, power management, energy optimization, the operating system, application software.
It is a whole system made of many pieces. We'll work on it slowly; I just hope we can build such a system in our lifetime, then we won't have lived in vain. Money is only an external thing; when you are near the end, leaving yourself some memories matters more than anything.

Ma Xiaojun

Apr 2, 2012, 10:14:27 AM
to sh...@googlegroups.com
> If we're doing this, why set the license bar so high? I like the BSD license.

What good does BSD do for users, apart from making it easy for the code to be closed up?

杨毅涛

Apr 2, 2012, 10:16:25 AM
to sh...@googlegroups.com
For users it comes down to one sentence: use it however you like. Nothing is freer than that.

2012/4/2 Ma Xiaojun <damag...@gmail.com>

Ben Luo

Apr 2, 2012, 10:18:05 AM
to sh...@googlegroups.com
That is truly the hacker spirit. It reminds me of the Debian developer who developed muscular dystrophy and, at the end of his life, still filed his final bug with his foot. That's the kind of spirit you're showing.

2012/4/2 杨毅涛 <yangy...@gmail.com>

Ma Xiaojun

Apr 2, 2012, 10:19:19 AM
to sh...@googlegroups.com
> For users it comes down to one sentence: use it however you like. Nothing is freer than that.

Users don't really care whether modified code can be taken proprietary, but guaranteeing that some code can never be taken proprietary matters a great deal for the continued development of free software.

杨毅涛

Apr 2, 2012, 10:24:09 AM
to sh...@googlegroups.com
You flatter me; my skills aren't at that level yet.
As for sustained development, I think it just means that the thing keeps developing after its founders are gone. Take UNIX: we no longer need to care whether it is "pure". It has lived this long, and even though there are many later variants, those changes have all been for the better. If you can't let go, you can't progress.

Qian Hong

Apr 2, 2012, 10:25:22 AM
to sh...@googlegroups.com
2012/4/2 Ma Xiaojun <damag...@gmail.com>:

>> If we're doing this, why set the license bar so high? I like the BSD license.
>
> What good does BSD do for users, apart from making it easy for the code to be closed up?

I think which license to choose is also the author's freedom; there is nothing to argue about there. What does deserve careful thought is which license to adopt yourself.
People understand freedom differently. Those who consider BSD free want others to be able to use the code for derived work without being bound to publish their code.
Those who consider the GPL free want the users of the derived product to retain, in turn, the freedom to derive from it.

杨毅涛

Apr 2, 2012, 10:28:08 AM
to sh...@googlegroups.com
In today's world, just doing derivative development well is already quite a feat of engineering.

Ma Xiaojun

Apr 2, 2012, 10:28:45 AM
to sh...@googlegroups.com
> People understand freedom differently. Those who consider BSD free want others to be able to use the code for derived work without being bound to publish their code.

Derivative development is not necessarily just small changes and customization. If Linux were BSD-licensed, Microsoft could, when in a good mood, build a dual-kernel NT plus Linux system, at the cost of adding a single acknowledgement...

杨毅涛

Apr 2, 2012, 10:35:05 AM
to sh...@googlegroups.com
Heh, that's simple: they do theirs, you do yours. Users will use it however they want; what use is your license? A license is a matter between two parties.
For open-source stuff, adding a license does at least keep ill-intentioned people from registering some rubbish patent; beyond that it has no value.
To put it bluntly, plenty of people clone things openly these days, and what can you do to them?
I think medieval pirate codes actually had fairly complete terms; maybe we could use them as a reference and draft our own open-source pirate license.

Qian Hong

Apr 2, 2012, 10:55:26 AM
to sh...@googlegroups.com
2012/4/2 Ma Xiaojun <damag...@gmail.com>:

>> People understand freedom differently. Those who consider BSD free want others to be able to use the code for derived work without being bound to publish their code.
>
> Derivative development is not necessarily just small changes and customization. If Linux were BSD-licensed, Microsoft could, when in a good mood, build a dual-kernel NT plus Linux system, at the cost of adding a single acknowledgement...

If the GPL open-source model has vitality, then even if Linux were BSD, some other university student would have produced a GPL kernel and attracted hackers from all over the world,
and some commercially minded pioneer would have founded a "Blue Hat" to support that GPL kernel's development. Conversely, if the rise of GNU/Linux was mere accident, then a
preference for the GPL isn't persuasive either.

But this question isn't worth worrying about; history has already happened. It's more meaningful to be pragmatic and work on the things we can grasp and influence.

杨毅涛

Apr 2, 2012, 10:57:50 AM
to sh...@googlegroups.com
Right, I like the pragmatic approach: build the thing first, and never mind the license.

黎东俊

Apr 2, 2012, 10:18:17 AM
to sh...@googlegroups.com
Did you post this to the wrong list?

Ma Xiaojun

Apr 3, 2012, 12:56:08 AM
to sh...@googlegroups.com
We know that the former OpenOffice.org, now LibreOffice, is LGPL, so in China there are quite a few closed-source derivatives of it.

There is RedOffice from 红旗2000
http://www.openoffice.org/marketing/ooocon2006/presentations/tuesday_g6bis.pdf
There is 中标普华 Office from 中标软件 (CS2C)
http://www.openoffice.org/marketing/ooocon2004/presentations/friday/CS2C.pdf
And there was the short-lived WPS Office 飓风
http://www.kingsoft.com/kingstorm/stormdown_3-24.html

These companies surely fixed quite a few OOo problems and, when in a good mood, surely sent the fixes back to the OOo community. But for ordinary users, you still had to wait for the OOo community to fix its bugs itself.

Whether it would be better or worse if OOo were GPL is a matter of opinion.

Xunzhen Quan

Apr 3, 2012, 1:06:08 AM
to sh...@googlegroups.com
With the LGPL you can't just do closed-source derivative development, can you? Most importantly, you can't modify that code itself; you can only link it in as a dynamic library.

2012/4/3 Ma Xiaojun <damag...@gmail.com>:

Ma Xiaojun

Apr 3, 2012, 1:12:58 AM
to sh...@googlegroups.com
Maybe it was somewhat different back then?
In any case, those companies really did do it, and they even presented at the OpenOffice.org Conference, didn't they?