
Chipkill protection with less than 36 chips?


reasonable...@googlemail.com

Sep 12, 2008, 4:07:31 AM
David Wang (see quote below) described a way to get chipkill
protection with x8 memory chips using a 144 pin data bus. Wang's
emphasis was on minimizing the number of data pins required, but I
want to minimize the number of chips required, regardless of the width
of the data bus. The standard way to get chipkill protection is with
36 x4 chips. Is Wang describing an 18 x8 chip configuration, or 36 x8?
Does his system actually provide SSC/DSD, or just SSC/DED?
The standard 36 x4 configuration has 1/9 memory capacity overhead (it
uses 8/9 of the memory for data, and 1/9 for ECC). Is it possible in
any way to get chipkill protection using less than 36 chips without
requiring more than 1/9 overhead?
Another obvious way to get full SSC/DSD protection is with a 3 x64 (or
3 x128, or any other data width) configuration functioning simply as a
3-way mirror, which would require just 3 chips, assuming chips that
wide were available, but isn't practical since it has 2/3 overhead. Is
there any design point in between which requires significantly less
than 36 chips but doesn't require significantly more than 1/9
overhead?

David Wang wrote in comp.arch 2 years ago:
> I think you can do chipkill protection for x8 devices over a
> 144 pin data bus interface. You just have to use 16 bits per symbol,
> and use 2 separate ECC words. What you end up implementing is a
> (288,256) code over GF(2^16). You use two ECC words, and you route
> half of each x8 device to a different ECC word. Then each DRAM device
> will contribute 16 bits of data over 4 beats, and you buffer up the
> 288 bit wide word to do error detection/correction. If a single x8
> device fails, then both ECC words will point to the failure of the
> same device, and each set of ECC hardware will provide the correct
> 16 bits of data, and allow you to reconstruct the 32 bits of data
> that went into the bit bucket when the x8 device failed. The nice
> thing about this is that when you've detected a failure, you can
> initiate a data migration process to shift the data out of the
> channel with the failed device, and if a soft error comes in while
> you're migrating data with the failed device, then the two ECC
> words will disagree as to the location of the failed device.
> Since you know that's not possible, you know that a soft error has
> occurred while you're moving data. So that's chipkill protection for
> x8 devices with 1 bit error detection on top of that.
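Read as an 18-chip x8 arrangement, Wang's routing can be sketched without a full GF(2^16) decoder. The Python sketch below (all names hypothetical) models 18 x8 devices delivering 32 bits each over 4 beats, split into two 16-bit symbols, one per ECC word; a simple comparison stands in for the real (288,256) Reed-Solomon decode, but it shows why a dead device makes both ECC words implicate the same device index:

```python
# Illustrative sketch of Wang's two-ECC-word routing (hypothetical names;
# a stand-in comparison replaces the real (288,256) RS decoder).

def route(devices):
    """Split each device's 32 bits (x8 over 4 beats) into two 16-bit
    symbols, one for each of the two ECC words."""
    word_a = [d & 0xFFFF for d in devices]          # low half of each device
    word_b = [(d >> 16) & 0xFFFF for d in devices]  # high half of each device
    return word_a, word_b

def failed_symbols(word, reference):
    """Stand-in for an RS decoder: report which symbol positions differ."""
    return [i for i, (w, r) in enumerate(zip(word, reference)) if w != r]

good = [0x12345678 + i for i in range(18)]  # 18 x8 devices, 32 bits each
ref_a, ref_b = route(good)

bad = list(good)
bad[7] = 0xDEADBEEF                         # device 7 dies entirely
got_a, got_b = route(bad)

# Both ECC words see a single-symbol error, and they agree on the device.
assert failed_symbols(got_a, ref_a) == failed_symbols(got_b, ref_b) == [7]
```

If only one of the two words reported an error, that would indicate a soft error rather than a device failure, which is the extra detection Wang describes.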

Quadibloc

Sep 13, 2008, 1:44:53 PM
On Sep 12, 2:07 am, reasonablereliabil...@googlemail.com wrote:
> I
> want to minimize the number of chips required, regardless of the width
> of the data bus. The standard way to get chipkill protection is with
> 36 x4 chips.

You can get SEC/DED with any Hamming code plus parity, and indeed
there are smaller ones that can be used. The drawback is that the
proportion of storage used for the ECC is no longer only 1/9th but is
larger.

So, for example, you could obtain SEC/DED with only eight chips, but
then *half* the chips would be devoted to ECC.
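The trade-off follows from the Hamming bound: SEC on m data bits needs the smallest r with 2^r >= m + r + 1 check bits, plus one overall parity bit for DED. A small sketch (hypothetical helper name) reproduces both of Savard's examples:

```python
def secded_check_bits(data_bits):
    """Check bits for a Hamming SEC code on `data_bits`, plus one
    overall parity bit for double-error detection (SEC/DED)."""
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1  # +1 parity bit for the DED part

# 64 data bits need 7 + 1 = 8 check bits: the familiar 72-bit word, 1/9 overhead.
assert secded_check_bits(64) == 8
# 4 data bits need 3 + 1 = 4 check bits: half of an 8-chip word is ECC.
assert secded_check_bits(4) == 4
```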

John Savard

reasonable...@googlemail.com

Sep 14, 2008, 4:25:38 PM

But I need not just SEC/DED, but SSC/DSD, and with only one symbol per
chip. If an entire chip fails (or there's a multibit error affecting
all the bits in a symbol), SSC/DSD is guaranteed to correct it, but
SEC/DED isn't even guaranteed to detect it.

Quadibloc

Sep 14, 2008, 7:40:01 PM

I think the misunderstanding here is this: I'm thinking in terms of
applying eight separate Hamming codes in parallel, so that each code
involves one bit per chip.

In order to get 1/9 overhead, though, I just remembered that 64 bits
need 8 bits overhead, so that means you need 72 chips, not 36 chips,
for correction at that efficiency.

John Savard

Jan Vorbrüggen

Sep 15, 2008, 7:21:05 AM
> In order to get 1/9 overhead, though, I just remembered that 64 bits
> need 8 bits overhead, so that means you need 72 chips, not 36 chips,
> for correction at that efficiency.

I'm sure that somewhere on the web, you will be able to find the
details...I do remember that you can implement the 64+8 bit code on
4-bit chips, i.e., with 18 chips, and have stuck-at-one and stuck-at-zero
correction for any chip into the bargain. Hmmm...you have 255 error
syndromes available; you need 72 for the single-bit correction, and I
believe another 72 for the double-bit detection, and 36 for the
chip-kill correction...that's only a total of 180 syndromes. IIRC, the
others are syndromes of three-bit error, but that's not enough to detect
them reliably.
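Jan's syndrome budget can be tallied directly (a sketch, taking his recalled figures at face value):

```python
# Syndrome budget for a (72,64) code on 18 x4 chips, as Jan counts it:
# 8 check bits give 2**8 - 1 = 255 distinct nonzero syndromes.
check_bits = 8
available = 2 ** check_bits - 1  # 255 nonzero syndromes

single_bit_correct = 72          # one syndrome per correctable bit position
double_bit_detect = 72           # per Jan's recollection
chipkill_correct = 36            # per Jan's recollection

used = single_bit_correct + double_bit_detect + chipkill_correct
assert used == 180 and used <= available  # 75 syndromes left over,
                                          # falling to three-bit errors
```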

Jan

reasonable...@googlemail.com

Sep 18, 2008, 10:40:42 PM
On Sep 15, 5:21 am, Jan Vorbrüggen <Jan.Vorbrueg...@not-thomson.net>
wrote:

> I'm sure that somewhere on the web, you will be able to find the
> details...I do remember that you can implement the 64+8 bit code on
> 4-bit chips, i.e, with 18 chips, and have stuck-at-one and stuck-at-zero
> correction for any chip into the bargain. Hmmm...you have 255 error
> syndromes available; you need 72 for the single-bit correction, and I
> believe another  72 for the double-bit detection, and 36 for the
> chip-kill correction...that's only a total of 180 syndromes.
I've searched, but I can't find explanations which I can understand.

I'm not looking for just SEC/DED plus protection against some
particular failure modes in other bits or other symbols. I need a
design which can correct a single chip failure, and detect the failure
of two chips, no matter what combination of erroneous outputs the
failed chips start producing on all of their bits. This is what I mean
by chipkill protection.

The Opteron manual at http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/32559.pdf
on pages 149-150 describes the integrated memory controller's chipkill
mode using a 144 bit bus and 4 bit symbols. It points out that memory
chips of any width can be used in chipkill mode, but only 4 bit chips
(or narrower) actually cause the chipkill mode to provide chipkill
protection, since chips wider than 4 bits require multiple symbols to
be mapped to each chip, with the result that a single chip's failure
can cause the failure of more than one symbol and thus cause an
uncorrectable error.

4 bit chips and a 144 bit bus means 36 chips. The Opteron's code
provides SSC/DSD with 4 bit symbols, so connecting 36 chips to an
Opteron gives me chipkill protection. Cost effective, high density 4
bit chips are on the market, so this is a practical solution.

The Opteron also has a different ECC mode for use in 72 bit systems.
It provides just SEC/DED (which means SSC/DSD with 1 bit symbols).
This means that getting chipkill protection in this mode would require
1 bit chips, but the contemporary market doesn't provide them in high
densities. Besides that, it would require 72 chips.

Because the Opteron is designed this way, I think that a 72 bit
SSC/DSD code with 1/9 overhead and symbols wider than 1 bit doesn't exist,
and a 144 bit SSC/DSD code with 1/9 overhead and symbols wider than 4
bits doesn't exist. If such codes did exist, the Opteron would use
them. This implies that for chipkill protection with 1/9 overhead, a
72 bit bus requires 72 chips, and a 144 bit bus requires 36 chips.

If I increase the bus width to 288 bits, what's the widest possible
symbol that will give SSC/DSD with 1/9 overhead? If it's 8 bits or
less, then I still need at least 36 chips, so I've gained nothing.

If I want to increase the symbol width to 8 bits, what's the minimum
required bus width to get SSC/DSD with 1/9 overhead? If it's 288 bits
or more, then I still need at least 36 chips, so I've gained nothing.

If I want SSC/DSD with 4 bit symbols and a 72 bit bus (so I only have
to use 18 chips), what's the overhead? Obviously it will be more than
1/9, but I don't know what exactly. What about for 8 bit symbols and a
144 bit bus?
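The chip-count and overhead arithmetic above reduces to a couple of one-liners (a sketch with hypothetical helper names):

```python
def chips(bus_bits, chip_bits):
    """Number of memory chips on a bus, one chip per chip-width lane."""
    assert bus_bits % chip_bits == 0
    return bus_bits // chip_bits

def ecc_overhead(data_bits, total_bits):
    """Fraction of total capacity spent on check bits."""
    return (total_bits - data_bits) / total_bits

assert chips(144, 4) == 36             # Opteron chipkill mode: 36 x4 chips
assert chips(72, 1) == 72              # SEC/DED with 1-bit symbols: 72 chips
assert chips(288, 8) == 36             # 288-bit bus with x8 chips: still 36
assert ecc_overhead(128, 144) == 1 / 9 # 16 check bits out of 144
```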

already...@yahoo.com

Sep 19, 2008, 10:39:31 AM
I think you are asking the wrong questions.
ECC DIMMs based on 8-bit memory chips are not popular. The choice is
between 4-bit and 9-bit.

Jan Vorbrüggen

Oct 1, 2008, 3:42:09 PM
>> I'm sure that somewhere on the web, you will be able to find the
>> details...I do remember that you can implement the 64+8 bit code on
>> 4-bit chips, i.e, with 18 chips, and have stuck-at-one and stuck-at-zero
>> correction for any chip into the bargain. Hmmm...you have 255 error
>> syndromes available; you need 72 for the single-bit correction, and I
>> believe another 72 for the double-bit detection, and 36 for the
>> chip-kill correction...that's only a total of 180 syndromes.
> I've searched, but I can't find explanations which I can understand.

Hmmm, I would have thought the Wikipedia entry for "Hamming Code" would
be a good start. It doesn't explain the chipkill stuff, though. The
"chipkill" entry points to an IBM paper on the subject, which itself is
too much marketing for my taste, but points to more information on the
IBM website.

HTH, Jan

Bernd Paysan

Oct 3, 2008, 9:44:47 AM
reasonable...@googlemail.com wrote:
> If I increase the bus width to 288 bits, what's the widest possible
> symbol that will give SSC/DSD with 1/9 overhead? If it's 8 bits or
> less, then I still need at least 36 chips, so I've gained nothing.

With 288 bits, and 1/9 overhead, you have 32 bits for the checksums. That
should allow at least 16 bit stuck-at-01 syndromes, so you should be able
to have ECC+chipkill with 18 chips. You need 10 bits for the ECC, 16 bits
for recovering chipkill, and you can use the 6 spare bits to make sure you
detect the chipkill correctly (these 6 spare bits should contain checksums
of the 10 ECC bits and the 16 chipkill bits in a way that a stuck-at-01
fault indicates that it is the ECC chip which got killed).

You can't do the same with 144 bits. There, you need 9 bits for the ECC, and
8 bits for the chipkill, so you are short by one bit. Use 4 bit chips, and
you have 9 bits for ECC, 4 bits for chipkill, and 3 bits for protection
against chipkill of the four checksum chips. This works. With 72 bits, you
only have 8 bits for ECC, no chipkill (unless, of course, the chips are 1
bit wide, since then chipkill becomes single-error-correction).

I'm not really sure if the 144 bit ECC is really conclusive, or just
conservative. One of the 9 ECC bits can be deduced from the 8 chipkill
bits, and if you carefully mix in the chipkill checksum into the other
bits, you should be able to detect if it's the chipkill checksum which got
killed or something else.
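Bernd's bit budget can be tallied with the usual Hamming-bound arithmetic (a sketch; the split into ECC, chipkill, and spare bits follows his figures):

```python
def secded_bits(total_bits):
    """SEC/DED check bits over a whole word: Hamming SEC (syndrome must
    name any one of `total_bits` positions, or 'no error') plus parity."""
    r = 1
    while 2 ** r < total_bits + 1:
        r += 1
    return r + 1

# 288-bit bus, 1/9 overhead -> 32 check bits: 10 ECC + 16 chipkill + 6 spare.
assert 288 // 9 == 32
assert secded_bits(288) == 10
assert secded_bits(288) + 16 <= 32

# 144-bit bus, 1/9 overhead -> 16 check bits: 9 ECC + 8 chipkill = 17,
# one bit short, as Bernd observes.
assert 144 // 9 == 16
assert secded_bits(144) + 8 == 17
```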

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Anton Ertl

Oct 3, 2008, 10:01:28 AM
Bernd Paysan <bernd....@gmx.de> writes:
>You can't do the same with 144 bits. There, you need 9 bits for the ECC, and
>8 bits for the chipkill, so you are short by one bit.

For SEC (without DED), 8 bits are sufficient. As long as the
additional chip is not killed, you should be able to use it for
additional error detection (and possibly correction). So 144 bits
built with 18 8-bit chips should be viable for ECC with chipkill. And
I believe that some of the memory controllers out there can do this.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
an...@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html

Del Cecchi

Oct 3, 2008, 6:27:02 PM

"Anton Ertl" <an...@mips.complang.tuwien.ac.at> wrote in message
news:2008Oct...@mips.complang.tuwien.ac.at...

If you search for "memory package codes" you will find a couple of
patents, and a couple of papers

Memory Package Error Detection and Correction
Varanasi, M.R.; Rao, T.R.N.; Pham, Son
Department of Computer Science, University of South Florida;


This paper appears in: Computers, IEEE Transactions on
Publication Date: Sept. 1983
Volume: C-32, Issue: 9
On page(s): 872-874
ISSN: 0018-9340
Digital Object Identifier: 10.1109/TC.1983.1676338

Title:
Extended error correction for package error correction codes
Document Type and Number:
United States Patent 4661955

Abstract:
An extended error code particularly applicable to a code that can
correct any number of errors in one sub-field but can only detect the
existence of any number of errors in two sub-fields. If the initial
pass of the data through the error correction code indicates an
uncorrected error, the data is complemented and restored in the memory
and then reread. The retrieved data is recomplemented and again passed
through the error correction code. If an uncorrected error persists,
then a bit-by-bit comparison is performed between the originally read
data and the retrieved complemented data to isolate the hard fails in
the memory. The bits in the sub-field associated with the hard fail
are then sequentially changed and then the changed data word is passed
through the error correction code. A wrong combination is detected by
the error correction code. The sequential changing continues until the
bits in the sub-field associated with the hard fail match the
originally stored data, in which case the error correction code can
correct the remaining errors in the remaining sub-fields.
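The complement-and-reread procedure in that abstract can be simulated. The sketch below (hypothetical toy model, not the patented circuit) plants one stuck-at-1 bit in a memory word and recovers the hard-fail position by XORing the original read with the complemented-and-restored read:

```python
# Simulation of the complement-and-reread trick from US 4,661,955: bits
# that read back unchanged after storing the complement are hard fails.

def make_memory(stuck_mask, stuck_value):
    """A toy 8-bit memory word with some bit positions stuck."""
    state = {"bits": 0}
    def write(v):
        state["bits"] = (v & ~stuck_mask) | (stuck_value & stuck_mask)
    def read():
        return state["bits"]
    return write, read

# Bit 4 is stuck at 1.
write, read = make_memory(stuck_mask=0b0001_0000, stuck_value=0b0001_0000)

write(0b1010_1010)
first = read()              # bit 4 reads as 1 regardless of what was stored
write(first ^ 0xFF)         # complement and restore in the memory
second = read() ^ 0xFF      # reread and recomplement
hard_fails = first ^ second # positions that did NOT flip are stuck cells
assert hard_fails == 0b0001_0000
```

Soft errors flip along with the complement and cancel out of the XOR, which is why the comparison isolates only the hard fails.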

US Patent 6675341 - Extended error correction for SEC-DED codes with
package error detection ability

Abstract
An apparatus and method is provided for correcting data words resulting
from a package fail within a memory array in which coded data is divided
into a plurality of multi-bit packages of b bits each. The coded data
comprises n-bit words with r error correcting code bits and n-r data bits.
The invention is capable of correcting one package which has suffered at
least one hard failure. The invention exploits single error correcting
(SEC) and double error detecting (DED) codes, requiring no additional
check bits, which give a syndrome when the data word has suffered an
error coming from at least one error in a package.


I am sure that a little more searching and following the references in
the patents would provide much more information. I bet one of the X
series technical papers books would also.


davewang202

Oct 8, 2008, 3:43:29 AM
On Sep 12, 1:07 am, reasonablereliabil...@googlemail.com wrote:
> David Wang (see quote below) described a way to get chipkill
> protection with x8 memory chips using a 144 pin data bus. Wang's
> emphasis was on minimizing the number of data pins required, but I
> want to minimize the number of chips required, regardless of the width
> of the data bus. The standard way to get chipkill protection is with
> 36 x4 chips. Is Wang describing an 18 x8 chip configuration, or 36 x8?
> Does his system actually provide SSC/DSD, or just SSC/DED?
> The standard 36 x4 configuration has 1/9 memory capacity overhead (it
> uses 8/9 of the memory for data, and 1/9 for ECC). Is it possible in
> any way to get chipkill protection using less than 36 chips without
> requiring more than 1/9 overhead?
> Another obvious way to get full SSC/DSD protection is with a 3 x64 (or
> 3 x128, or any other data width) configuration functioning simply as a
> 3-way mirror, which would require just 3 chips, assuming chips that
> wide were available, but isn't practical since it has 2/3 overhead. Is
> there any design point in between which requires significantly less
> than 36 chips but doesn't require significantly more than 1/9
> overhead?

The minimum number of chips to do chipkill with a 1/9 overhead is 18.

To do it with fewer chips, you can do it with a 2/10 overhead, using
10 chips.

One way to do that is with a 2-redundant, 8-adjacent code that Bossen
described in a paper published in the IBM Journal of Research and
Development in July 1970!

http://www.research.ibm.com/journal/rd/144/ibmrd1404J.pdf

See "example 3" on page 404.

Here, the symbol size is 8 bits, and you have 8 data symbols and 2
check symbols per word.
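The chip count and overhead of that layout work out as follows (a sketch restating the figures above):

```python
# Bossen's (80,64) 2-redundant, 8-adjacent layout: 10 x8 chips per word,
# 8 carrying data symbols and 2 carrying check symbols.
symbol_bits = 8
data_symbols, check_symbols = 8, 2
chips = data_symbols + check_symbols
word_bits = chips * symbol_bits

assert chips == 10 and word_bits == 80
assert check_symbols / chips == 2 / 10  # the 2/10 overhead quoted above
assert 2 / 10 > 1 / 9                   # costlier per bit than 36 x4 chips
```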

I'm definitely not a mathematics guy, so I can't speak authoritatively
on the topic, but I have looked at the various error detection and
correction algorithms over the last couple of years.

My understanding is that the (80, 64) code presented in the paper has
the weakness that it can't really do the "single bit soft error
detection" very well after the chipkill event has occurred.

That is, a generally reliable memory system requires that the chipkill
functionality be tolerant of the failure of a single memory device AND
detect a single bit soft error after the memory device has failed.
So you can have a memory system that can have a couple of soft errors
in the large memory array. Then, when a single memory device fails,
you detect that the device failure has occurred and start shifting data
out of the failed channel ASAP. The problem is that you'll end up
with a couple of words in there with a failed device and a soft error,
because the soft errors may not have already been scrubbed out during
normal operations. You'll want to give the software a chance to
decide at a later time what the bad bits were, so you have to flag the
word with the uncorrectable error, and not just transparently correct
it and store the suspect data with the rest of the good data.

My understanding is that to properly detect all single bit errors
after a chip has failed, you need at least 2N+1 check bits for a given
symbol size N. So this is not possible in all these 72 bit or 80 bit
wide memory systems. However, you can come pretty close. I think
Intel's Seaburg FBD chipset claims something like "corrects for single
symbol failure, and properly detects 99.86% of single bit errors that
occur on top of the symbol failure". The last time I looked, I think
that the (80,64) algorithm described in Bossen's paper will only
detect something like 96.xx% of single bit errors that occur on top of
the symbol failure....
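David's 2N+1 rule of thumb can be checked against the word widths mentioned (a sketch, taking the rule at face value):

```python
# Rule of thumb quoted above: correcting a failed N-bit symbol AND still
# detecting a further single-bit error needs at least 2*N + 1 check bits.
def check_bits_needed(symbol_bits):
    return 2 * symbol_bits + 1

assert check_bits_needed(8) == 17   # x8 symbols
assert 72 - 64 == 8 and 8 < 17      # a 72-bit word has only 8 check bits
assert 80 - 64 == 16 and 16 < 17    # the (80,64) code has only 16
```

Both practical widths fall short of 17 check bits, which is why the coverage figures quoted for Seaburg and for Bossen's code are below 100%.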

As I wrote above, I'm not a mathematician, so take this with a grain
of salt.

HTH
David



nm...@cam.ac.uk

Oct 8, 2008, 4:26:02 AM
davewang202 <davew...@gmail.com> wrote:
>
> The minimum number of chips to do chipkill with a 1/9 overhead is 18.
>
> To do it with fewer number of chips, you can do it with a 2/10
> overhead, and do it with 10 chips.
>
> One way to do that is with a 2-redudant, 8-adjacent code that Bossen
> described in the following paper he published on IBM's journal for
> Research and Development - July 1970!
>
> http://www.research.ibm.com/journal/rd/144/ibmrd1404J.pdf
>
> I'm definitely not a mathematics guy, so I can't speak authoritatively
> on the topic, but I have looked at the various error detection and
> correction algorithms over the last couple of years.

Well, I was and read up about this while waiting for my Fortran II
jobs to run - which may date things :-)

Those numbers are roughly right, but the seminal work on ECC dates
from somewhere in the 1920s or 1930s. The more modern work has
refined it and provided algorithms for finding codes (the original
work gave proof of existence, but no algorithms to find codes, so
you have to search by hand). The techniques are also related to
sphere packing and similar branches of pure mathematics.

That is one reason I got annoyed with the RAID people - they tried
to claim that RAID was new (and different from ECC) - complete and
utter nonsense. As an aside, that is one of the disgraces of the
patent system - applying precisely the same algorithm to a very
similar storage technology becomes a 'new invention'.

As you say in the sections I snipped, the exact algorithms and
numbers depend on the precise properties you want. The only really
simple case is that of single-bit error detection, where parity is
both necessary and sufficient.

The IBM Journal of Research and Development is an obvious place to
look for work in this area, though not the only one.


Regards,
Nick Maclaren.

Terje Mathisen

Oct 8, 2008, 7:49:42 AM
nm...@cam.ac.uk wrote:
> That is one reason I got annoyed with the RAID people - they tried
> to claim that RAID was new (and different from ECC) - complete and

Well, NetApp have just lost (for various meanings of lost) all 6 of
their "core" WAFL patents in their lawsuit against Sun's ZFS. Yeah!

> utter nonsense. As an aside, that is one of the disgraces of the
> patent system - applying precisely the same algorithm to a very
> similar storage technology becomes a 'new invention'.

Around 10-15 years ago I was asked to investigate a set of 10 patents in
a cell phone lawsuit.

AFAIR, one or two _might_ have been good.

Half the remainder were _really_ obvious, on the order of "the only
reasonable way a novice in the field would choose".

The other half were worse: They were textbook software algorithms, i.e.
from Knuth or earlier, but implemented in firmware/hardware on the phone
chips.

When I sent back my report, they never asked me anything else, so I
suspect I was asked on behalf of the patent holder, not the opposition. :-)

Terje

Ken Hagan

Oct 13, 2008, 4:56:40 AM
On Wed, 08 Oct 2008 09:26:02 +0100, <nm...@cam.ac.uk> wrote:

> Those numbers are roughly right, but the seminal work on ECC dates
> from somewhere in the 1920s or 1930s. [...] The techniques are
> also related to sphere packing and similar branches of pure
> mathematics. [...] As an aside, that is one of the disgraces of the
> patent system - applying precisely the same algorithm to a very
> similar storage technology becomes a 'new invention'.

Well, the phrase is "skilled in the art", not "skilled in some other
art that those skilled in the art don't generally perceive as being
related". :)

> The IBM Journal of Research and Development is an obvious place to
> look for work in this area, though not the only one.

But yes, in this case we evidently *haven't* crossed into a new field.

nm...@cam.ac.uk

Oct 13, 2008, 5:13:16 AM
In article <op.uiygg...@khagan.ttx>,

Ken Hagan <K.H...@thermoteknix.com> wrote:
>
>> Those numbers are roughly right, but the seminal work on ECC dates
>> from somewhere in the 1920s or 1930s. [...] The techniques are
>> also related to sphere packing and similar branches of pure
>> mathematics. [...] As an aside, that is one of the disgraces of the
>> patent system - applying precisely the same algorithm to a very
>> similar storage technology becomes a 'new invention'.
>
>Well the phrase is "skilled in the art", not "skilled in some other
>art that those skilled in the art don't generally perceive as being
>related". :)

Anyone who can justifiably claim to be skilled in an art has at
least enough knowledge of allied arts to know what is standard
practice in them. And note that I said ALLIED arts, not completely
unrelated ones.

>> The IBM Journal of Research and Development is an obvious place to
>> look for work in this area, though not the only one.
>
>But yes, in this case we evidently *haven't* crossed into a new field.


Quite. As has been the case in most of the patents I have seen that
attempt to patent old technology.


Regards,
Nick Maclaren.

Del Cecchi

Oct 13, 2008, 1:29:19 PM
nm...@cam.ac.uk wrote:
snip

>
> Anyone who can justifiably claim to be skilled in an art has at
> least enough knowledge of allied arts to know what is standard
> practice in them. And note that I said ALLIED arts, not completely
> unrelated ones.
>
snip
Please note that if you are talking about US Patents, "one skilled in
the art" is a legal term with court cases and precedents and all that
and probably has little relation to what you or I would define it as.

Andrew Reilly

Oct 13, 2008, 7:47:07 PM
On Mon, 13 Oct 2008 12:29:19 -0500, Del Cecchi wrote:

> nm...@cam.ac.uk wrote:
> snip
>>
>> Anyone who can justifiably claim to be skilled in an art has at least
>> enough knowledge of allied arts to know what is standard practice in
>> them. And note that I said ALLIED arts, not completely unrelated ones.
>>
> snip
> Please note that if you are talking about US Patents, "one skilled in
> the art" is a legal term with court cases and precedents and all that
> and probably has little relation to what you or I would define it as.

I remember a post by John Mashey from long ago wherein he opined that the
"art" spoken of in patent law was "breathing"... I'm not sure how well
that observation correlates with case law: I've managed to avoid details
like that so far.

--
Andrew

Del Cecchi

Oct 14, 2008, 12:24:43 AM

"Andrew Reilly" <andrew-...@areilly.bpc-users.org> wrote in
message news:6li4vrF...@mid.individual.net...

The "one skilled in the art" of course relates to obviousness and not
to the issue of prior art which is more the topic under discussion.
The problem is that patent examiners only look at patents and are not
necessarily cognizant of the totality of the technical literature and
practice. This is not a new problem. We used to, like 30 years ago,
joke about the japanese patenting the motorola data book. I suppose
now it is patenting knuth's books. It is particularly a concern since
a lot of stuff was developed while software wasn't patentable so never
made it into the patent database.

Some, perhaps most, patent examiners are relatively fresh out of
school, probably law school, and are getting their ticket punched until
they can get a good job on the other side.

del


nm...@cam.ac.uk

Oct 14, 2008, 3:49:52 AM
In article <6lherbF...@mid.individual.net>,

Del Cecchi <delcecchinos...@gmail.com> wrote:
>>
>> Anyone who can justifiably claim to be skilled in an art has at
>> least enough knowledge of allied arts to know what is standard
>> practice in them. And note that I said ALLIED arts, not completely
>> unrelated ones.
>>
>Please note that if you are talking about US Patents, "one skilled in
>the art" is a legal term with court cases and precedents and all that
>and probably has little relation to what you or I would define it as.

Oh, indeed! But I think that you will find that certain people like
Thomas Jefferson and Benjamin Franklin would have agreed with me,
as would most UK and USA lawyers from their time up until fairly
recently. The modern interpretation that "skilled in the art"
translates as "as technically clueless as a patent lawyer" is far
more recent than most people realise.

The following letter should raise a hollow laugh - the recent history
is that the USA has been leading the development of monopolies, with
its faithful poodles in the UK trying to sneak the changes in by the
back door. The latter remark is specific to UK politics, and so I
shall not follow up on it here.

http://www.usewisdom.com/sayings/patentsj.html


Regards,
Nick Maclaren.

Bernd Paysan

Oct 14, 2008, 4:22:54 AM
Del Cecchi wrote:
> The "one skilled in the art" of course relates to obviousness and not
> to the issue of prior art which is more the topic under discussion.

Until recently, the legal definition of "obviousness" was exactly the same
as "prior art". I.e., when you were asked what the inventive step was on
your side, you could always argue that the fact that nobody else had
thought of it by now (no prior art) made it obviously non-obvious.

> The problem is that patent examiners only look at patents and are not
> necessarily cognizant of the totality of the technical literature and
> practice. This is not a new problem. We used to, like 30 years ago,
> joke about the japanese patenting the motorola data book. I suppose
> now it is patenting knuth's books. It is particularly a concern since
> a lot of stuff was developed while software wasn't patentable so never
> made it into the patent database.

There's a completely different problem: A patent examiner gets no reward
whatsoever for denying an application. It's only work for him, and the
applicant will appeal, and reappeal again, making it more work. Granting an
application gets a reward, and the work is quickly done.

The FFII (an anti-software-patent organization in Germany) proposed
something different: Forget about patent examiners. Just let everybody
apply for
patents, and allow all other parties to challenge their patents for a
(sufficiently high) fee (a costly warning). If the challenge is valid, that
fee has to be paid, the patent is null and void, and all licensees get
their money back with interest. The challenge period ends when the patent
has expired. Patent holders can let patents expire whenever they like;
under challenge they can only withdraw them.

That way, you solve a technical problem with lawyers. Lawyers like costly
warnings. Licensees will like to challenge their business partners, when
there's a much higher reward to be expected: Get your money back. Dragging
on a case will only make things worse, because the interest makes the
payback more expensive, so resolutions will be quick (though the defender
of the patent will use teeth and claws).

Del Cecchi

Oct 15, 2008, 1:06:00 PM

<nm...@cam.ac.uk> wrote in message
news:gd1iv0$kod$1...@soup.linux.pwf.cam.ac.uk...

Yes, but please don't mix what are really two separate issues. Prior
Art is independent of "skilled in the art" and is what it is. The
problem with patents and prior art, particularly in software, is that
the examiners are not knowledgeable in the existing public domain
knowledge.

The issue of "one skilled in the art" relates to Obviousness, in that
one can only patent something that is "not obvious to one skilled in
the art".

If something has previously been published or "offered for sale" then
it is not patentable. The question becomes when for example some
technique used in telecommunications is applied to memory error
correction. Sure there is prior art, but this is a new application or
combination. Is it obvious or not to "one skilled in the art" and
what is "one skilled in the art"? Methinks you are way overqualified
for the position. Sorry.

del


Bernd Paysan

Oct 16, 2008, 5:12:03 AM
Del Cecchi wrote:
> If something has previously been published or "offered for sale" then
> it is not patentable. The question becomes when for example some
> technique used in telecommunications is applied to memory error
> correction. Sure there is prior art, but this is a new application or
> combination. Is it obvious or not to "one skilled in the art" and
> what is "one skilled in the art"? Methinks you are way overqualified
> for the position. Sorry.

Yes, we are all overqualified, because we are skilled in the art.

When I was at university, a person from the European Patent Office gave a
presentation about patenting software, which was followed by a debate. In
Europe, you can't patent algorithms "as such", but you can when you combine
them with hardware. We carefully explained to this EPO guy that our education
is based on reusing algorithms in different contexts, so "one skilled in the
art" *must* know how to take an algorithm from field A to field B, and
combine it with other algorithms and appropriate hardware. If you can't,
you don't pass your computer science exam; the lectures will usually only
teach you the principle, not particular applications (e.g. if you learn
about Hamming distance, it will not be a memory ECC example or a telecom
example, it will just be the math). The guy from the EPO completely failed
to understand that point. It was an overall hostile discussion for him,
anyway. Our interpretation of "one skilled in the art" is certainly "one
who has a master's degree or similar in that field".
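(A minimal illustration of that "just the math" point, not from the original post: the Hamming distance between two bit patterns is defined identically whether the bits came from a DRAM word or a telecom symbol. The function and constants below are purely hypothetical examples.)

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ.

    Pure math: XOR marks the differing bits, then we count them.
    Nothing here depends on what application the bits came from.
    """
    return bin(a ^ b).count("1")

# The same function serves either "field":
memory_word_written, memory_word_read = 0b10110100, 0b10010110
telecom_sym_sent, telecom_sym_received = 0b1111, 0b1001

print(hamming_distance(memory_word_written, memory_word_read))   # 2
print(hamming_distance(telecom_sym_sent, telecom_sym_received))  # 2
```

Moving the concept from telecom to memory ECC is just a change of variable names, which is exactly why a graduate in the field would be expected to do it.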

The funny thing is that last year, a very high court (or was it even the
Supreme Court?) in the US declared that it is "obvious" when you just apply
something already known in a different field, or otherwise just narrow down
the application of known techniques. So after all, the situation has now
changed. What they didn't do is say "until now, patent examiners have not
taken that into account, so all patents have to be reexamined under these
new rules, and are not valid until this has been done". Which means that
all those bogus patents still need a costly court decision to be
challenged.

Piotr Wyderski

unread,
Oct 21, 2008, 4:15:00 AM10/21/08
to
Del Cecchi wrote:

> The issue of "one skilled in the art" relates to Obviousness, in that one
> can only patent something that is "not obvious to one skilled in the art".

US Patent 7290698 - Progress bar with multiple portions

Abstract

A method and system for providing information about recorded media content
having a beginning and end time. A progress bar including a first portion
is displayed on a display device. The first portion graphically represents
the duration of the recorded media content and has a first color. The
progress bar also includes a second portion having a second color. The
second portion graphically represents a section of the recorded media
content that is viewed during a viewing session. The second color is
distinct from the first color.

Yep, a really non-trivial achievement of human mind. :-D

Best regards
Piotr Wyderski

nm...@cam.ac.uk

unread,
Oct 21, 2008, 6:18:45 AM10/21/08
to
In article <gdjtoc$6pn$1...@node1.news.atman.pl>,

And SO original, too!


Regards,
Nick Maclaren.

Bernd Paysan

unread,
Oct 21, 2008, 8:27:31 AM10/21/08
to
nm...@cam.ac.uk wrote:
> And SO original, too!

Well, I've seen multi-progress-bars with several colors for ages (long
before the first digit of patent numbers reached 7), e.g. from
InstallShield-generated setup.exes. But the main point here is about
obviousness. Even if this had been new, it would have been obvious all the
same.

I am starting to think that the USPTO has an internal guideline which says
"must be either not new or obvious to one skilled in the art". When I
applied for two US patents 10 years ago, I remember the following: The
examiner had trouble with typical expressions used in my field - those
problems were overcome by the US patent attorney, who translated them into
patent-lawyer gibberish, making the patent unsearchable by one skilled in
the art (who knows these terms, but not the patent-lawyer gibberish). The
examiner also had problems with those parts of the invention which IMHO
did actually invent something. The US patent attorney broadened the claims
so that, in effect, the actual invention was just a special case of what
was patented.

In the end, I had the strong impression that my patent just described
common practice. It was in that form that it was accepted.

Del Cecchi

unread,
Oct 21, 2008, 7:49:02 PM10/21/08
to
So it is one of the large proportion of patents that aren't valid. All
you need is an example from pre 2004.