
Intel junk...Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign


Designed By India H1B Engineers

Jan 4, 2018, 8:05:00 AM
Performance hits loom, other OSes need fixes

Updated: A fundamental design flaw in Intel's processor chips has
forced a significant redesign of the Linux and Windows kernels
to defang the chip-level security bug.

Programmers are scrambling to overhaul the open-source Linux
kernel's virtual memory system. Meanwhile, Microsoft is expected
to publicly introduce the necessary changes to its Windows
operating system in an upcoming Patch Tuesday: these changes
were seeded to beta testers running fast-ring Windows Insider
builds in November and December.

Crucially, these updates to both Linux and Windows will incur a
performance hit on Intel products. The effects are still being
benchmarked; however, we're looking at a ballpark figure of a five
to 30 per cent slowdown, depending on the task and the
processor model. More recent Intel chips have features – such as
PCID – to reduce the performance hit. Your mileage may vary.


The Register (@TheRegister), 3:58 PM - Jan 2, 2018:
"PostgreSQL SELECT 1 with the KPTI workaround for Intel CPU
vulnerability https://www.postgresql.org/message-id/20180102222354....@alap3.anarazel.de
Best case: 17% slowdown
Worst case: 23%"

Similar operating systems, such as Apple's 64-bit macOS, will
also need to be updated – the flaw is in the Intel x86-64
hardware, and it appears a microcode update can't address it. It
has to be fixed in software at the OS level, or you can buy a new
processor without the design blunder.

Details of the vulnerability within Intel's silicon are under
wraps: an embargo on the specifics is due to lift early this
month, perhaps in time for Microsoft's Patch Tuesday next week.
Indeed, patches for the Linux kernel are available for all to
see but comments in the source code have been redacted to
obfuscate the issue.

However, some details of the flaw have surfaced, and so this is
what we know.

Impact
It is understood the bug is present in modern Intel processors
produced in the past decade. It allows normal user programs –
from database applications to JavaScript in web browsers – to
discern to some extent the layout or contents of protected
kernel memory areas.

The fix is to separate the kernel's memory completely from user
processes using what's called Kernel Page Table Isolation, or
KPTI. At one point, Forcefully Unmap Complete Kernel With
Interrupt Trampolines, aka FUCKWIT, was mulled by the Linux
kernel team, giving you an idea of how annoying this has been
for the developers.

Whenever a running program needs to do anything useful – such as
write to a file or open a network connection – it has to
temporarily hand control of the processor to the kernel to carry
out the job. To make the transition from user mode to kernel
mode and back to user mode as fast and efficient as possible,
the kernel is present in all processes' virtual memory address
spaces, although it is invisible to these programs. When the
kernel is needed, the program makes a system call, the processor
switches to kernel mode and enters the kernel. When it is done,
the CPU is told to switch back to user mode, and reenter the
process. While in user mode, the kernel's code and data remain
out of sight but present in the process's page tables.

Think of the kernel as God sitting on a cloud, looking down on
Earth. It's there, and no normal being can see it, yet they can
pray to it.

These KPTI patches move the kernel into a completely separate
address space, so it's not just invisible to a running process,
it's not even there at all. Really, this shouldn't be needed,
but clearly there is a flaw in Intel's silicon that allows
kernel access protections to be bypassed in some way.

The downside to this separation is that it is relatively
expensive, time-wise, to keep switching between two separate
address spaces for every system call and for every interrupt
from the hardware. These context switches do not happen
instantly, and they force the processor to dump cached data and
reload information from memory. This increases the kernel's
overhead, and slows down the computer.

Your Intel-powered machine will run slower as a result.

How can this security hole be abused?
At best, the vulnerability could be leveraged by malware and
hackers to more easily exploit other security bugs.

At worst, the hole could be abused by programs and logged-in
users to read the contents of the kernel's memory. Suffice to
say, this is not great. The kernel's memory space is hidden from
user processes and programs because it may contain all sorts of
secrets, such as passwords, login keys, files cached from disk,
and so on. Imagine a piece of JavaScript running in a browser,
or malicious software running on a shared public cloud server,
able to sniff sensitive kernel-protected data.

Specifically, in terms of the best-case scenario, it is possible
the bug could be abused to defeat KASLR: kernel address space
layout randomization. This is a defense mechanism used by
various operating systems to place components of the kernel in
randomized locations in virtual memory. This mechanism can
thwart attempts to abuse other bugs within the kernel:
typically, exploit code – particularly return-oriented
programming exploits – relies on reusing computer instructions
in known locations in memory.

If you randomize the placing of the kernel's code in memory,
exploits can't find the internal gadgets they need to fully
compromise a system. The processor flaw could be potentially
exploited to figure out where in memory the kernel has
positioned its data and code, hence the flurry of software
patching.

However, it may be that the vulnerability in Intel's chips is
worse than the above mitigation bypass. In an email to the Linux
kernel mailing list over Christmas, AMD said it is not affected.
The wording of that message, though, rather gives the game away
as to what the underlying cockup is:

AMD processors are not subject to the types of attacks that the
kernel page table isolation feature protects against. The AMD
microarchitecture does not allow memory references, including
speculative references, that access higher privileged data when
running in a lesser privileged mode when that access would
result in a page fault.

A key word here is "speculative." Modern processors, like
Intel's, perform speculative execution. In order to keep their
internal pipelines primed with instructions to obey, the CPU
cores try their best to guess what code is going to be run next,
fetch it, and execute it.

It appears, from what AMD software engineer Tom Lendacky was
suggesting above, that Intel's CPUs speculatively execute code
potentially without performing security checks. It seems it may
be possible to craft software in such a way that the processor
starts executing an instruction that would normally be blocked –
such as reading kernel memory from user mode – and completes
that instruction before the privilege level check occurs.

That would allow ring-3-level user code to read ring-0-level
kernel data. And that is not good.

The specifics of the vulnerability have yet to be confirmed, but
consider this: the changes to Linux and Windows are significant
and are being pushed out at high speed. That suggests it's more
serious than a KASLR bypass.

Also, the updates to separate kernel and user address spaces on
Linux are based on a set of fixes dubbed the KAISER patches,
which were created by eggheads at Graz University of Technology
in Austria. These boffins discovered [PDF] it was possible to
defeat KASLR by extracting memory layout information from the
kernel in a side-channel attack on the CPU's virtual memory
system. The team proposed splitting kernel and user spaces to
prevent this information leak, and their research sparked this
round of patching.

Their work was reviewed by Anders Fogh, who wrote this
interesting blog post in July. That article described his
attempts to read kernel memory from user mode by abusing
speculative execution. Although Fogh was unable to come up with
any working proof-of-concept code, he noted:

My results demonstrate that speculative execution does indeed
continue despite violations of the isolation between kernel mode
and user mode.

It appears the KAISER work is related to Fogh's research, and as
well as developing a practical means to break KASLR by abusing
virtual memory layouts, the team may have somehow proved Fogh
right – that speculative execution on Intel x86 chips can be
exploited to access kernel memory.

Shared systems
The bug will impact big-name cloud computing environments
including Amazon EC2, Microsoft Azure, and Google Compute
Engine, said a software developer blogging as Python Sweetness
in this heavily shared and tweeted article on Monday:

There is presently an embargoed security bug impacting
apparently all contemporary [Intel] CPU architectures that
implement virtual memory, requiring hardware changes to fully
resolve. Urgent development of a software mitigation is being
done in the open and recently landed in the Linux kernel, and a
similar mitigation began appearing in NT kernels in November. In
the worst case the software fix causes huge slowdowns in typical
workloads.

There are hints the attack impacts common virtualisation
environments including Amazon EC2 and Google Compute Engine...

Microsoft's Azure cloud – which runs a lot of Linux as well as
Windows – will undergo maintenance and reboots on January 10,
presumably to roll out the above fixes.

Amazon Web Services also warned customers via email to expect a
major security update to land on Friday this week, without going
into details.

There were rumors of a severe hypervisor bug – possibly in Xen –
doing the rounds at the end of 2017. It may be that this
hardware flaw is that rumored bug: that hypervisors can be
attacked via this kernel memory access cockup, and thus need to
be patched, forcing a mass restart of guest virtual machines.

A spokesperson for Intel was not available for comment. ®

Updated to add
The Intel processor flaw is real. A PhD student at the systems
and network security group at Vrije Universiteit Amsterdam has
developed a proof-of-concept program that exploits the Chipzilla
flaw to read kernel memory from user mode:

brainsmoke (@brainsmoke), 6:28 AM - Jan 3, 2018 [image]:
"Bingo! #kpti #intelbug"

The Register has also seen proof-of-concept exploit code that
leaks a tiny amount of kernel memory to user processes.

Finally, macOS has been patched to counter the chip design
blunder since version 10.13.2, according to operating system
kernel expert Alex Ionescu. And it appears 64-bit ARM Linux
kernels will also get a set of KAISER patches, completely
splitting the kernel and user spaces, to block attempts to
defeat KASLR. We'll be following up this week.

https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

--
Windows 2000 Pro RC2 on Alpha.

chrisv

Jan 4, 2018, 10:28:20 AM
Designed By India H1B Engineers wrote:

>Crucially, these updates to both Linux and Windows will incur a
>performance hit on Intel products. The effects are still being
>benchmarked, however we're looking at a ballpark figure of five
>to 30 per cent slow down, depending on the task and the
>processor model.

This is ugly. Think of the large computing centers, for example
Google's data centers. Suddenly, they will need significantly more
CPU time, and thus electricity (and thus carbon), to get the job done?

--
"Half say its easier than Windows and the other (more truthful) half
maintain that they are GLAD Linux is buggy and error prone because it
puts off the great unwashed that would otherwise pollute the Linux
gene pool." - "True Linux advocate" Hadron Quark

Mr. Man-wai Chang

Jan 4, 2018, 12:31:07 PM
On 4/1/2018 23:28, chrisv wrote:
>
> This is ugly. Think of the large computing centers, for example
> Google's data centers. Suddenly, they will need significantly more
> CPU time, and thus electricity (and thus carbon), to get the job done?

That would only make the electricity companies happy... ;)

--
@~@ Remain silent! Drink, Blink, Stretch! Live long and prosper!!
/ v \ Simplicity is Beauty!
/( _ )\ May the Force and farces be with you!
^ ^ (x86_64 Ubuntu 9.10) Linux 2.6.39.3
Don't borrow! Don't get scammed! No compensated dating! No fighting! No robbery! No suicide! Please consider CSSA:
http://www.swd.gov.hk/tc/index/site_pubsvc/page_socsecu/sub_addressesa

Roger Blake

Jan 4, 2018, 12:44:06 PM
On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
> This is ugly. Think of the large computing centers, for example
> Google's data centers. Suddenly, they will need significantly more
> CPU time, and thus electricity (and thus carbon), to get the job done?

Carbon is not a pollutant, except in the "minds" of left-wing loons,
so that is not of any importance. (I certainly refuse to lower my carbon
output. Environmentalist scum who desire to lower theirs are welcome to
stop breathing. However, I digress.)

The loss of performance could significantly increase the cost of
operations for large computing centers. Look for the cost of online
services to rise.

--
-----------------------------------------------------------------------------
Roger Blake (Posts from Google Groups killfiled due to excess spam.)

NSA sedition and treason -- http://www.DeathToNSAthugs.com
Don't talk to cops! -- http://www.DontTalkToCops.com
Badges don't grant extra rights -- http://www.CopBlock.org
-----------------------------------------------------------------------------

Doomsdrzej

Jan 4, 2018, 12:54:50 PM
On Thu, 4 Jan 2018 17:44:04 -0000 (UTC), Roger Blake
<rogb...@iname.invalid> wrote:

>On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> This is ugly. Think of the large computing centers, for example
>> Google's data centers. Suddenly, they will need significantly more
>> CPU time, and thus electricity (and thus carbon), to get the job done?
>
>Carbon is not a pollutant, except in the "minds" of left-wing loons,
>so that is not of any importance. (I certainly refuse to lower my carbon
>output. Environmentalist scum who desire to lower theirs are welcome to
>stop breathing. However, I digress.)
>
>The loss of performance could significantly increase the cost of
>operations for large computing centers. Look for the cost of online
>services to rise.

Might I say that was an awesome post, sir.

Jan-Erik Soderholm

Jan 4, 2018, 12:56:59 PM
On 2018-01-04 at 18:44, Roger Blake wrote:
> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> This is ugly. Think of the large computing centers, for example
>> Google's data centers. Suddenly, they will need significantly more
>> CPU time, and thus electricity (and thus carbon), to get the job done?
>
> Carbon is not a pollutant, except in the "minds" of left-wing loons,
> so that is not of any importance. (I certainly refuse to lower my carbon
> output. Environmentalist scum who desire to lower theirs are welcome to
> stop breathing. However, I digress.)
>

The carbon you breathe out comes from the food you eat. No problem.

But much of the carbon that we let out comes from carbon that was
bound millions of years ago (fossil fuels). *That* is a major problem.

Burning biological fuels (that grew in the last 100 years) is not
a problem either.

Your other statements are purely childish and uneducated.

chrisv

Jan 4, 2018, 1:29:24 PM
Jan-Erik Soderholm wrote:

> On 2018-01-04 at 18:44, Roger Blake wrote:
>>
>> Carbon is not a pollutant, except in the "minds" of left-wing loons,
>> so that is not of any importance. (I certainly refuse to lower my carbon
>> output. Environmentalist scum who desire to lower theirs are welcome to
>> stop breathing. However, I digress.)
>
>The carbon you breathe out comes from the food you eat. No problem.
>
>But much of the carbon that we let out comes from carbon that was
>bound millions of years ago (fossil fuels). *That* is a major problem.
>
>Burning biological fuels (that grew in the last 100 years) is not
>a problem either.
>
>Your other statements are purely childish and uneducated.

Might I say that was an awesome post, sir.

--
"If only 2% of the world believes that GNU/Linux is great, then THEY
are the liars for claiming that it's a quality piece of software." -
"Slimer", AKA "Doomsdrzej"

DaveFroble

Jan 4, 2018, 3:43:11 PM
chrisv wrote:
> Designed By India H1B Engineers wrote:
>
>> Crucially, these updates to both Linux and Windows will incur a
>> performance hit on Intel products. The effects are still being
>> benchmarked, however we're looking at a ballpark figure of five
>> to 30 per cent slow down, depending on the task and the
>> processor model.
>
> This is ugly. Think of the large computing centers, for example
> Google's data centers. Suddenly, they will need significantly more
> CPU time, and thus electricity (and thus carbon), to get the job done?
>

And once all the spanners are tossed into the works, which will slow things
down, what happens when new CPUs without the issues are available? Will
computers forever be artificially slowed down?

A whole bunch of someones has seriously dropped the ball on this. Protected
memory should be just that, protected, with no way to avoid the protection.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Buffalo

Jan 4, 2018, 3:50:09 PM
"Roger Blake" wrote in message
news:2018010...@news.eternal-september.org...
>
>On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> This is ugly. Think of the large computing centers, for example
>> Google's data centers. Suddenly, they will need significantly more
>> CPU time, and thus electricity (and thus carbon), to get the job done?
>
>Carbon is not a pollutant, except in the "minds" of left-wing loons,
>so that is not of any importance. (I certainly refuse to lower my carbon
>output. Environmentalist scum who desire to lower theirs are welcome to
>stop breathing. However, I digress.)
>
>The loss of performance could significantly increase the cost of
>operations for large computing centers. Look for the cost of online
>services to rise.
>

Rise? After the deal Trump gave them? Billions more in profit from saved
taxes.
Those Big Corp execs can really vacation now!!
--
Buffalo

Bob F

Jan 4, 2018, 5:12:09 PM
On 1/4/2018 10:29 AM, chrisv wrote:
> Jan-Erik Soderholm wrote:
>
>> On 2018-01-04 at 18:44, Roger Blake wrote:
>>>
>>> Carbon is not a pollutant, except in the "minds" of left-wing loons,
>>> so that is not of any importance. (I certainly refuse to lower my carbon
>>> output. Environmentalist scum who desire to lower theirs are welcome to
>>> stop breathing. However, I digress.)
>>
>> The carbon you breathe out comes from the food you eat. No problem.
>>
>> But much of the carbon that we let out comes from carbon that was
>> bound millions of years ago (fossil fuels). *That* is a major problem.
>>
>> Burning biological fuels (that grew in the last 100 years) is not
>> a problem either.
>>
>> Your other statements are purely childish and uneducated.
>
> Might I say that was an awesome post, sir.
>
Agreed!

Jan-Erik Soderholm

Jan 4, 2018, 5:12:59 PM
On 2018-01-04 at 23:04, Tim Streater wrote:
> In article <p2m3kt$vnk$1...@dont-email.me>, DaveFroble
> <da...@tsoft-inc.com> wrote:
>
>> chrisv wrote:
>>> Designed By India H1B Engineers wrote:
>>>
>>>> Crucially, these updates to both Linux and Windows will incur a
>>>> performance hit on Intel products. The effects are still being
>>>> benchmarked, however we're looking at a ballpark figure of five to 30
>>>> per cent slow down, depending on the task and the processor model.
>>>
>>> This is ugly.  Think of the large computing centers, for example
>>> Google's data centers.  Suddenly, they will need significantly more
>>> CPU time, and thus electricity (and thus carbon), to get the job done?
>>>
>>
>> And once all the spanners are tossed into the works, which will slow
>> things down, what happens when new CPUs without the issues are
>> available?  Will computers forever be artificially slowed down?
>>
>> A whole bunch of someones has seriously dropped the ball on this.
>> Protected memory should be just that, protected, with no way to avoid the
>> protection.
>
> But AIUI, the protection isn't applied when the CPU does speculative
> instruction execution. It's unclear why, though.
>

Because the designers, for performance reasons, have mapped kernel memory
into the user process address space and rely on the OS to check
protection before any kernel memory (or code) is accessed.

The problem with the current flaw is that the hardware (the CPU) does
these accesses "under the hood", without control by the OS.

If you map your kernel memory in another way that uses the hardware
protection facilities, you are (as I understand it) safe, at the cost
of worse performance when switching between user and kernel mode.


Roger Blake

Jan 4, 2018, 10:36:36 PM
On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
> Might I say that was an awesome post, sir.

His post was sheer idiocy. CO2 is not a pollutant - period.

Human caused "climate change/global warming" is junk science at
its worst. Even Reid Bryson, the scientist who was the father of
modern climate science, stated that it is "a bunch of hooey."

As I said, I absolutely refuse to reduce my own carbon emissions and
in fact continue to seek ways to increase them. (Do you dumbass hippies
really believe that your stoopid windmills and solar panels are capable
of keeping people warm and alive in the deep freeze that so much of the
U.S. is currently experiencing?)

JF Mezei

Jan 5, 2018, 1:37:14 AM
On 2018-01-04 17:04, Tim Streater wrote:

> But AIUI, the protection isn't applied when the CPU does speculative
> instruction execution. It's unclear why, though.


The protection IS applied. But...


Here is how it works:

-read 1 byte from memory location Y (to which your process does NOT
have access) into register x.

-move "1" to my_array[x]

So normally, you'd read a value from memory, and then use that value as
an index into a 256-byte array where you then store a "1" (the rest of
the array is all zeros).


The CPU starts to process the first instruction and sends the "read 1
byte from memory at virtual location Y" request to the memory subsystem.
While it waits for that to finish, it starts to preprocess the next
instruction, which eventually has to wait for the value from memory to
arrive.

The memory system does the virtual-to-physical translation, sends a
request to memory to read that byte, and then checks the translation
table to see if you have access to that memory.

The problem is that by the time it decides it was an illegal memory
access, the request to read that byte has already gone to memory, and
the data will return to an internal CPU register (not to register x).
The instruction will not complete and will not store the value in
register x, because execution goes to the exception handler for the
illegal memory access.

The next instruction, all prepped and ready, is allowed to go the
microsecond the memory contents have been received into the internal
CPU register. One reason for this is that it doesn't do anything
visible with the value (it is not stored elsewhere); it is simply used
as an index into an array.

That array is modified by that second instruction, but the write only
goes as far as the cache, not to RAM, before the exception handler runs
things down and undoes the instructions.

Here is the trick: a second process keeps flushing the CPU's cache and
checking for changes. When process 1 changes the array, even though the
change doesn't go to RAM, it is visible in the cache, and the second
process sees that the 57th byte in the array has been modified with a
"1". This means that the value read from kernel memory was 57.

You need to repeat that for every byte read.


Peter Köhlmann

Jan 5, 2018, 4:04:09 AM
Roger Blake wrote:

> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> Might I say that was an awesome post, sir.
>
> His post was sheer idiocy. CO2 is not a pollutant - period.
>
> Human caused "climate change/global warming" is junk science at
> its worst.

Idiot

Chris Ahlstrom

Jan 5, 2018, 6:14:37 AM
Roger Blake wrote this copyrighted missive and expects royalties:

> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> Might I say that was an awesome post, sir.
>
> His post was sheer idiocy. CO2 is not a pollutant - period.

How about you fill your room with CO2 for a couple hours, Roge?

> <rest of head-in-the-sand post snipped>

--
Q: Why do ducks have big flat feet?
A: To stamp out forest fires.

Q: Why do elephants have big flat feet?
A: To stamp out flaming ducks.

Jan-Erik Soderholm

Jan 5, 2018, 6:34:05 AM
On 2018-01-05 at 04:36, Roger Blake wrote:
> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> Might I say that was an awesome post, sir.
>
> His post was sheer idiocy. CO2 is not a pollutant - period.
>

No, it is a natural part of the atmosphere, but it is a balance.
It has to be in the right proportions. Too much (in particular
if we continue to burn fossil fuels that add carbon that was
bound millions of years ago) and the climate will be hurt.

> Human caused "climate change/global warming" is junk science at
> its worst. Even Reid Bryson, the scientist who was the father of
> modern climate science, stated that it is "a bunch of hooey."
>

I could probably name the scientists that hold the opposite view, but
the space in one posting would not be enough.

And why pick one that has been dead for 10 years? The views on global
warming have changed over the years, and a lot has happened in the last
decade.

> As I said, I absolutely refuse to reduce my own carbon emissions and
> in fact continue to see ways to increase them.

OK, fine. You'll be sorry and your children will be hurt. But then, if
you could reduce your CO2 emissions, what would be the issue?

> (Do you dumbass hippies
> really believe that your stoopid windmills are solar panels are capable
> of keeping people warm and alive in the deep freeze that so much of the
> U.S. is currently experiencing?)

That weather phenomenon is probably also caused by the climate being
disturbed by CO2 emissions. So in the case of the current US weather
issues, you could say that it is, in a way, self-inflicted.

Anyway, you could probably start with more efficient cars, shutting down
all AC equipment and so on. This cold is just a temporary storm and
has little to do with the overall climate issues. One cannot use the
amount of snow in the back garden to judge the climate at large.

chrisv

Jan 5, 2018, 8:05:46 AM
Chris Ahlstrom wrote:

> Roger Blake wrote:
>>
>> chrisv wrote:
>>>
>>> Might I say that was an awesome post, sir.
>>
>> His post was sheer idiocy. CO2 is not a pollutant - period.
>
>How about you fill your room with CO2 for a couple hours, Roge?
>
>> <rest of head-in-the-sand post snipped>

It was the expected illogical, dishonest attack, from the
right-winger.

As if there isn't a middle ground between doing nothing at all and
(absurdly) depending entirely upon renewable energy.

Sheesh.

--
"Nevermind if the game is fair or not, it is the winning and the
losing that matters, eh?" - Rat

Alan Browne

Jan 5, 2018, 8:45:11 AM
On 2018-01-04 12:44, Roger Blake wrote:
> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> This is ugly. Think of the large computing centers, for example
>> Google's data centers. Suddenly, they will need significantly more
>> CPU time, and thus electricity (and thus carbon), to get the job done?
>
> Carbon is not a pollutant, except in the "minds" of left-wing loons,

When the truth is inconvenient, deny it.

Alan Browne

Jan 5, 2018, 8:51:02 AM
On 2018-01-04 15:43, DaveFroble wrote:
> chrisv wrote:
>> Designed By India H1B Engineers wrote:
>>
>>> Crucially, these updates to both Linux and Windows will incur a
>>> performance hit on Intel products. The effects are still being
>>> benchmarked, however we're looking at a ballpark figure of five to 30
>>> per cent slow down, depending on the task and the processor model.
>>
>> This is ugly.  Think of the large computing centers, for example
>> Google's data centers.  Suddenly, they will need significantly more
>> CPU time, and thus electricity (and thus carbon), to get the job done?
>>
>
> And once all the spanners are tossed into the works, which will slow
> things down, what happens when new CPUs without the issues are
> available?  Will computers forever be artificially slowed down?
>
> A whole bunch of someones has seriously dropped the ball on this.
> Protected memory should be just that, protected, with no way to avoid
> the protection.

I presume it's an implementation flaw, not a principle-of-design flaw.
So once addressed, it should result in both proper memory protection and
increased performance in future cores. Alas (per the article) this
can't be addressed with a microcode patch.

--
“When it is all said and done, there are approximately 94 million
full-time workers in private industry paying taxes to support 102
million non-workers and 21 million government workers.
In what world does this represent a strong job market?”
.Jim Quinn

Wolf K

Jan 5, 2018, 8:59:25 AM
On 2018-01-05 08:50, Alan Browne wrote:
> “When it is all said and done, there are approximately 94 million
>  full-time workers in private industry paying taxes to support 102
>  million non-workers and 21 million government workers.
>  In what world does this represent a strong job market?”
> .Jim Quinn

The real world. Without consumers, there would be no "job market".

--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."

Alan Browne

Jan 5, 2018, 9:02:58 AM
On 2018-01-04 07:56, Designed By India H1B Engineers wrote:
> Performance hits loom, other OSes need fixes

Stunning. As someone else mentioned, the electricity hit alone could be
enormous. Not just in data centres but across all Intel users.

Class action suit against Intel, I think. This is on the level of the
FDIV bug of yore. Worse in some sense.

Must be full cigar time at AMD...

Curiously, Apple claim that their mitigation of this has no measurable
effect on one aspect (CVE-2017-5754 or "rogue data cache load" - aka
Meltdown) of the flaw (using 3rd party benchmarking s/w); and "only"
2.5% slowdown in one of three benchmarks for the other flaws (
CVE-2017-5753 or "bounds check bypass," and CVE-2017-5715 or "branch
target injection.").
https://support.apple.com/en-us/HT208394

This makes me wonder if the article's claims of 5 - 30% slowdown (post
OS fix [Windows, Linux]) are exaggerated or if Apple's fix is either
miraculous or incomplete.

--

Alan Browne

Jan 5, 2018, 9:05:12 AM
On 2018-01-05 08:59, Wolf K wrote:
> On 2018-01-05 08:50, Alan Browne wrote:
>> “When it is all said and done, there are approximately 94 million
>>   full-time workers in private industry paying taxes to support 102
>>   million non-workers and 21 million government workers.
>>   In what world does this represent a strong job market?”
>> .Jim Quinn
>
> The real world. Without consumers, there would be no "job market".

Double congratulations are in order, Sir.

1) Replying to sigs is very lame,
2) Misunderstanding the fundamental meaning, more so.

--

DaveFroble

unread,
Jan 5, 2018, 9:06:19 AM1/5/18
to
Don't feed the troll. Oops, I just did ...

Bill Gunshannon

unread,
Jan 5, 2018, 9:06:45 AM1/5/18
to
On 01/05/2018 08:50 AM, Alan Browne wrote:
> On 2018-01-04 15:43, DaveFroble wrote:
>> chrisv wrote:
>>> Designed By India H1B Engineers wrote:
>>>
>>>> Crucially, these updates to both Linux and Windows will incur a
>>>> performance hit on Intel products. The effects are still being
>>>> benchmarked, however we're looking at a ballpark figure of five to
>>>> 30 per cent slow down, depending on the task and the processor model.
>>>
>>> This is ugly.  Think of the large computing centers, for example
>>> Google's data centers.  Suddenly, they will need significantly more
>>> CPU time, and thus electricity (and thus carbon), to get the job done?
>>>
>>
>> And once all the spanners are tossed into the works, which will slow
>> things down, what happens when new CPUs without the issues are
>> available?  Will computers forever be artificially slowed down?
>>
>> A whole bunch of someones has seriously dropped the ball on this.
>> Protected memory should be just that, protected, with no way to avoid
>> the protection.
>
> I presume it's an implementation flaw, not a principle-of-design flaw.
> So once addressed, it should result in both proper memory protection and
> increased performance in future cores.  Alas (per the article) this
> can't be addressed with a microcode patch.
>

Sounds more like a "principle-of-design" flaw to me. Hard to
believe all those different companies made the same mistake while
building on a sound design.

bill

Alan Browne

unread,
Jan 5, 2018, 9:10:33 AM1/5/18
to
On 2018-01-04 12:56, Jan-Erik Soderholm wrote:
> Den 2018-01-04 kl. 18:44, skrev Roger Blake:

>> Carbon is not a pollutant, except in the "minds" of left-wing loons,
>> so that is not of any importance. (I certainly refuse to lower my carbon
>> output. Environmentalist scum who desire to lower theirs are welcome to
>> stop breathing. However, I digress.)
>>
>
> The carbon you breathe comes from the food you eat. No problem.
>
> But much of the carbon that we let out was bound millions of
> years ago (fossil fuels). *That* is a major problem.

Indeed, we've "unlocked" sequestered carbon. I'm not especially against
that - but we've done it in such a recklessly fast (wasteful,
inefficient, and polluting) manner that the "system" doesn't have time
to absorb the damage in a reasonable way.

>
> Burning biological fuels (grown in the last 100 years) is not
> a problem either.
>
> Your other statements are purely childish and uneducated.

+1 generally but -10 for attempting to help a hopelessly closed mind.

Alan Browne

unread,
Jan 5, 2018, 9:11:37 AM1/5/18
to
On 2018-01-04 22:36, Roger Blake wrote:
> As I said, I absolutely refuse to reduce my own carbon emissions and
> in fact continue to see ways to increase them. (Do you dumbass hippies
> really believe that your stoopid windmills and solar panels are capable
> of keeping people warm and alive in the deep freeze that so much of the
> U.S. is currently experiencing?)

Oh dear. Another retard equating the weather to climate. Sad.

DaveFroble

unread,
Jan 5, 2018, 9:15:16 AM1/5/18
to
As I wrote, someone dropped the ball on this one.

Speculative execution is part of the HW, not software. It appears the HW
doesn't follow its own rules. Or perhaps I don't actually understand the problem?

Alan Browne

unread,
Jan 5, 2018, 9:20:42 AM1/5/18
to
Well, no matter how much they try, even the most pushy climate
scientists cannot link a given weather event to global warming. They
can state that there's "possibly"/"probably" some link, but not to a
causal level.

That said, the current east coast weather event is linked to an
extreme jet-stream condition near Alaska and the bomb-cyclone
development. So one _weather_ scientist has said both are extreme and
_likely_ linked to climate change and that the combination of both at
the same time just exacerbates the whole.

> Anyway, you could probably start with more efficient cars, shutting down
> all AC equipment and so on. This cold is just a temporary storm and
> has little to do with the overall climate issues. One cannot use the
> amount of snow in the back garden to judge the climate at large.

Exactly. Weather ≠ Climate.

Jan-Erik Soderholm

unread,
Jan 5, 2018, 9:22:11 AM1/5/18
to
As I understand it, as in Linux, kernel memory is mapped into each user
process's memory space (for performance reasons). The speculative fetch done
by the hardware can read kernel memory directly. And when the protection
checks detect this, the data is already in the internal CPU cache.

The solution seems to be to separate kernel and user memory into separate
virtual memory areas. So a remap of the memory mapping is needed each
time the process needs to read kernel memory, and that adds a performance cost.

And yes, it looks like different "levels" in the hardware are a bit
out of sync...
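The mechanism described above can be sketched as a toy simulation
(Python; the "cache", the secret value, and all names here are invented
for illustration - this models the effect, it is not a real exploit):

```python
# Toy simulation of the Meltdown-style leak described above.
# Nothing touches real hardware: the "cache" is a Python set, and
# "speculative execution" is modelled by touching the cache before
# the privilege check raises. All names/values are invented.

CACHE_LINE = 64          # bytes per (simulated) cache line
SECRET = 42              # a byte in simulated kernel memory

cache = set()            # set of cached line numbers

def load(addr):
    """Simulated memory load: leaves a footprint in the cache."""
    cache.add(addr // CACHE_LINE)

def probe_is_hot(addr):
    """Simulated timing probe: a cached line 'loads fast'."""
    return (addr // CACHE_LINE) in cache

def speculative_kernel_read(probe_base):
    # The CPU speculatively reads the kernel byte and uses it to
    # index a user-space probe array - *before* the privilege check
    # fires. The fault rolls back registers, but not the cache.
    load(probe_base + SECRET * CACHE_LINE)   # speculative access
    raise PermissionError("page fault (registers rolled back)")

def recover_secret():
    probe_base = 0
    try:
        speculative_kernel_read(probe_base)
    except PermissionError:
        pass                         # attacker survives the fault
    # Flush+reload step: find which probe line became hot.
    for byte in range(256):
        if probe_is_hot(probe_base + byte * CACHE_LINE):
            return byte
    return None

print(recover_secret())   # recovers 42 without an architectural read
```

The KPTI fix described above removes the kernel mapping entirely while
user code runs, so the speculative load has no kernel byte to read in
the first place.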





Alan Browne

unread,
Jan 5, 2018, 9:24:35 AM1/5/18
to
On 2018-01-05 09:15, DaveFroble wrote:
> Jan-Erik Soderholm wrote:

>> Because the designers, for performance reasons, have mapped kernel memory
>> into the user process address space and rely on the OS to check
>> protection before any kernel memory (or code) is accessed.
>>
>> The issue this time is that the hardware (the CPU) does
>> these accesses in hardware "under the hood" without control by the OS.
>>
>> If you map your kernel memory in another way that uses the hardware
>> protection facilities, you are (as I understand it) safe, at the cost
>> of worse performance when switching between user and kernel mode.
>>
>>
>
> As I wrote, someone dropped the ball on this one.
>
> Speculative execution is part of the HW, not software.  It appears the
> HW doesn't follow its own rules.  Or perhaps I don't actually
> understand the problem?

At least as well as I do. These are very complex mechanisms and
complexity is usually where you're most likely to get problems.

In this case the h/w implementation didn't reflect the design goal.

This means Intel had very poor design review and abysmal testing of
security features.

Alan Browne

unread,
Jan 5, 2018, 9:29:26 AM1/5/18
to
Call it as you like; I'll stick to my version. "All"? What? 2?

They have similar design goals, so similar attacks on the problem
aren't a surprise. Indeed, in their communities ideas fly around
somewhat freely before implementation puts them under proprietary
"protection".

The failure (I'm speculating) is in the design implementation and in a
failure to test the intent of the security in an OS environment.

Further speculation: the CPU h/w designers are a couple of steps away
from OS designers, and their understanding of OS concerns doesn't see
clearly how to design the security tests.

Doomsdrzej

unread,
Jan 5, 2018, 10:12:01 AM1/5/18
to
On Fri, 5 Jan 2018 03:36:34 -0000 (UTC), Roger Blake
<rogb...@iname.invalid> wrote:

>On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>> Might I say that was an awesome post, sir.
>
>His post was sheer idiocy. CO2 is not a pollutant - period.
>
>Human caused "climate change/global warming" is junk science at
>its worst. Even Reid Bryson, the scientist who was the father of
>modern climate science, stated that it is "a bunch of hooey."
>
>As I said, I absolutely refuse to reduce my own carbon emissions and
>in fact continue to see ways to increase them. (Do you dumbass hippies
>really believe that your stoopid windmills and solar panels are capable
>of keeping people warm and alive in the deep freeze that so much of the
>U.S. is currently experiencing?)

I _refuse_ to buy an electric car which has horrible range, little
storage, and looks absolutely awful, in the hope that mining lithium to
power them somehow causes less pollution than driving a regular,
gas-burning car.

I want power in my vehicle as well as the ability to drive as far as I
want to and that is something electric cars will never allow for.

Doomsdrzej

unread,
Jan 5, 2018, 10:13:45 AM1/5/18
to
Another thought-provoking and irrefutable post by Mainz's greatest
export, Peter the Klöwn.

Doomsdrzej

unread,
Jan 5, 2018, 10:20:27 AM1/5/18
to
On Fri, 5 Jan 2018 12:33:59 +0100, Jan-Erik Soderholm
<jan-erik....@telia.com> wrote:

>Den 2018-01-05 kl. 04:36, skrev Roger Blake:
>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>> Might I say that was an awesome post, sir.
>>
>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>
>
>No, it is a natural part of the atmosphere, but it is a balance.
>It has to be in the right proportions. Too much (in particular
>if we continue to burn fossil fuels that add carbon that was
>bound millions of years ago) and the climate will be hurt.

You can't _hurt_ climate. The Earth always balances itself out and
there are thousands of years of data showing this. Some periods are
cold; some periods are warm. In the end, there is a balance regardless
of what its living creatures do.

>> Human caused "climate change/global warming" is junk science at
>> its worst. Even Reid Bryson, the scientist who was the father of
>> modern climate science, stated that it is "a bunch of hooey."
>>
>
>I could probably name the scientists that have the opposite view, but
>the space in one posting would not be enough.
>
>And why pick one that has been dead for 10 years? The views on global
>warming have changed over the years and a lot has happened in the last decade.

Please demonstrate how.

>> As I said, I absolutely refuse to reduce my own carbon emissions and
>> in fact continue to see ways to increase them.
>
>OK, fine. You'll be sorry and your children will be hurt. But then, if
>you could reduce your CO2 emissions, what would be the issue?

Reducing CO2 emissions should be voluntary in the same way that
companies having a $15 minimum wage should be voluntary. In the United
States, some companies did so, and as a result showed that they can
afford to pay people that well without any kind of consequences. In
Ontario, for instance, the $15 minimum wage was forced, and companies
now have to cut back somehow to afford to pay people that well. The
liberal approach to CO2 emissions involves forcing companies and people
to make significant sacrifices, and the end result is that it will do
damage to the economy and the standard of living in the _hope_ that we
will somehow be able to slow the inevitable in a very insignificant
way, at a time when none of us will still be alive. The best
governments _should_ hope for is to raise awareness of the potential
problem and encourage people to make whatever changes they can, which
is not at all what they've been doing with schemes like the Paris
Climate Accord.

>> (Do you dumbass hippies
>> really believe that your stoopid windmills and solar panels are capable
>> of keeping people warm and alive in the deep freeze that so much of the
>> U.S. is currently experiencing?)
>
>That weather phenomenon is probably also caused by the disturbed climate
>caused by the CO2 emissions. So in the case of the current US weather
>issues, you could say that it is, in a way, self-inflicted.
>
>Anyway, you could probably start with more efficient cars, shutting down
>all AC equipment and so on. This cold is just a temporary storm and
>has little to do with the overall climate issues. One cannot use the
>amount of snow in the back garden to judge the climate at large.

Just watch this:

<https://hooktube.com/watch?v=NjlC02NsIt0>

chrisv

unread,
Jan 5, 2018, 10:32:38 AM1/5/18
to
Doomsdrzej wrote:

>The Earth always balances itself out and
>there are thousands of years of data showing this. Some periods are
>cold; some periods are warm. In the end, there is a balance regardless
>of what its living creatures do.

What the right-wing propagandists always "forget" is that, while there
are compensating mechanisms that tend to bring the climate back to an
equilibrium, the forces are gentle, and they work on the time-scale of
millennia. They cannot cope with a violent change in conditions
occurring over a short period of time, as in the last 100 years.

--
"Tell us again how Windows 95 is not DOS-based, Peter." - "Slimer",
AKA "Doomsdrzej", putting his ignorance on display

Wolf K

unread,
Jan 5, 2018, 10:34:18 AM1/5/18
to
On 2018-01-05 09:05, Alan Browne wrote:
> On 2018-01-05 08:59, Wolf K wrote:
>> On 2018-01-05 08:50, Alan Browne wrote:
>>> “When it is all said and done, there are approximately 94 million
>>>   full-time workers in private industry paying taxes to support 102
>>>   million non-workers and 21 million government workers.
>>>   In what world does this represent a strong job market?”
>>> - Jim Quinn
>>
>> The real world. Without consumers, there would be no "job market".
>
> Double congratulations are in order Sir.
>
> 1) Replying to sigs is very lame,
> 2) Misunderstanding the fundamental meaning, more so.
>

I parsed your sig generously. As written, it's not even wrong.

Alan Browne

unread,
Jan 5, 2018, 10:37:34 AM1/5/18
to
On 2018-01-05 10:32, chrisv wrote:
> What the right-wing propagandists always "forget" is that, while there
> are compensating mechanisms that tend to bring the climate back to an
> equilibrium, the forces are gentle, and they work on the time-scale of
> millennia. They cannot cope with a violent change in conditions
> occurring over a short period of time, as in the last 100 years.

+10

--

Alan Browne

unread,
Jan 5, 2018, 10:43:55 AM1/5/18
to
A funny thing about your sort is that you believe putting down others
makes your point valid, and that resonates in your little echo chambers
as some sort of truth. While there is wisdom in crowds, that only works
when everyone's decision is independent. You have the independence of a
particularly dull sheep. That's why alpha-idiots like Twump make little
honest effort to enlist you: not worth the expense; you come nearly free.

Meanwhile, scientists worldwide are investigating climate change, with
each passing year narrowing the doubt about current anthropogenic
climate change to the statistical exclusion of all other causes.

Wolf K

unread,
Jan 5, 2018, 10:51:34 AM1/5/18
to
On 2018-01-05 09:22, Jan-Erik Soderholm wrote:

[snip remarks about probable performance hits]
> As I understand it, as in Linux, kernel memory is mapped into each user
> process's memory space (for performance reasons). The speculative fetch done
> by the hardware can read kernel memory directly. And when the protection
> checks detect this, the data is already in the internal CPU cache.
>
> The solution seems to be to separate kernel and user memory into separate
> virtual memory areas. So a remap of the memory mapping is needed each
> time the process needs to read kernel memory, and that adds a performance cost.
>
> And yes, it looks like different "levels" in the hardware are a bit
> out of sync...

So, in order to reduce the performance hit, would it make sense to
redesign the CPU with a larger on-board cache to store both kernel and
user memory? Or, what am I missing in the protected memory concept?

Anyhow, I think most users will see no performance hit. I mean, how many
people are rendering CGI on their laptops? Etc.

I'm more worried about server farms used by big data, banks, ISPs etc.
These already show performance hits tied to time-of-day, as user access
(ie demand) varies. Even a few % slowdown in overall throughput will be
noticeable at peak demand times.

Alan Browne

unread,
Jan 5, 2018, 11:08:06 AM1/5/18
to
On 2018-01-05 10:51, Wolf K wrote:

> So, in order to reduce the performance hit, would it make sense to
> redesign the CPU with a larger on-board cache to store both kernel and
> user memory? Or, what am I missing in the protected memory concept?

Ignoring that the only fix will be in future (or currently in
pre-production) CPUs, the fix could be done with the same-sized caches
but correctly implemented. It could be that the "correct fix" is itself
less efficient overall than the goal (execution speed). It could be
that such a fix would reduce the amount of cache available to
kernel/user space and thus have an impact too.

New CPUs can have the luxury of more cache in any case, and so can add
even more to help with the issue.

Wow. What a long-winded way to say: "who knows?".

>
> Anyhow, I think most users will see no performance hit. I mean, how many
> people are rendering CGI on their laptops? Etc.
>
> I'm more worried about server farms used by big data, banks, ISPs etc.
> These already show performance hits tied to time-of-day, as user access
> (ie demand) varies. Even a few % slowdown in overall throughput will be
> noticeable at peak demand times.

That depends on the server farm. Well designed, they scale up cheaply
(upscaling effort) but expensively (hardware + energy). There is cost.

AWS-style platforms can upscale dynamically according to load (and the
contract with the service client). Of course the service client will be
sad to see his costs with the platform go up and will have to find a
way to get more revenue.

All that said, the 5 - 30% load increase seems to be speculative /
theoretical. And maybe you're right that it will mostly affect people
using their computers to the hilt a lot of the time, and not most
"casual" users. Alas, I do render videos often but can't say my Mac
(OS 10.13.2) has seen any impact (nor can I say it hasn't - nothing
noticeable, IOW).

Maybe under 10.13.3, when Apple is rumoured to "complete" its defense
against Meltdown and/or Spectre, we'll perceive the hit.

Wolf K

unread,
Jan 5, 2018, 11:10:24 AM1/5/18
to
On 2018-01-05 10:20, Doomsdrzej wrote:
[...]
> You can't _hurt_ climate. The Earth always balances itself out and
> there are thousands of years of data showing this. Some periods are
> cold; some periods are warm.
[...]

True, but when climate changes too fast, a lot of living things get
hurt. This time round, it will be us. Or rather, our children and
grandchildren. Humans have a depressing history of hubris in their
dealings with the Earth. We tend to think that because things are going
our way, we are somehow in control of the Earth and its systems. Acting
on that delusion never ends well.

Case in point: the Viking Greenland settlements. They established a
northern-temperate-zone farming system, which worked for a couple of
hundred years or so. Then the local climate cooled about 1/2 C (about
1 F). Result: those farms were no longer viable. The Vikings despised
the Skraelings and refused to learn from them. So the settlements
disappeared. That is, the Vikings starved to death.

> In the end, there is a balance regardless
> of what its living creatures do.

That is simply not true. Or rather _complexly_ not true. It's the
complexity that's the problem. Humans have a terrible time thinking in
complex-system terms.

Very small changes in the systems can have very large effects, both
short term and long term. They are non-linear systems. The rebalancing
you refer to can be rather drastic.

JF Mezei

unread,
Jan 5, 2018, 12:22:09 PM1/5/18
to
On 2018-01-05 11:57, Tim Streater wrote:

> 1) Process 2, you say, keeps flushing the CPU cache. How does it do
> this without root priv? And if it has that priv, there must be easier
> ways to get access to vital info without doing what you describe.


That is an issue which was not addressed by the Meltdown white paper.
But the transfer of data depends on the other process having access to
the CPU cache, because that is the only way the data fetched by process
1 can be transferred to another process.

Without the cache technique, process 1 might have fetched the data it
wasn't supposed to, but that data would not have gone to a register or
to another location in memory, as the exception would have stopped
everything before the instructions were committed.

The cache trick is a means for the dying process to leave breadcrumbs
that another process can pick up.

> 2) Process 1 reads from protected memory and will get an exception for
> each byte it tries to read. One supposes it has arranged recovery from
> these exceptions so it can try another byte-read. Wouldn't a prudent OS
> terminate any process that accumulates more than, say, 10 exceptions of
> this nature?


Yep. But the other way to do it is for the process itself to declare
its own exception handler, so that even though the instructions are
rolled back, the process can continue with an attempt to read the next
byte (this gets into performance design for reading all of the RAM).
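The breadcrumb-reading step is usually described as flush+reload: flush
every probe line, let the doomed access run, then time a reload of each
line and keep the fast one. A rough sketch of just the
timing-classification part (Python; the latencies and threshold are
made-up numbers - real code would use rdtsc and clflush):

```python
import random

HIT_NS, MISS_NS = 40, 220     # invented latencies: cache hit vs miss
THRESHOLD_NS = 120            # classify below this as "cached"

def timed_reload(line, hot_line):
    """Simulated timed reload: the hot line is fast, the rest slow."""
    base = HIT_NS if line == hot_line else MISS_NS
    return base + random.uniform(0, 20)      # a little noise

def classify_hot_line(hot_line, n_lines=256):
    """Return the indices whose reload time says 'this was touched'."""
    return [line for line in range(n_lines)
            if timed_reload(line, hot_line) < THRESHOLD_NS]

# Suppose the victim's speculative access touched line 0x61 ('a'):
print(classify_hot_line(0x61))   # -> [97]
```

Combined with the self-declared exception handler mentioned above, an
attacker can repeat this classification once per byte of interesting
memory.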



Doomsdrzej

unread,
Jan 5, 2018, 1:46:02 PM1/5/18
to
On Fri, 5 Jan 2018 10:43:48 -0500, Alan Browne
<bitb...@blackhole.com> wrote:

>On 2018-01-05 10:13, Doomsdrzej wrote:
>> On Fri, 05 Jan 2018 10:04:06 +0100, Peter Köhlmann
>> <peter-k...@t-online.de> wrote:
>>
>>> Roger Blake wrote:
>>>
>>>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>>>> Might I say that was an awesome post, sir.
>>>>
>>>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>>>
>>>> Human caused "climate change/global warming" is junk science at
>>>> its worst.
>>>
>>> Idiot
>>
>> Another thought-provoking and irrefutable post by Mainz's greatest
>> export, Peter the Klöwn.
>
>A funny thing about your sort is you believe that putting down others
>makes your point valid and that resonates in your little echo chambers
>as some sort of truth.

Says the hypocrite who just defended someone calling another poster an
"idiot."

> While there is wisdom in crowds...

... there is none in your posts.

*plonk*

Wolf K

unread,
Jan 5, 2018, 3:55:22 PM1/5/18
to
On 2018-01-05 11:41, Tim Streater wrote:
> In article <q16v4d1k940dhta8g...@4ax.com>, chrisv
> <chr...@nospam.invalid> wrote:
>
>> Doomsdrzej wrote:
>>
>>> The Earth always balances itself out and
>>> there are thousands of years of data showing this. Some periods are
>>> cold; some periods are warm. In the end, there is a balance regardless
>>> of what its living creatures do.
>>
>> What the right-wing propagandists always "forget" is that, while there
>> are compensating mechanisms that tend to bring the climate back to an
>> equilibrium ...
>
> This in itself is meaningless. Nature itself is never in balance.
>

Also true. But over long periods of time it oscillates around an average
sequence of states, e.g. the seasonal cycles which we call "climate", or
plant and animal populations that we call "ecosystems". Etc.

These systems are chaotic, i.e., non-linear. At some point, some small
extra load or disturbance in some part of the system will trigger a
rapid "rebalancing" of the whole thing. It won't be pretty, but it will
be interesting.

DaveFroble

unread,
Jan 5, 2018, 4:01:00 PM1/5/18
to
Alan Browne wrote:
> On 2018-01-05 09:15, DaveFroble wrote:
>> Jan-Erik Soderholm wrote:
>
>>> Because the designers, for performance reasons, have mapped kernel memory
>>> into the user process address space and rely on the OS to check
>>> protection before any kernel memory (or code) is accessed.
>>>
>>> The issue this time is that the hardware (the CPU) does
>>> these accesses in hardware "under the hood" without control by the OS.
>>>
>>> If you map your kernel memory in another way that uses the hardware
>>> protection facilities, you are (as I understand it) safe, at the cost
>>> of worse performance when switching between user and kernel mode.
>>>
>>>
>>
>> As I wrote, someone dropped the ball on this one.
>>
>> Speculative execution is part of the HW, not software.  It appears the
>> HW doesn't follow its own rules.  Or perhaps I don't actually
>> understand the problem?
>
> At least as well as I do. These are very complex mechanisms and
> complexity is usually where you're most likely to get problems.
>
> In this case the h/w implementation didn't reflect the design goal.
>
> This means intel had very poor design review and abysmal testing of
> security features.
>

There seem to be a whole bunch of us "speculating" about things we
probably don't know enough about.

:-)

It seems to me that before memory is fetched into cache, the CPU should be
determining whether it should indeed be fetching that memory. Yeah, sounds
simple, but I'm betting it isn't.

DaveFroble

unread,
Jan 5, 2018, 4:04:48 PM1/5/18
to
I wonder whether VAX would have these problems?

:-)

DaveFroble

unread,
Jan 5, 2018, 4:26:33 PM1/5/18
to
Doomsdrzej wrote:
> On Fri, 5 Jan 2018 03:36:34 -0000 (UTC), Roger Blake
> <rogb...@iname.invalid> wrote:
>
>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>> Might I say that was an awesome post, sir.
>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>
>> Human caused "climate change/global warming" is junk science at
>> its worst. Even Reid Bryson, the scientist who was the father of
>> modern climate science, stated that it is "a bunch of hooey."
>>
>> As I said, I absolutely refuse to reduce my own carbon emissions and
>> in fact continue to see ways to increase them. (Do you dumbass hippies
>> really believe that your stoopid windmills are solar panels are capable
>> of keeping people warm and alive in the deep freeze that so much of the
>> U.S. is currently experiencing?)

I'm going to hate myself for doing this ...

> I _refuse_ to buy an electic car which has horrible range, little
> storage and looks absolutely awful in the hope that mining lithium to
> power them somehow causes less pollution than driving a regular,
> gas-burning car.

Range, storage, and looks have nothing to do with electric vs gasoline. If some
goofy designer feels he has to make an electric car look like a golf cart,
that's his decision, not reality. As for pollution, it depends on how the
electricity is produced. How is mining any worse than drilling for oil?

> I want power in my vehicle as well as the ability to drive as far as I
> want to and that is something electric cars will never allow for.

An electric vehicle can have plenty of power. Just as in a
gasoline-fueled car, it depends on how much energy one wishes to
expend. As for distance, you can only go as far as the next gas
station. An empty tank or a dead battery both leave you walking.

Bill Gunshannon

unread,
Jan 5, 2018, 4:30:19 PM1/5/18
to
VAX didn't have the capabilities that led to this problem.
I think Alpha does, however.

bill

DaveFroble

unread,
Jan 5, 2018, 5:41:12 PM1/5/18
to
:-)

Yeah, I know that. No predictive speculation in VAX.

Now, as for Alpha, yes, OoO and such, but the question would be: does it
allow "illegal" access to memory? If Alpha does not load into cache
memory it should not access, then perhaps it's not a problem.

Jan-Erik Soderholm

unread,
Jan 5, 2018, 5:50:40 PM1/5/18
to
> allow "illegal" access to memory?...

The problem with the current issue seems to be that the speculative
pre-fetch is done at a lower level than the page-protection checks.
When the page protection kicks in, the fetch has already been done.

I have no idea how that is designed on the Alpha.

Pabst Blue Ribbon

unread,
Jan 5, 2018, 6:01:40 PM1/5/18
to
Designed By India H1B Engineers <h...@intel.com> wrote:
> Performance hits loom, other OSes need fixes
>
> Updated A fundamental design flaw in Intel's processor chips has
> forced a significant redesign of the Linux and Windows kernels
> to defang the chip-level security bug.
>
> Programmers are scrambling to overhaul the open-source Linux
> kernel's virtual memory system. Meanwhile, Microsoft is expected
> to publicly introduce the necessary changes to its Windows
> operating system in an upcoming Patch Tuesday: these changes
> were seeded to beta testers running fast-ring Windows Insider
> builds in November and December.
>
> Crucially, these updates to both Linux and Windows will incur a
> performance hit on Intel products. The effects are still being
> benchmarked, however we're looking at a ballpark figure of five
> to 30 per cent slow down, depending on the task and the
> processor model. More recent Intel chips have features – such as
> PCID – to reduce the performance hit. Your mileage may vary.
>
>
> The Register @TheRegister
> PostgreSQL SELECT 1 with the KPTI workaround for Intel CPU
> vulnerability https://www.postgresql.org/message-
> id/20180102222354....@alap3.anarazel.de
>
> Best case: 17% slowdown
> Worst case: 23%
>
> 3:58 PM - Jan 2, 2018
>
> Similar operating systems, such as Apple's 64-bit macOS, will
> also need to be updated – the flaw is in the Intel x86-64
> hardware, and it appears a microcode update can't address it. It
> has to be fixed in software at the OS level, or go buy a new
> processor without the design blunder.
>
> Details of the vulnerability within Intel's silicon are under
> wraps: an embargo on the specifics is due to lift early this
> month, perhaps in time for Microsoft's Patch Tuesday next week.
> Indeed, patches for the Linux kernel are available for all to
> see but comments in the source code have been redacted to
> obfuscate the issue.
>
> However, some details of the flaw have surfaced, and so this is
> what we know.
>
> Impact
> It is understood the bug is present in modern Intel processors
> produced in the past decade. It allows normal user programs –
> from database applications to JavaScript in web browsers – to
> discern to some extent the layout or contents of protected
> kernel memory areas.
>
> The fix is to separate the kernel's memory completely from user
> processes using what's called Kernel Page Table Isolation, or
> KPTI. At one point, Forcefully Unmap Complete Kernel With
> Interrupt Trampolines, aka FUCKWIT, was mulled by the Linux
> kernel team, giving you an idea of how annoying this has been
> for the developers.
>
> Whenever a running program needs to do anything useful – such as
> write to a file or open a network connection – it has to
> temporarily hand control of the processor to the kernel to carry
> out the job. To make the transition from user mode to kernel
> mode and back to user mode as fast and efficient as possible,
> the kernel is present in all processes' virtual memory address
> spaces, although it is invisible to these programs. When the
> kernel is needed, the program makes a system call, the processor
> switches to kernel mode and enters the kernel. When it is done,
> the CPU is told to switch back to user mode, and reenter the
> process. While in user mode, the kernel's code and data remains
> out of sight but present in the process's page tables.
>
> Think of the kernel as God sitting on a cloud, looking down on
> Earth. It's there, and no normal being can see it, yet they can
> pray to it.
>
> These KPTI patches move the kernel into a completely separate
> address space, so it's not just invisible to a running process,
> it's not even there at all. Really, this shouldn't be needed,
> but clearly there is a flaw in Intel's silicon that allows
> kernel access protections to be bypassed in some way.
>
> The downside to this separation is that it is relatively
> expensive, time wise, to keep switching between two separate
> address spaces for every system call and for every interrupt
> from the hardware. These context switches do not happen
> instantly, and they force the processor to dump cached data and
> reload information from memory. This increases the kernel's
> overhead, and slows down the computer.
>
> Your Intel-powered machine will run slower as a result.
>
> How can this security hole be abused?
> At best, the vulnerability could be leveraged by malware and
> hackers to more easily exploit other security bugs.
>
> At worst, the hole could be abused by programs and logged-in
> users to read the contents of the kernel's memory. Suffice to
> say, this is not great. The kernel's memory space is hidden from
> user processes and programs because it may contain all sorts of
> secrets, such as passwords, login keys, files cached from disk,
> and so on. Imagine a piece of JavaScript running in a browser,
> or malicious software running on a shared public cloud server,
> able to sniff sensitive kernel-protected data.
>
> Specifically, in terms of the best-case scenario, it is possible
> the bug could be abused to defeat KASLR: kernel address space
> layout randomization. This is a defense mechanism used by
> various operating systems to place components of the kernel in
> randomized locations in virtual memory. This mechanism can
> thwart attempts to abuse other bugs within the kernel:
> typically, exploit code – particularly return-oriented
> programming exploits – relies on reusing computer instructions
> in known locations in memory.
>
> If you randomize the placing of the kernel's code in memory,
> exploits can't find the internal gadgets they need to fully
> compromise a system. The processor flaw could be potentially
> exploited to figure out where in memory the kernel has
> positioned its data and code, hence the flurry of software
> patching.
>
> However, it may be that the vulnerability in Intel's chips is
> worse than the above mitigation bypass. In an email to the Linux
> kernel mailing list over Christmas, AMD said it is not affected.
> The wording of that message, though, rather gives the game away
> as to what the underlying cockup is:
>
> AMD processors are not subject to the types of attacks that the
> kernel page table isolation feature protects against. The AMD
> microarchitecture does not allow memory references, including
> speculative references, that access higher privileged data when
> running in a lesser privileged mode when that access would
> result in a page fault.
>
> A key word here is "speculative." Modern processors, like
> Intel's, perform speculative execution. In order to keep their
> internal pipelines primed with instructions to obey, the CPU
> cores try their best to guess what code is going to be run next,
> fetch it, and execute it.
>
> It appears, from what AMD software engineer Tom Lendacky was
> suggesting above, that Intel's CPUs speculatively execute code
> potentially without performing security checks. It seems it may
> be possible to craft software in such a way that the processor
> starts executing an instruction that would normally be blocked –
> such as reading kernel memory from user mode – and completes
> that instruction before the privilege level check occurs.
>
> That would allow ring-3-level user code to read ring-0-level
> kernel data. And that is not good.
>
> The specifics of the vulnerability have yet to be confirmed, but
> consider this: the changes to Linux and Windows are significant
> and are being pushed out at high speed. That suggests it's more
> serious than a KASLR bypass.
>
> Also, the updates to separate kernel and user address spaces on
> Linux are based on a set of fixes dubbed the KAISER patches,
> which were created by eggheads at Graz University of Technology
> in Austria. These boffins discovered [PDF] it was possible to
> defeat KASLR by extracting memory layout information from the
> kernel in a side-channel attack on the CPU's virtual memory
> system. The team proposed splitting kernel and user spaces to
> prevent this information leak, and their research sparked this
> round of patching.
>
> Their work was reviewed by Anders Fogh, who wrote this
> interesting blog post in July. That article described his
> attempts to read kernel memory from user mode by abusing
> speculative execution. Although Fogh was unable to come up with
> any working proof-of-concept code, he noted:
>
> My results demonstrate that speculative execution does indeed
> continue despite violations of the isolation between kernel mode
> and user mode.
>
> It appears the KAISER work is related to Fogh's research, and as
> well as developing a practical means to break KASLR by abusing
> virtual memory layouts, the team may have somehow proved Fogh
> right – that speculative execution on Intel x86 chips can be
> exploited to access kernel memory.
>
> Shared systems
> The bug will impact big-name cloud computing environments
> including Amazon EC2, Microsoft Azure, and Google Compute
> Engine, said a software developer blogging as Python Sweetness
> in this heavily shared and tweeted article on Monday:
>
> There is presently an embargoed security bug impacting
> apparently all contemporary [Intel] CPU architectures that
> implement virtual memory, requiring hardware changes to fully
> resolve. Urgent development of a software mitigation is being
> done in the open and recently landed in the Linux kernel, and a
> similar mitigation began appearing in NT kernels in November. In
> the worst case the software fix causes huge slowdowns in typical
> workloads.
>
> There are hints the attack impacts common virtualisation
> environments including Amazon EC2 and Google Compute Engine...
>
> Microsoft's Azure cloud – which runs a lot of Linux as well as
> Windows – will undergo maintenance and reboots on January 10,
> presumably to roll out the above fixes.
>
> Amazon Web Services also warned customers via email to expect a
> major security update to land on Friday this week, without going
> into details.
>
> There were rumors of a severe hypervisor bug – possibly in Xen –
> doing the rounds at the end of 2017. It may be that this
> hardware flaw is that rumored bug: that hypervisors can be
> attacked via this kernel memory access cockup, and thus need to
> be patched, forcing a mass restart of guest virtual machines.
>
> A spokesperson for Intel was not available for comment. ®
>
> Updated to add
> The Intel processor flaw is real. A PhD student at the systems
> and network security group at Vrije Universiteit Amsterdam has
> developed a proof-of-concept program that exploits the Chipzilla
> flaw to read kernel memory from user mode:
>
> View image on Twitter
> View image on Twitter
>
> brainsmoke
> @brainsmoke
> Bingo! #kpti #intelbug
>
> 6:28 AM - Jan 3, 2018
> 58 58 Replies 1,687 1,687 Retweets 2,362 2,362 likes
> Twitter Ads info and privacy
> The Register has also seen proof-of-concept exploit code that
> leaks a tiny amount of kernel memory to user processes.
>
> Finally, macOS has been patched to counter the chip design
> blunder since version 10.13.2, according to operating system
> kernel expert Alex Ionescu. And it appears 64-bit ARM Linux
> kernels will also get a set of KAISER patches, completely
> splitting the kernel and user spaces, to block attempts to
> defeat KASLR. We'll be following up this week.
>
> https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/
>
> --
> Windows 2000 Pro RC2 on Alpha.
>
>

I wonder if Intel will be sued over this. I also assume it's impossible
that Intel's high-ranking engineers were unaware of the issue; even if
that is the case, though, it would probably be hard to prove in court.

Pabst Blue Ribbon
Jan 5, 2018, 6:01:41 PM
Alan Browne <bitb...@blackhole.com> wrote:
> On 2018-01-05 09:15, DaveFroble wrote:
>> Jan-Erik Soderholm wrote:
>
>>> Because the designers, for performance reasons, have mapped kernel
>>> memory into the user process address space and rely on the OS to
>>> check protection before any kernel memory (or code) is accessed.
>>>
>>> The issue now is that the hardware (the CPU) does these accesses
>>> "under the hood", without control by the OS.
>>>
>>> If you map your kernel memory in another way that uses the hardware
>>> protection facilities, you are (as I understand it) safe, at the cost
>>> of worse performance when switching between user and kernel mode.
>>>
>>>
>>
>> As I wrote, someone dropped the ball on this one.
>>
>> Speculative execution is part of the HW, not software.  It appears the
>> HW doesn't follow its own rules.  Or, perhaps I don't actually
>> understand the problem?
>
> At least as well as I do. These are very complex mechanisms and
> complexity is usually where you're most likely to get problems.
>
> In this case the h/w implementation didn't reflect the design goal.
>
> This means intel had very poor design review and abysmal testing of
> security features.

I doubt it. Yes, it's an assumption, but I think Intel was aware and
gave the OK to the flawed design because of performance/cost.

Jan-Erik Soderholm
Jan 5, 2018, 6:06:48 PM
Den 2018-01-06 kl. 00:01, skrev Pabst Blue Ribbon:

[Why quote 100s of lines?]

> I wonder if Intel will be sued because of that.

https://www.theregister.co.uk/2018/01/05/intel_meltdown_cpu_flaw_sued/

"Here come the lawyers! Intel slapped with three Meltdown bug lawsuits!"

Craig A. Berry
Jan 5, 2018, 6:24:35 PM
On 1/5/18 4:41 PM, DaveFroble wrote:

> Now, as for Alpha, yes, OoO and such, but, the question would be, does
> it allow "illegal" access to memory?  If Alpha does not allow loading
> memory it should not into cache, then perhaps not a problem.

Alpha may not have any timer with precision sufficient to snoop the
cache. This is based on the fact that reducing timer precision is one of
the mitigations in progress, and that $GETTIM_PREC only came along in
v8.3-1H1, is Itanium-only, and even then only gives time to the nearest
1 ms, assuming I've understood this discussion correctly:

<http://h41379.www4.hpe.com/openvms/journal/v15/consistency_check.html>

Doomsdrzej
Jan 5, 2018, 6:54:00 PM
On Fri, 05 Jan 2018 16:26:30 -0500, DaveFroble <da...@tsoft-inc.com>
wrote:

>Doomsdrzej wrote:
>> On Fri, 5 Jan 2018 03:36:34 -0000 (UTC), Roger Blake
>> <rogb...@iname.invalid> wrote:
>>
>>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>>> Might I say that was an awesome post, sir.
>>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>>
>>> Human caused "climate change/global warming" is junk science at
>>> its worst. Even Reid Bryson, the scientist who was the father of
>>> modern climate science, stated that it is "a bunch of hooey."
>>>
>>> As I said, I absolutely refuse to reduce my own carbon emissions and
>>> in fact continue to see ways to increase them. (Do you dumbass hippies
>>> really believe that your stoopid windmills and solar panels are capable
>>> of keeping people warm and alive in the deep freeze that so much of the
>>> U.S. is currently experiencing?)
>
>I'm going to hate myself for doing this ...
>
>> I _refuse_ to buy an electic car which has horrible range, little
>> storage and looks absolutely awful in the hope that mining lithium to
>> power them somehow causes less pollution than driving a regular,
>> gas-burning car.
>
>Range, storage, and looks have nothing to do with electric vs gasoline. If some
>goofy designer feels he has to make an electric car look like a golf cart,
>that's his decision, not reality. As for pollution, it depends on how the
>electricity is produced. How is mining any worse than drilling for oil?

The electric car HAS to look somewhat bloated because the amount of
lithium they need to put into the car to give it decent range is
enormous. How Tesla managed to make a decent-looking car despite all of
that metal is beyond me, but they definitely deserve credit.

As for the oil statement, the point is not that oil drilling is better
than mining for lithium, it's that both are polluting the environment.
Therefore, there is no escaping the idea of polluting the environment
by choosing an electric car over a gas one. Both end up doing
something that the filthy hippies won't like.

>> I want power in my vehicle as well as the ability to drive as far as I
>> want to and that is something electric cars will never allow for.
>
>An electric vehicle can have plenty of power. Just as in a gasoline fueled car,
>it depends on how much energy one wishes to expend. As far as distance, you can
>only go as far as the next gas station. Empty tank, or dead battery, both leave
>you walking.

An empty gas tank can quickly be filled at one of the many gas
stations around any country. The process itself takes about 2 to 5
minutes depending on how much gas you need and you're ready to go the
moment you've filled up. In the case of electric, even with fast
charging, you need a good 30 minutes to get to 80%. You'll likely say
that the driver can stop for a piss or whatever, but if the distance
he needs to travel is significant, he'll be pretty annoyed about
stopping for a long piss every two hours and that's only assuming that
there will be a good number of electric charging stations around for
him.

Doomsdrzej
Jan 5, 2018, 6:57:48 PM
The most popular theory is that Intel not only left the flaw in there
but put it there deliberately to facilitate spying by the NSA. There was
even anonymous testimony from an Intel employee claiming it was used for
that purpose and that the company had been aware of the issue for years.

Pabst Blue Ribbon
Jan 5, 2018, 7:19:09 PM
Jan-Erik Soderholm <jan-erik....@telia.com> wrote:
> Den 2018-01-06 kl. 00:01, skrev Pabst Blue Ribbon:
>
> [Why quote 100s of lines?]

My newsreader is not that great with editing.

>> I wonder if Intel will be sued because of that.
>
> https://www.theregister.co.uk/2018/01/05/intel_meltdown_cpu_flaw_sued/
>
> "Here come the lawyers! Intel slapped with three Meltdown bug lawsuits!"

Thank you. Just like I expected.



IanD
Jan 5, 2018, 10:51:28 PM
This would not surprise me

The NSA and the like use gag orders to stop anyone speaking out about such code adjustments/insertions.

I remember Linus was approached to insert backdoor code into the kernel. It only came to light because his father spoke of the approach, whereas Linus himself was not allowed to speak of it.

His response to the organisation was that any attempt to insert code into the base kernel would quickly be picked up by reviewers. It remains unknown whether the code was inserted, but the consensus is that they went away empty-handed.

I asked the question before but never got a response: does any of the OpenVMS code have additional code inserted into it...

JF Mezei
Jan 5, 2018, 10:59:06 PM
On 2018-01-05 17:41, DaveFroble wrote:

> Now, as for Alpha, yes, OoO and such, but, the question would be, does it allow
> "illegal" access to memory? If Alpha does not allow loading memory it should
> not into cache, then perhaps not a problem.


Meltdown also relies on the ability of an unprivileged process to flush
the CPU cache and then probe it (to see whatever the first process has
read from illegal memory).

Not sure why x86 allows an unprivileged process to manage the CPU cache.
If Alpha does not allow this, then there would be no means for the first
process to use the "covert channel" to send each byte to a cooperating
process.

DaveFroble
Jan 5, 2018, 11:06:41 PM
Well, the lawyers will try anything. Doesn't mean they have a case. Lots of
examples.

Make liquor, somebody gets hurt ....

Make guns, somebody gets shot ....

Make cars, somebody gets hurt ....

Quite frankly, I doubt the CPU makers ever imagined such an exploit.
Unless some lawyers can come up with a "smoking gun", I'd think Intel et
al. would have a great defense. They could also claim that it's the
software people who mixed user and kernel memory who created the
problem.

Nobody knows everything.

Johann 'Myrkraverk' Oskarsson
Jan 6, 2018, 1:18:13 AM
Craig A. Berry wrote:

> Alpha may not have any timer with precision sufficient to snoop the
> cache. This based on the fact that reducing timer precision is one of
> the mitigations in progress and that $GETTIM_PREC only came along in
> v8.3-1H1 and is Itanium only and even with that you only get time to
> the nearest 1ms, assuming I've understood this discussion correctly:

That is irrelevant, people are pulling off Spectre with timing threads,
see the pthread variant(s) here:

https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9e3d4bb6

--
Johann | email: invalid -> com | www.myrkraverk.com/blog/
I'm not from the Internet, I just work there. | twitter: @myrkraverk

mike
Jan 6, 2018, 2:48:07 AM
On 1/5/2018 8:06 PM, DaveFroble wrote:
> Pabst Blue Ribbon wrote:
>> Jan-Erik Soderholm <jan-erik....@telia.com> wrote:
>>> Den 2018-01-06 kl. 00:01, skrev Pabst Blue Ribbon:
>>>
>>> [Why quote 100s of lines?]
>>
>> My newsreader is not that great with editing.
>>
>>>> I wonder if Intel will be sued because of that.
>>> https://www.theregister.co.uk/2018/01/05/intel_meltdown_cpu_flaw_sued/
>>>
>>> "Here come the lawyers! Intel slapped with three Meltdown bug lawsuits!"
>>
>> Thank you. Just like I expected.
>
> Well, the lawyers will try anything. Doesn't mean they have a case.
> Lots of examples.
>
> Make liquor, somebody gets hurt ....
>
> Make guns, somebody gets shot ....
>
> Make cars, somebody gets hurt ....
>
> Quite frankly, I doubt the CPU makers ever imagined such an exploit.
> Unless some lawyers can come up with a "smoking gun", I'd think Intel
> et;al would have a great defense. They also could claim that it's the
> software people who mixed user and kernel memory who created the problem.
>
> Nobody knows everything.
>
And almost nobody here can effect a solution inside the CPU.
It's all just bitching about it.
I'd be very disappointed if Intel were successfully sued over someone
discovering a way to infiltrate their microcode.
Might as well sue Kwikset because someone figgered out how to pick the
40 year old lock on your front door and steal your stuff.
Shit happens, get over it and move on.

johnwa...@yahoo.co.uk
Jan 6, 2018, 6:40:54 AM
Alpha had architected high precision timers long before they
were fashionable. Have a look at e.g. an Alpha Architecture
Handbook or other freely available document and look for
info on e.g. the Processor Cycle Counter, and the RPCC
instruction and its close relative RSCC (Read Processor/System
Cycle Counter). You should find text similar to this (apologies
for formatting here):
"The PCC register consists of two 32-bit fields. The low-order 32 bits
(PCC<31:0>) are an unsigned, wrapping counter, PCC_CNT. The high-order
32 bits (PCC<63:32>), PCC_OFF, are operating system dependent in their
implementation.

PCC_CNT is the base clock register for measuring time intervals, and is
suitable for timing intervals on the order of nanoseconds.

PCC_CNT increments once per N CPU cycles, where N is an
implementation-specific integer in the range 1..16. The cycle counter
frequency is the number of times the processor cycle counter gets
incremented per second."


A Wizard answer (see below for extract) even referred to
RPCC in 1999:
http://h41379.www4.hpe.com/wizard/wiz_2734.html

Whether these cycle counter things were or are accessible in any
relevant way for the current nightmare is a different (and arguably not
hugely relevant) question.

Alpha didn't get speculative execution/OoO until EV6/21264.

Alpha architecturally understands the difference between
cacheable and non-cacheable items, as do (most of?) the
better processors and code generation tools of recent
decades, not just for Alpha (see e.g. various discussions
of what "volatile" might mean to a compiler and its users).

OoO brings engineering risks as well as apparent benefits.
To work well it needs to be architected well and
designed well, and properly tested.

I've seen scenarios where that didn't apply, leading to
Bad Things happening. In a "modern" IT environment it's
easy for people to ignore these Bad Things e.g. by blaming
the OS, or cheap hardware, etc, which may well be the
right analysis much of the time. But not necessarily
always.

In some environments (e.g. no OS involved) the
OS cannot be blamed, and that's where I've seen a
speculative/OoO design go wrong. That example didn't
even need memory management or cache - but they can
add to the fun.

x86 doesn't have a documented architecture and
behaviours or anything close to that. And it's
not known for running trustworthy OSes on
trustworthy hardware. Misbehaviour is expected and inevitable and
largely ignored.

AMD64 appears to have a documented architecture
and behaviours.

Alpha did/does have a documented architecture, and
implementations have documented behaviours (as did,
fwiw, various PDP11s and similar - this isn't rocket
science).

Architecture, documentation, and *understanding* isn't
a luxury if people really want stuff to behave properly.

Meanwhile, here's a bit of that answer from the Wizard,
just in case it falls off the HP website:

The Alpha Architecture requires that Alpha systems implement process and
system cycle counters, which can be useful in certain environments.
(rpcc, rscc) You will need to consult the Architecture manual for the
format and use of the counter.

An example of reading the process cycle counter (from C) follows:

#include <c_asm.h>
unsigned long cycles;

cycles = asm("rpcc %v0;"
             "sll %v0, 32, %t1;"
             "addq %v0, %t1, %v0;"
             "srl %v0, 32, %v0");
.................
Have a lot of fun.

Dirk Munk
Jan 6, 2018, 7:43:40 AM
Doomsdrzej wrote:
> On Fri, 5 Jan 2018 12:33:59 +0100, Jan-Erik Soderholm
> <jan-erik....@telia.com> wrote:
>
>> Den 2018-01-05 kl. 04:36, skrev Roger Blake:
>>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>>> Might I say that was an awesome post, sir.
>>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>>
>> No, it is a natural part of the atmosphere, but it is a balance.
>> It has to be in the right proportions. Too much (and in particular
>> if we continue to burn fossil fuels that add carbon that was
>> bound millions of years ago) and the climate will be hurt.
> You can't _hurt_ climate. The Earth always balances itself out and
> there are thousands of years of data showing this. Some periods are
> cold; some periods are warm. In the end, there is a balance regardless
> of what its living creatures do.
>
>>> Human caused "climate change/global warming" is junk science at
>>> its worst. Even Reid Bryson, the scientist who was the father of
>>> modern climate science, stated that it is "a bunch of hooey."
>>>
>> I could probably name the scientist that has the opposite view, but
>> the space in one posting would not be enough.
>>
>> And why pick one who has been dead for 10 years? The views on global
>> warming have changed over the years and a lot has happened in the
>> last decade.
> Please demonstrate how.
>
>>> As I said, I absolutely refuse to reduce my own carbon emissions and
>>> in fact continue to see ways to increase them.
>> OK, fine. You'll be sorry and your children will be hurt. But then, if
>> you could reduce your CO2 emissions, what would be the issue?
> Reducing CO2 emissions should be voluntary in the same way that
> companies having a $15 minimum wage should be voluntary. In the United
> States, some companies did so and as a result show that they can
> afford to pay people that well without there being any kind of
> consequences. In Ontario, for instance, the $15 minimum wage was
> forced and companies now have to cut back somehow to afford to pay
> people that well. The liberal approach to CO2 emissions involves
> forcing companies and the people to make significant sacrifices and
> the end result is that it will do damage to the economy and the
> standard of life in the _hope_ that we will somehow be able to slow
> the inevitable in a very insignificant way at a time when none of us
> will still be alive. The best governments _should_ hope for is to
> raise awareness about the potential problem and encourage people to
> make whatever changes they can which is not at all what they've been
> doing with schemes like the Paris Climate Accord.
>
>>> (Do you dumbass hippies
>>> really believe that your stoopid windmills and solar panels are capable
>>> of keeping people warm and alive in the deep freeze that so much of the
>>> U.S. is currently experiencing?)
>> That weather phenomenon is probably also caused by the disturbed climate
>> caused by the CO2 emissions. So in the case of the current US weather
>> issues, you could say that it is, in a way, self-inflicted.
>>
>> Anyway, you could probably start with more efficient cars, shutting down
>> all AC equipment and so on. This cold is just a temporarily storm and
>> has little to do with the overall climate issues. One can not use the
>> amount of snow on the back garden to judge about the climate at large.
> Just watch this:
>
> <https://hooktube.com/watch?v=NjlC02NsIt0>
Wonderful video. He would have us believe the sea level isn't rising.
The only problem is that the sea level is rising; that is measurable.

However, it is true that the earth has known warmer and colder periods,
even in the past 2000 years. We can tell this from descriptions of what
was going on at the time, but people didn't have thermometers to record
the temperatures.

Alan Browne
Jan 6, 2018, 10:27:56 AM
On 2018-01-05 16:00, DaveFroble wrote:
> Alan Browne wrote:
>> On 2018-01-05 09:15, DaveFroble wrote:
>>> Jan-Erik Soderholm wrote:
>>
>>>> Because the designers, for performance reasons, have mapped kernel
>>>> memory into the user process address space and rely on the OS to
>>>> check protection before any kernel memory (or code) is accessed.
>>>>
>>>> The issue now is that the hardware (the CPU) does these accesses
>>>> "under the hood", without control by the OS.
>>>>
>>>> If you map your kernel memory in another way that uses the hardware
>>>> protection facilities, you are (as I understand it) safe, at the
>>>> cost of worse performance when switching between user and kernel mode.
>>>>
>>>>
>>>
>>> As I wrote, someone dropped the ball on this one.
>>>
>>> Speculative execution is part of the HW, not software.  It appears
>>> the HW doesn't follow its own rules.  Or, perhaps I don't actually
>>> understand the problem?
>>
>> At least as well as I do.  These are very complex mechanisms and
>> complexity is usually where you're most likely to get problems.
>>
>> In this case the h/w implementation didn't reflect the design goal.
>>
>> This means intel had very poor design review and abysmal testing of
>> security features.
>>
>
> There seems a whole bunch of us "speculating" about things we probably
> don't know enough about.

I am very certain that they either did not design the testing correctly
or didn't test per the test plan correctly. Or, a worse scenario: they
saw it and swept it under the carpet.

>
> :-)
>
> It seems to me that before memory is fetched into cache, the CPU should
> be determining whether it should indeed be fetching that memory.  Yeah,

The CPU memory controller is (usually) the arbiter of whether a fetch is
"legal" in the privilege scheme - so if something is allowed to be
fetched, then it is fetched. So (hierarchically) the fetch goes to the
decoding pipeline(s) -and- is simultaneously copied to the cache. At
that point the MC has "allowed" the fetch. Writes to memory are also
written to cache. The issue seems to be that, after a fetch from
kernel-assigned memory, the cache makes some privileged data available
to lower-privilege tasks after the context switch. That is the gist.

> sounds simple, but I'm betting it isn't.

I recall when pipelining came to simple microprocessors and we were in
the lab swapping processors and measuring the performance gains and
salivating over not much... or IIRC a competitor to the 8088 came out
with some cycles saved on some instructions and we were doing the same
thing. Then pre-fetching came - then predictive decoding and so on ...

What they do these days in processors is mind-boggling: layers of
complexity before you even get close to privilege management.

To me, though, multicore processing is the best achievement. It
certainly makes OSes and apps very smooth in operation.

Alan Browne
Jan 6, 2018, 10:33:22 AM
On 2018-01-05 18:01, Pabst Blue Ribbon wrote:
> Alan Browne <bitb...@blackhole.com> wrote:

>> In this case the h/w implementation didn't reflect the design goal.
>>
>> This means intel had very poor design review and abysmal testing of
>> security features.
>
> I doubt it. Yes, it's assumption but I think Intel was aware and gave OK to
> flawed design because of performance/cost.

Possible, but I'd discount a deliberate pass. FDIV was very costly to
intel - this could be much more costly if the class action suits start
flying due to increased computing costs and so on.

Pabst Blue Ribbon
Jan 6, 2018, 10:51:02 AM
DaveFroble <da...@tsoft-inc.com> wrote:
> Pabst Blue Ribbon wrote:
>> Jan-Erik Soderholm <jan-erik....@telia.com> wrote:
>>> Den 2018-01-06 kl. 00:01, skrev Pabst Blue Ribbon:
>>>
>>> [Why quote 100s of lines?]
>>
>> My newsreader is not that great with editing.
>>
>>>> I wonder if Intel will be sued because of that.
>>> https://www.theregister.co.uk/2018/01/05/intel_meltdown_cpu_flaw_sued/
>>>
>>> "Here come the lawyers! Intel slapped with three Meltdown bug lawsuits!"
>>
>> Thank you. Just like I expected.
>
> Well, the lawyers will try anything. Doesn't mean they have a case. Lots of
> examples.
>
> Make liquor, somebody gets hurt ....
>
> Make guns, somebody gets shot ....
>
> Make cars, somebody gets hurt ....

Make cars that do not perform as advertised, or are not as safe as
expected, and you have to do a recall.


Pabst Blue Ribbon
Jan 6, 2018, 10:53:44 AM
Alan Browne <bitb...@blackhole.com> wrote:
> On 2018-01-05 18:01, Pabst Blue Ribbon wrote:
>> Alan Browne <bitb...@blackhole.com> wrote:
>
>>> In this case the h/w implementation didn't reflect the design goal.
>>>
>>> This means intel had very poor design review and abysmal testing of
>>> security features.
>>
>> I doubt it. Yes, it's assumption but I think Intel was aware and gave OK to
>> flawed design because of performance/cost.
>
> Possible, but I'd discount a deliberate pass. FDIV was very costly to
> intel - this could be much more costly if the class action suits start
> flying due to increased computing costs and so on.
>

"Could be much more costly" didn't stop Volkswagen.

Jan-Erik Soderholm
Jan 6, 2018, 11:06:56 AM
As I understand it, the CPU fetched the privileged data "under the hood"
even before the processor had decided that it was privileged data. By
the time the user process got its "slap on the hand", the tracks were
already in the cache.

There was never any "context switch" involved at all; it happened way
below such constructs. Everything was done from user level, "under the
radar" from the point of view of the OS (and, it also seems, of the
protection facilities in the CPU itself).

And it never made any privileged data directly available at all. It used
the privileged byte that was pre-fetched as an index into an array in
user memory, and could later identify which element of the user data had
been touched.

Clever...

Alan Browne
Jan 6, 2018, 11:09:30 AM
Easy toss. Doesn't prove anything wrt intel.

Alan Browne
Jan 6, 2018, 11:11:11 AM
I didn't mean an OS-level context switch but privilege-level switching.
Sorry for the ambiguity.

>
> And it never made any priviledge data avalable at all. It used the prived
> data (byte) that was pre-fetched as an index into an array in user memory
> and could later identify what element of the user data that was touched.
>
> Clever...

Interesting.

Johnny Billquist
Jan 6, 2018, 11:21:06 AM
On 2018-01-06 12:40, johnwa...@yahoo.co.uk wrote:
> Alpha did/does have a documented architecture, and
> implementations have documented behaviours (as did,
> fwiw, various PDP11s and similar - this isn't rocket
> science).

This is slightly offtopic, but anyway...
Actually, the PDP-11 did not have a documented architecture, and that
was recognized over time as one of the big problems of the PDP-11, and
why DEC did such a thorough job on the VAX with the architecture
reference manual.

For the PDP-11, different models do behave differently on some of the
odder things, and there is no defined "right" way.

(That said, had the PDP-11 ever had speculative execution and all that
fanciness, it would still be pretty safe, since the PDP-11 actually has
different page tables for different processor modes, and also has
execute-only page protection and cannot execute code on the stack or
other data areas, if properly set up. :-) )

Johnny

--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol

Scott Dorsey

Jan 6, 2018, 11:26:03 AM
to
DaveFroble <da...@tsoft-inc.com> wrote:
>
>Speculative execution is part of the HW, not software. It appears the HW
>doesn't follow it's own rules. Or, perhaps I don't actually understand the problem?

In the case of one of the two problems, that is definitely the case and it
is very obviously a hardware problem.

In the case of the other problem, it's even worse than that. It's not just
a hardware implementation problem but a conceptualization problem. They
didn't just implement a thing wrong in hardware, they implemented the wrong
thing. And because it's in hardware it's going to be hard to fix.

Back in the seventies, stuff like this was being discovered all the time and
there were processor ECOs and microcode patches to deal with them. I
remember running a Pr1me machine for months with some kind of grant card in
place of the I-cache card because of a consistency bug. But we thought all
that stuff had been dealt with and that memory protection was a solved problem.
Turns out, maybe not.

Actually, it's kind of cool if you think about it that way.
Maybe someday we'll get capability architectures and never have to worry about
this stuff again.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."

Johnny Billquist

Jan 6, 2018, 11:30:46 AM
to
Nothing like that either. You just speculatively read data that is in
YOUR address space, but protected from user access. The speculative
execution fetches the data, even though you are not allowed to read it.
No CS, not even any privilege level change happened.

You just make a speculative execution of an instruction that would fail
because you do not have the right to access the memory, but since it is
speculative, the trap never happens, as it later turns out that the
speculation was wrong. However, the data was still read, since the
speculative execution itself actually bypasses the protection. The
protection trap will only hit you once it is decided that the instruction
actually would be executed.
But unfortunately, the cache will still hold the read data that you were
not supposed to see.
And then they figured out a clever way of mining the contents of the cache.

One could argue that the cache should be invalidated in such a scenario,
but that is not happening either.

Scott Dorsey

Jan 6, 2018, 11:35:58 AM
to
Doomsdrzej <d...@do.om> wrote:
>
>I want power in my vehicle as well as the ability to drive as far as I
>want to and that is something electric cars will never allow for.

You might want to try the Tesla. Range isn't there yet, but it's got more
low end torque than most V-8s, and it handles decently. Range is actually
better than some of those V-8s too, though of course it takes longer to fill
which is a problem.

It's getting there. What is cool about the electric car from both a
performance and conservation standpoint is that you're not paying a huge
overhead running with low power most of the time. With the gasoline engine,
you have to size the engine for peak power that you're only using very
occasionally, and you pay an efficiency penalty at low power. With the
electric motor you use only what you need; you get crazy high peak power
and good efficiency in cruise at the same time.

I'm not ready to buy an electric car yet, but I wouldn't discount them if
I were you. The Tesla is way too expensive for what it is, but that will
change given time.

And I might add, given that this is a computer architecture group, that
what made these cars possible was high density battery technology, and
what made that possible was smart charging. The lithium chemistry was
known for many years but it takes a lot of charge control technology to
keep them from destroying themselves. It's not something you can do with
an 8051 anymore. The CPU makes it possible.

Scott Dorsey

Jan 6, 2018, 11:38:43 AM
to
chrisv <chr...@nospam.invalid> wrote:
>Doomsdrzej wrote:
>
>>The Earth always balances itself out and
>>there are thousands of years of data showing this. Some periods are
>>cold; some periods are warm. In the end, there is a balance regardless
>>of what its living creatures do.
>
>What the right-wing propagandists always "forget" is that, while there
>are compensating mechanisms that tend to bring the climate back to an
>equilibrium, the forces are gentle, and they work on the time-scale of
>millennia. They cannot cope with a violent change in conditions
>occurring over a short period of time, as in the last 100 years.

Oh, they certainly can. The problem is that the coping may involve things
that are very, very unpleasant to human beings. In the end everything will
come out okay with the climate, yes. The question is whether it will come
out okay with people or not.

Scott Dorsey

Jan 6, 2018, 11:44:31 AM
to
DaveFroble <da...@tsoft-inc.com> wrote:
>
>I wonder whether VAX would have these problems?

Maybe; the problem is that there were several rather different cache
management schemes involved. The 11/730 had hardly any cache at all and
was practically synchronous. The uVAX series had multilevel caches and
all kinds of cache consistency logic. There's an article in the
Digital Technical Journal from around 1988 or so describing the evolution
of the cache management.

Remember the VAX architecture also has more than two rings, which forces
a certain view of the situation.

nospam

Jan 6, 2018, 11:49:19 AM
to
In article <p2qttc$rmg$1...@panix2.panix.com>, Scott Dorsey
<klu...@panix.com> wrote:

> >
> >I want power in my vehicle as well as the ability to drive as far as I
> >want to and that is something electric cars will never allow for.
>
> You might want to try the Tesla. Range isn't there yet, but it's got more
> low end torque than most V-8s, and it handles decently. Range is actually
> better than some of those V-8s too, though of course it takes longer to fill
> which is a problem.

the vast majority of trips are under 50 miles, so range is not even
remotely a problem. plus, the batteries will be full every morning. you
can't refuel a gas powered vehicle while parked in your garage.

range is only a problem for long road trips, and in those cases, rent a
vehicle. eventually, that won't be a problem, as more charging stations
are built.

DaveFroble

Jan 6, 2018, 12:01:45 PM
to
Alan Browne wrote:
> On 2018-01-05 18:01, Pabst Blue Ribbon wrote:
>> Alan Browne <bitb...@blackhole.com> wrote:
>
>>> In this case the h/w implementation didn't reflect the design goal.
>>>
>>> This means intel had very poor design review and abysmal testing of
>>> security features.
>>
>> I doubt it. Yes, it's assumption but I think Intel was aware and gave
>> OK to
>> flawed design because of performance/cost.
>
> Possible, but I'd discount a deliberate pass. FDIV was very costly to
> intel - this could be much more costly if the class action suits start
> flying due to increased computing costs and so on.
>

If FDIV is what I remember as the problem in the Pentium CPU, then it wasn't so much
that the problem was costly, it was more that Intel's reaction to the problem
was costly. They tried to say the problem affected few users and didn't want to
replace defective CPUs. Well, people aren't that stupid. They saw through
Intel's attitude. That's what hurt Intel the most.

Before that, when an error in the VAX 750 FP unit was discovered, DEC came up
with a fix and implemented it. DEC field service got a bit busy. DEC's
attitude was much different than Intel's, and they were rewarded by customers
acknowledging their commitment to excellence.

Now I may be an aberration (actually I most likely am), but I still distrust
Intel, and use AMD whenever possible. Not that I feel AMD could stand up to the
integrity DEC showed in the past; few could.

DaveFroble

Jan 6, 2018, 12:13:32 PM
to
Unless you are Ford with the Pinto fuel tank problem.

The person who came up with the idea that it would be cheaper to pay off the
lawsuits than to do a recall and fix the problem is most likely alive and well in
too many companies. As soon as you put money ahead of people, there is a
problem. We hang murderers, why not those who are worse?

Bob F

Jan 6, 2018, 12:37:14 PM
to
On 1/5/2018 10:46 AM, Doomsdrzej wrote:
> On Fri, 5 Jan 2018 10:43:48 -0500, Alan Browne
> <bitb...@blackhole.com> wrote:
>
>> On 2018-01-05 10:13, Doomsdrzej wrote:
>>> On Fri, 05 Jan 2018 10:04:06 +0100, Peter Köhlmann
>>> <peter-k...@t-online.de> wrote:
>>>
>>>> Roger Blake wrote:
>>>>
>>>>> On 2018-01-04, chrisv <chr...@nospam.invalid> wrote:
>>>>>> Might I say that was an awesome post, sir.
>>>>>
>>>>> His post was sheer idiocy. CO2 is not a pollutant - period.
>>>>>
>>>>> Human caused "climate change/global warming" is junk science at
>>>>> its worst.
>>>>
>>>> Idiot
>>>
>>> Another thought-provoking and irrefutable post by Mainz's greatest
>>> export, Peter the Klöwn.
>>
>> A funny thing about your sort is you believe that putting down others
>> makes your point valid and that resonates in your little echo chambers
>> as some sort of truth.
>
> Says the hypocrite who just defended someone calling another poster an
> "idiot."
>
>> While there is wisdom in crowds...
>
> ... there is none in your posts.
>
> *plonk*
>

Like the rest of the science deniers, go ahead and stick your head back
in the sand.

Alan Browne

Jan 6, 2018, 12:37:38 PM
to
On 2018-01-06 12:01, DaveFroble wrote:
> Alan Browne wrote:
>> On 2018-01-05 18:01, Pabst Blue Ribbon wrote:
>>> Alan Browne <bitb...@blackhole.com> wrote:
>>
>>>> In this case the h/w implementation didn't reflect the design goal.
>>>>
>>>> This means intel had very poor design review and abysmal testing of
>>>> security features.
>>>
>>> I doubt it. Yes, it's assumption but I think Intel was aware and gave
>>> OK to
>>> flawed design because of performance/cost.
>>
>> Possible, but I'd discount a deliberate pass.  FDIV was very costly to
>> intel - this could be much more costly if the class action suits start
>> flying due to increased computing costs and so on.
>>
>
> If FDIV is what I remember as the problem in the 386 CPU, then it wasn't
> so much that the problem was costly, it was more that Intel's reaction
> to the problem was costly.  They tried to say the problem affected few
> users and didn't want to replace defective CPUs.  Well, people aren't
> that stupid.  They saw through Intel's attitude.  That's what hurt Intel
> the most.

Nearly $500M to resolve. That was real money back then. (Nearly $1B in
today's terms).

In recent news: appears intel's CEO recently sold half of his stock ...

Since this has all broken out, intel stock has fallen and AMD's has
risen ... no surprise.

Yep, it's not worth it.

> Before that, when an error in the VAX 750 FP unit was discovered, DEC
> came up with a fix and implemented it.  DEC field service got a bit
> busy.  DEC's attitude was much different than Intel's, and they were
> rewarded by customers acknowledging their commitment to excellence.
>
> Now I may be an aberration, (actually I most likely am), but I still
> distrust Intel, and use AMD whenever possible.  Not that I feel AMD
> could stand up to the integrity DEC showed int he past, few could.

I've had at least one AMD machine and don't recall any issues. But that
was so Windows ago... But I don't make buy decisions based on
components either. (Full disclosure: moving to Mac was facilitated by
them moving to intel so I could (and still do) run legacy code in
virtualization). Other than that I don't care. I assume Macs will
switch to ARM in the next few years.

Bill Gunshannon

Jan 6, 2018, 12:38:37 PM
to
Dave, you live in PA. PA does not execute anyone. They
punish them by making the taxpayers support them for the
rest of their lives, including things like TV, gym facilities,
college degrees, and full medical coverage. And many more things
the average taxpayer can't afford.

bill

JF Mezei

Jan 6, 2018, 12:58:12 PM
to
On 2018-01-06 10:27, Alan Browne wrote:

> The CPU memory controller is (usually) the arbiter of whether a fetch is
> "legal" in the privilege scheme - so if something is allowed to be
> fetched, then it is fetched.

Because the fetch is the longest operation, you get it started as soon as
you get the physical address. While the fetch is happening, you do the
other checks, such as access violation checks. If you did the latter
first, then it would slow down the computer, because you are delaying the
start of the longest portion of a memory access.

It seems to me the biggest vulnerability is that an unprivileged process
can access the CPU cache and bypass memory access checks, since that
isn't considered accessing memory.

More worrisome is that ARM would have a similar design flaw.

I am guessing this has to do with cache coherence when multiple
processes on different cores share memory: when process 2 gets a
signal that process 1 has deposited memory, it needs to ensure that the
cache serving its core has been refreshed. (Doing it in user mode
probably saves a lot of overhead of switching to kernel, doing it, and
switching back.)
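The fetch-before-check ordering described at the top of this post can be modelled in a few lines. This is a sketch, not how any real pipeline is written: the addresses, values, and the `load` function are hypothetical, and real hardware overlaps the two steps in pipeline stages rather than running them sequentially:

```python
# Toy model of the flaw: the data fetch starts at once (it is the slow
# part), the permission check completes later, and only the
# architectural result is squashed when the check fails. The cache
# fill caused by the fetch is never undone.

cache = {}                       # stands in for the CPU cache
memory = {0x1000: 0x2A}          # hypothetical physical memory contents
page_readable = {0x1000: False}  # permission bit for that page

def load(addr):
    # 1. Start the fetch immediately -- don't wait for the check.
    value = memory[addr]
    cache[addr] = value          # side effect: the line is now cached
    # 2. The permission check completes "later".
    if not page_readable[addr]:
        raise PermissionError("access violation")  # architectural trap
    return value

try:
    load(0x1000)
except PermissionError:
    pass                         # the program never sees the value...

print(0x1000 in cache)           # -> True: ...but the line stayed cached
```

Doing the check first would close the hole, at the cost of serializing the slowest part of the access behind it, which is exactly the trade-off described above.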


Stephen Hoffman

Jan 6, 2018, 1:03:56 PM
to
On 2018-01-06 11:40:51 +0000, johnwa...@yahoo.co.uk said:

> Meanwhile, here's a bit of that answer from the Wizard, just in case it
> falls off the HP website:

FWIW... Somebody thought of that possibility back around ~2005. The
complete Ask The Wizard archives are on the Freeware.


--
Pure Personal Opinion | HoffmanLabs LLC

Neil Rieck

Jan 6, 2018, 1:28:54 PM
to
I don't want to sound like an old fart, but the first computer I ever worked on (Interdata Model 70) did not have an Operating System. My first exposure to a true OS was RT-11 and RSX-11M in the late 1970s during classes at DEC "ed-services" in Kanata Ontario and Bedford Massachusetts.

Around that time, many of us hobbyists were playing with 6502 and 6800 micros at home. I recall someone asking the DEC instructor why micros appeared to be so much faster than minis. His paraphrased response went something like this:

DEC hardware and software was built assuming that every user (in a multi-user system) was doing something dangerous like learning MACRO programming which could crash the whole system. This meant that sufficient memory management hardware was built into the system (along with supporting software) so that a crash would only occur in user-space. Also, memory management was used to isolate the memory used by various processes for security reasons.

I moved from the 6502 to the MC68000, which also didn't have built-in memory-management hardware, although it was available with external chips (MC68851). Later chips, starting with the MC68030, had built-in memory management which rivaled features we saw in minis, but those were all CISC machines.

####

One of the latest exploits is called Spectre and (apparently) deals with speculative execution of code in RISC and other non-CISC systems. In effect, CPU caches are being used as a back-channel to snoop data left behind by other processes. So obviously the memory protection paradigm built into CISC systems was not fully extended to all subsystems of non-CISC platforms.

Perhaps some technical guru can comment as to how memory protection was implemented in caches on Alpha and Itanium.

Neil Rieck
Waterloo, Ontario, Canada.
http://neilrieck.net

Kerry Main

Jan 6, 2018, 1:40:58 PM
to comp.os.vms to email gateway
> -----Original Message-----
> From: Info-vax [mailto:info-vax...@rbnsn.com] On Behalf Of JF
> Mezei via Info-vax
> Sent: January 5, 2018 10:59 PM
> To: info...@rbnsn.com
> Cc: JF Mezei <jfmezei...@vaxination.ca>
> Subject: Re: [Info-vax] Intel junk...Kernel-memory-leaking Intel
> processor design flaw forces Linux, Windows redesign
>
> On 2018-01-05 17:41, DaveFroble wrote:
>
> > Now, as for Alpha, yes, OoO and such, but, the question would be,
> does it allow
> > "illegal" access to memory? If Alpha does not allow loading memory it
> should
> > not into cache, then perhaps not a problem.
>
>
> Meltdown also relies on the ability of an unprivileged process to flush
> the CPU cache and then read it (to see whatever the first process has
> read from illegal memory)..
>
>
> Not sure why x86 allows unprivileged process to manage the CPU cache.
> If
> Alpha does not allow this, then there would be no means for the first
> process to use the "covert channel" to send each byte to a cooperative
> process.
>

JF - welcome back ..

😊


Regards,

Kerry Main
Kerry dot main at starkgaming dot com





johnwa...@yahoo.co.uk

Jan 6, 2018, 2:45:51 PM
to
One could argue, and smart people have argued, that
side effects of speculative instructions should not
become visible until such time as the instruction in
question is confirmed as one which will be executed
(nb in cases like this, an instruction which is
initially executed is not always an instruction which
completes e.g. it may be interrupted because of an
exception or whatever, and therein lies another world
of fun where great engineering care is always needed
but is not always used).

A speculative fetch from main memory directly into a
(shadowed) CPU register, a reference which bypasses
cache because it's been told to (e.g. because the
reference in question is to a region declared
non-cacheable) can in general be discarded when the
instruction is discarded because the speculation
turned out to be wrong. And the affected shadowed
register never gets to be visible. And on a good
day there won't be any visible side effects.

A speculative fetch from main memory into a register,
one which does go into cache, has side effects which
cannot always be discarded when the speculation/prediction
turns out to be wrong. But hey, the unnecessary and
inappropriate prefetching means it's faster in the
performance tests, innit. What could possibly go wrong?
Nobody will ever notice even if anything does go wrong.

Well, apparently people not only noticed, they found a
way of making the manufacturers and the media notice.

Going back a decade or four, some readers may remember
devices whose state changed when the device was read
e.g. some Control and Status Registers would clear a
"data available" bit if a particular device register
was read.

That change of device state was a "side effect" which
would have been a Bad Thing if a speculative read had
got as far as a real-world device. But most "modern"
computers don't have that particular challenge with
CSRs, in part because memory has different (predictable)
behaviour; whatever you write into memory, you get
the same back. Otherwise it's not memory. OK there
might sometimes be side effects, and misunderstandings,
and that's where all this fun starts.


There's more to it than that, such as the difference
between a genuine RISC architecture (where load and
store are the usual ways of addressing memory) and
a non-RISC architecture (VAX, legacy x86) where an
incoming instruction may have multiple references to
memory locations within one particular instruction.
Other factors to consider include the different
possible ways of architecting and implementing on-chip
caches.

Is modern x86 a RISC machine? For spin purposes it is
both RISC and non-RISC. Software was x86-compatible,
performance is largely RISC-class, and who cares if
checks that are possible to do properly on RISC
can't realistically be done on modern x86.

Corrections and clarifications very welcome.

I quite like the Computerphile video referenced earlier
in this here thread. A bit less hype, a bit more detail,
than much of the coverage around at the moment. Respect
is due.

Inevitably, when one chip builder makes almost all the
chips in the volume market, then that chip builder's
design problem(s) will be liable to affect almost all
the products in their volume market.

Doomsdrzej

Jan 6, 2018, 2:49:09 PM
to
The biggest problem in even considering a Tesla is that I live in a
very cold climate which, since mid-December, has seen its temperature
go no higher than -25C. In such a climate, the already poor range of an
electric car is even worse and there are good reasons to believe that
it wouldn't even start. There's also the fact that the computers
within it, something which is essentially problematic for all cars,
tend to go crazy when the temperatures are too low so the car might
effectively become useless.

If I lived in a place like California or Texas, I'd be a lot more
willing to consider an electric car but not here.

nospam

Jan 6, 2018, 2:51:22 PM
to
In article <3t925dd7f93a72tcn...@4ax.com>, Doomsdrzej
<d...@do.om> wrote:

>
> The biggest problem in even considering a Tesla is that I live in a
> very cold climate which, since mid-December, has seen its temperature
> go no lower than -25c. In such a climate, the already poor range of an
> electric car is even worse and there are good reasons to believe that
> it wouldn't even start.

the batteries are heated in cold weather and the cars start just fine.

> There's also the fact that the computers
> within it, something which is essentially problematic for all cars,
> tend to go crazy when the temperatures are too low so the car might
> effectively become useless.

nonsense.

Doomsdrzej

Jan 6, 2018, 2:53:40 PM
to
I'm not a science denier, but I do deny results which come from a
hand-picked set of scientists. If 1,000 scientists are asked about
climate and 100 of them say that humans are causing global warming, is
it honest to grab those 100 and then claim that 100% of scientists
believe in global warming?

I don't believe that it is and yet that's what happened.

Doomsdrzej

Jan 6, 2018, 2:59:16 PM
to
On Sat, 06 Jan 2018 14:51:21 -0500, nospam <nos...@nospam.invalid>
wrote:

>In article <3t925dd7f93a72tcn...@4ax.com>, Doomsdrzej
><d...@do.om> wrote:
>
>>
>> The biggest problem in even considering a Tesla is that I live in a
>> very cold climate which, since mid-December, has seen its temperature
>> go no lower than -25c. In such a climate, the already poor range of an
>> electric car is even worse and there are good reasons to believe that
>> it wouldn't even start.
>
>the batteries are heated in cold weather and the cars start just fine.

Are they heated through the use of a block heater or is there some
other solution I'm not aware of?

>> There's also the fact that the computers
>> within it, something which is essentially problematic for all cars,
>> tend to go crazy when the temperatures are too low so the car might
>> effectively become useless.
>
>nonsense.

Do you live in a climate where -28c temperatures are normal? My
Infiniti started perfectly the other day at such a temperature but the
continued exposure to the freezing temperatures caused the computer to
go nuts and essentially all of the lights within the dashboard lit up
and the system disabled everything from the power steering to the 4WD.
Once things warmed up a few days later, all of the lights as well as
the annoying check engine light turned off. To say the least, I
wouldn't trust an electric car in such temperatures.

nospam

Jan 6, 2018, 3:07:19 PM
to
In article <nga25dh9b1h75o62g...@4ax.com>, Doomsdrzej
<d...@do.om> wrote:

> >> The biggest problem in even considering a Tesla is that I live in a
> >> very cold climate which, since mid-December, has seen its temperature
> >> go no lower than -25c. In such a climate, the already poor range of an
> >> electric car is even worse and there are good reasons to believe that
> >> it wouldn't even start.
> >
> >the batteries are heated in cold weather and the cars start just fine.
>
> Are they heated through the use of a block heater or is there some
> other solution I'm not aware of?

the batteries are heated and shortly before leaving, you can preheat
the cabin via a smartphone app.

> >> There's also the fact that the computers
> >> within it, something which is essentially problematic for all cars,
> >> tend to go crazy when the temperatures are too low so the car might
> >> effectively become useless.
> >
> >nonsense.
>
> Do you live in a climate where -28c temperatures are normal? My
> Infiniti started perfectly the other day at such a temperature but the
> continued exposure to the freezing temperatures caused the computer to
> go nuts and essentially all of the lights within the dashboard lit up
> and the system disabled everything from the power steering to the 4WD.
> Once things warmed up a few days later, all of the lights as well as
> the annoying check engine light turned off. To say the least, I
> wouldn't trust an electric car in such temperatures.

based on that, you shouldn't trust a *gas* powered vehicle in such
temperatures.

many gas powered vehicles have engine block heaters because they won't
start in extreme weather.

JF Mezei

Jan 6, 2018, 3:14:49 PM
to
On 2018-01-06 14:59, Doomsdrzej wrote:

>>the batteries are heated in cold weather and the cars start just fine.
>
> Are they heated through the use of a block heater or is there some
> other solution I'm not aware of?

Heating elements in the battery modules. When you "boot" the car in
cold weather initial autonomy may be bad, but as batteries warm up (and
the act of using power also warms up batteries) autonomy increases back
to normal.

(If the car remains plugged in overnight, I assume batteries are kept warm)

> continued exposure to the freezing temperatures caused the computer to
> go nuts and essentially all of the lights within the dashboard lit up
> and the system disabled everything from the power steering to the 4WD.

It could be because your conventional lead acid batteries were affected
by the cold.


nospam

Jan 6, 2018, 3:16:29 PM
to
In article <X6a4C.115624$CG.9...@fx34.iad>, JF Mezei
<jfmezei...@vaxination.ca> wrote:

> >>the batteries are heated in cold weather and the cars start just fine.
> >
> > Are they heated through the use of a block heater or is there some
> > other solution I'm not aware of?
>
> Heating elements in the battery modules.

they are

> When you "boot" the car in
> cold weather initial autonomy may be bad,

it isn't

> but as batteries warm up (and
> the act of using power also warms up batteries) autonomy increases back
> to normal.

range does.

> (If the car remains plugged in overnight, I assume batteries are kept warm)

they are.

Doomsdrzej

Jan 6, 2018, 3:40:53 PM
to
On Sat, 06 Jan 2018 15:07:18 -0500, nospam <nos...@nospam.invalid>
wrote:

>In article <nga25dh9b1h75o62g...@4ax.com>, Doomsdrzej
><d...@do.om> wrote:
>
>> >> The biggest problem in even considering a Tesla is that I live in a
>> >> very cold climate which, since mid-December, has seen its temperature
>> >> go no lower than -25c. In such a climate, the already poor range of an
>> >> electric car is even worse and there are good reasons to believe that
>> >> it wouldn't even start.
>> >
>> >the batteries are heated in cold weather and the cars start just fine.
>>
>> Are they heated through the use of a block heater or is there some
>> other solution I'm not aware of?
>
>the batteries are heated and shortly before leaving, you can preheat
>the cabin via a smartphone app.

_How_ are they heated?

Pre-heating the cabin essentially means that you've turned on the car
remotely. However, this is only possible if the car starts, which, of
course, often requires the batteries to be heated.

>> >> There's also the fact that the computers
>> >> within it, something which is essentially problematic for all cars,
>> >> tend to go crazy when the temperatures are too low so the car might
>> >> effectively become useless.
>> >
>> >nonsense.
>>
>> Do you live in a climate where -28c temperatures are normal? My
>> Infiniti started perfectly the other day at such a temperature but the
>> continued exposure to the freezing temperatures caused the computer to
>> go nuts and essentially all of the lights within the dashboard lit up
>> and the system disabled everything from the power steering to the 4WD.
>> Once things warmed up a few days later, all of the lights as well as
>> the annoying check engine light turned off. To say the least, I
>> wouldn't trust an electric car in such temperatures.
>
>based on that, you shouldn't trust a *gas* powered vehicle in such
>temperatures.
>
>many gas powered vehicles have engine block heaters because they won't
>start in extreme weather.

My previous car, a Jeep Patriot, did. However, neither my BMW 428i nor
the Infiniti QX30 have one. The Patriot, funny enough, was a total
piece of poop in _warm_ temperatures.

Doomsdrzej

Jan 6, 2018, 3:42:22 PM
to
On Sat, 6 Jan 2018 15:14:42 -0500, JF Mezei
<jfmezei...@vaxination.ca> wrote:

>On 2018-01-06 14:59, Doomsdrzej wrote:
>
>>>the batteries are heated in cold weather and the cars start just fine.
>>
>> Are they heated through the use of a block heater or is there some
>> other solution I'm not aware of?
>
>Heating elements in the battery modules. When you "boot" the car in
>cold weather initial autonomy may be bad, but as batteries warm up (and
>the act of using power also warms up batteries) autonomy increases back
>to normal.
>
>(If the car remains plugged in overnight, I assume batteries are kept warm)

An assumption is not enough for me, but I guess that if ever I am
considering a purchase, I can ask the salesperson.

>> continued exposure to the freezing temperatures caused the computer to
>> go nuts and essentially all of the lights within the dashboard lit up
>> and the system disabled everything from the power steering to the 4WD.
>
>It could be because your conventional lead acid batteries were affected
>by the cold.

And lithium batteries aren't?

nospam

Jan 6, 2018, 3:54:23 PM
to
In article <vuc25dlvcjip6pcv8...@4ax.com>, Doomsdrzej
<d...@do.om> wrote:

> >> >> The biggest problem in even considering a Tesla is that I live in a
> >> >> very cold climate which, since mid-December, has seen its temperature
> >> >> go no lower than -25c. In such a climate, the already poor range of an
> >> >> electric car is even worse and there are good reasons to believe that
> >> >> it wouldn't even start.
> >> >
> >> >the batteries are heated in cold weather and the cars start just fine.
> >>
> >> Are they heated through the use of a block heater or is there some
> >> other solution I'm not aware of?
> >
> >the batteries are heated and shortly before leaving, you can preheat
> >the cabin via a smartphone app.
>
> _How_ are they heated?

via a heater module on the batteries.

> Pre-heating the cabin essentially means that you've turned on the car
> remotely. However, this is only possible if the car starts which, of
> course, often requires the batteries to be heated.

for a gas powered vehicle, the engine must be running.

not true for an electric vehicle.

Scott Dorsey

Jan 6, 2018, 3:54:52 PM
to
Tim Streater <timst...@greenbee.net> wrote:
>In article <060120181149170429%nos...@nospam.invalid>, nospam
>>range is only a problem for long road trips, and in those cases, rent a
>>vehicle. eventually, that won't be a problem, as more charging stations
>>are built.
>
>I think some of you guys need to calculate the power rate needed to
>charge the higher range cars in any sort of reasonable time. You'll
>find it quite high. Then you have the problem of supplying that power
>at a safe voltage, and without such a high current needed that even Mr
>Muscles can't lift the charging cable, never mind plug it in.

It's not that bad, one or two cars at a time. These days it's not unusual
at all for houses to have 200A service, and putting a 100A 240V outlet in
the garage for a charger does not require a major retrofit.

Where it gets bad is when you start thinking about doing that in every house
in the country and the degree to which the grid needs to be enlarged in order
to deal with that load on a constant basis.
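The charging-power arithmetic Tim asked about is easy to run for yourself. The pack size and charge window below are illustrative round numbers, not any particular car's specs:

```python
# Back-of-envelope charging arithmetic: average power to refill a
# large pack overnight, and the current that implies at household
# voltage. 100 kWh and 8 h are illustrative round numbers.

pack_kwh = 100.0      # hypothetical battery capacity
hours = 8.0           # overnight charging window
volts = 240.0         # North American split-phase supply

power_kw = pack_kwh / hours          # average charging power
amps = power_kw * 1000.0 / volts     # current drawn at 240 V

print(f"{power_kw:.1f} kW, {amps:.0f} A")   # -> 12.5 kW, 52 A
```

So an overnight refill fits comfortably on the 100A 240V circuit mentioned above; the scary power numbers only appear when you want to charge in minutes rather than hours.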

It'll happen, and the money is there to make it happen because it's the same
money that is currently going into purchasing gasoline, but it's not going to
happen today and it's not going to happen tomorrow.

But you can go out right now and buy a BMW i3 at your dealer today,
get a charger installed on your existing service panel, and have a whole
lot of fun driving fast right now. It's not cheap, but that's how it goes.

Scott Dorsey

Jan 6, 2018, 3:57:56 PM
to
Doomsdrzej <d...@do.om> wrote:
>The biggest problem in even considering a Tesla is that I live in a
>very cold climate which, since mid-December, has seen its temperature
>go no lower than -25c. In such a climate, the already poor range of an
>electric car is even worse and there are good reasons to believe that
>it wouldn't even start. There's also the fact that the computers
>within it, something which is essentially problematic for all cars,
>tend to go crazy when the temperatures are too low so the car might
>effectively become useless.

Hell, I remember when gasoline engines were effectively useless at -25C....
they got a lot better.... they keep getting better.... so will the electric.