An API is a fancy name for a library, right? ;-)
--
Twitter: http://twitter.com/znmeb
Computational Journalism Server http://j.mp/compjournoserver
Data is the new coal - abundant, dirty and difficult to mine.
On Wed, Apr 18, 2012 at 10:57 PM, DrQ <red...@yahoo.com> wrote:
What Is An API & What Are They Good For? http://www.makeuseof.com/tag/api-good-technology-explained/

Larry Ellison (in court this week): "Arguably, it's the most difficult thing we do at Oracle," Ellison said when asked how hard it is to build an API. http://www.wired.com/wiredenterprise/2012/04/ellison-page/

Charlie Kindel: "Don't build APIs" http://ceklog.kindel.com/2012/04/18/dont-build-apis/
Information is like nuclear fusion - hard to create and easy to consume

Mario François Jauvin
MFJ Associates, (613) 686-5130, option 1
Sent from my iPhone

On 2012-04-19, at 2:44 AM, "M. Edward (Ed) Borasky" <zn...@znmeb.net> wrote:

> An API is a fancy name for a library, right? ;-)
Bob,

Recently I went looking for good references on CPU cache-sizing and
didn't find anything I could really use [in the context of 3 decades of
a ~20%/yr widening of the gap between CPU cycle time and DRAM latency].

Do you have references, or more usefully, search terms?

My intuition is, with an increase of the CPU/DRAM latency ratio by a
factor 'n':
- to maintain the same "cache hit-ratio", last-level cache has to scale
by 'n', and
- to maintain the same effective CPU performance (% of all-in-cache),
the cache hit-ratio needs to increase, but I failed to identify that
formula [probably staring me in the face :-( - see the sketch below].
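The formula may just be the standard average memory access time (AMAT)
identity, AMAT = h*c + (1-h)*m, solved for the new hit ratio. A minimal
Python sketch; the latencies are illustrative assumptions, not measurements:

# Hit ratio needed to keep AMAT constant when DRAM latency grows
# by a factor n (cache latency unchanged). h = hit ratio,
# c = cache latency, m = DRAM latency, both in CPU cycles.
def required_hit_ratio(h, c, m, n):
    t = h * c + (1 - h) * m            # original AMAT
    return (t - n * m) / (c - n * m)   # solves h2*c + (1 - h2)*n*m = t

# Example (made-up numbers): 95% hits, 4-cycle cache, 100-cycle DRAM,
# CPU/DRAM latency ratio doubling.
print(required_hit_ratio(0.95, 4, 100, 2))   # -> ~0.976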
My thought is that as the latency differential between CPU and DRAM
increases, the proportion of on-chip area dedicated to cache for CPUs
is growing super-linearly... But without any formulas, that's a
very weak statement.
Thanks in advance.
steve jenkin
rml...@gmail.com wrote on 21/04/12 6:21 AM:
> There are well known
> factors that can be applied to changes in L1 and L2 cache sizes, etc
--
Steve Jenkin, Info Tech, Systems and Design Specialist.
0412 786 915 (+61 412 786 915)
PO Box 48, Kippax ACT 2615, AUSTRALIA
Without intimate knowledge of the behaviour (spatial/temporal locality) of the target application and the cache architecture itself, this can be an intractable problem.

Dr Carlo Kopp, Associate Fellow AIAA, Senior Member IEEE, PEng
Computer Scientist
Email: Carlo...@monash.edu W3: http://www.csse.monash.edu.au/~carlo
Clayton School of Information Technology, Faculty of Information Technology
Monash University, Clayton, 3800, AUSTRALIA
For x86 you'll probably find the trend you're looking for - early x86
processors had little cache; more recent ones have much larger caches. You
also need to look at cache per thread or per core.

If you look at server processors (SPARC, Power, etc.) I think you'll find
the trend is less clear. Server processors have had significant cache
for longer, so I don't think you'll find the same growth.

I suspect that the limitation is more to do with process technology than
any equation that governs the ideal amount of cache. I would imagine
that the argument goes: "We need to fit two cores onto this die, and
whatever is left we'll turn into cache."
Regards,
Darryl.
On 4/20/2012 7:32 PM, steve jenkin wrote:
> Recently I went looking for good references on CPU cache-sizing and
> didn't find anything I could really use [snip - full question quoted above]
This is basically it.
If you take an HPC (loopy) application, then you iterate over a block of
data, using each item once per pass. If the block fits in cache you get a
nice performance benefit; if it doesn't, then you stream from memory.
If you have a random access application, then you still have a block of
data, and you get the best performance if the cache is large enough to hold
the entire block. However, since it's random access, you get a
curve as the cache size increases towards the size of the working set.
You get a performance benefit if you are cache resident. What's more
"concerning" is that, at some point, for some (typically HPC)
applications, a slight increase in the size of the workload can cause it
to no longer be cache resident, and performance falls off a cliff.
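A toy model of those two behaviours - a sketch only, with made-up latencies
and sizes chosen purely to show the shape of the curve and the cliff:

CACHE_LAT, DRAM_LAT = 4, 200   # cycles; illustrative values only

def avg_latency_random(cache_size, working_set):
    # Random access: hit ratio is roughly the fraction of the
    # working set that fits in cache, so latency degrades smoothly.
    h = min(1.0, cache_size / working_set)
    return h * CACHE_LAT + (1 - h) * DRAM_LAT

def avg_latency_loopy(cache_size, working_set):
    # Loopy/HPC reuse: either the block fits and later passes hit,
    # or it doesn't and you stream from memory - a step, not a curve.
    return CACHE_LAT if working_set <= cache_size else DRAM_LAT

# Grow the working set past a 32-unit cache to see the cliff.
for ws in (16, 32, 33, 64, 128):
    print(ws, round(avg_latency_random(32, ws)), avg_latency_loopy(32, ws))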
A few years back I did some work looking at the working-set size of the
cpu2006 benchmarks. This was not about cache sizing, more about figuring
out how much memory each code touched.
http://www.spec.org/cpu2006/publications/SIGARCH-2007-03/05_cpu2006_wss.pdf
Regards,
Darryl.
My interest in the scaling of cache has to do with the upper bound of
current silicon CMOS technology.
It could be as low as 4-10 times the current size.
The largest L3 cache I've seen is ~1B transistors, around 24 MB.
At 64-200 MB, is that large enough to be treated as a virtual memory
system and managed as such, with working sets?
So there'd be two levels of working memory (fast on-chip, slow RAM) and
"backing store" on Flash/SCM/disk.
SCM = Storage Class Memory - things like spin-torque memory.
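As a rough sanity check on the ~1B transistors / ~24 MB pairing, assuming
a 6-transistor SRAM cell and ignoring tag, ECC and control overhead (both
assumptions), the arithmetic lands in the same ballpark:

def sram_data_megabytes(transistors, transistors_per_bit=6):
    # 6T SRAM cell -> one bit per six transistors; 8 bits per byte.
    return transistors / transistors_per_bit / 8 / 2**20

print(sram_data_megabytes(1e9))   # -> ~19.9 MB of raw data cells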