|APIs make programming easier. Right?||DrQ||4/18/12 10:57 PM|
What Is An API & What Are They Good For?
Larry Ellison (in court this week)
“Arguably, it's the most difficult thing we do at Oracle,” Ellison said when asked how hard it is to build an API.
Charlie Kindel "Don't build APIs"
|Re: APIs make programming easier. Right?||M. Edward (Ed) Borasky||4/18/12 11:42 PM|
|Re: APIs make programming easier. Right?||James||4/19/12 5:15 AM|
On 4/19/2012 1:42 AM, M. Edward (Ed) Borasky wrote:
> I think what Larry was trying to say is that the hardest thing they do is write APIs that allow them to maximize revenue with F.U. licensing schemes. :-)
>
> On Wed, Apr 18, 2012 at 10:57 PM, DrQ <red...@yahoo.com> wrote:
>> What Is An API & What Are They Good For? http://www.makeuseof.com/tag/api-good-technology-explained/
>> Larry Ellison (in court this week) "Arguably, it's the most difficult thing we do at Oracle," Ellison said when asked how hard it is to build an API. http://www.wired.com/wiredenterprise/2012/04/ellison-page/
>> Charlie Kindel "Don't build APIs" http://ceklog.kindel.com/2012/04/18/dont-build-apis/

An API is a fancy name for a library, right? ;-)
|Re: APIs make programming easier. Right?||Mario François Jauvin||4/19/12 4:08 AM|
> Data is the new coal - abundant, dirty and difficult to mine.
Information is like nuclear fusion - hard to create and easy to consume
Mario François Jauvin
|Re: APIs make programming easier. Right?||Mario François Jauvin||4/19/12 4:03 AM|
An API makes a component (library, web service) easier to use. That in itself helps us write programs that use those components.
Mario François Jauvin
On 2012-04-19, at 2:44 AM, "M. Edward (Ed) Borasky" <zn...@znmeb.net> wrote:
|Re: APIs make programming easier. Right?||rml...@gmail.com||4/19/12 2:42 PM|
Since we are concerned with performance, I will interject that there
are several considerations that should be documented in any API.
1. APIs should publish performance characteristics, especially if they
are intended for server side use:
a. throughput, service times, response times, and utilizations
under increasing load for hardware components [CPU, disk, NIC, etc.]
on defined hardware assets without stubbing out services.
b. document the tested commands
c. plot of number of threads (x-axis) and throughput, response time,
and utilization (y-axis)
d. plot of throughput (x-axis) and response time (y-axis)
2. Users of APIs need to know functional and logical aspects of the API:
a. intra or inter process/thread/host/etc. communications?
b. synchronous or asynchronous with respect to what?
c. limitations of the API (e.g., 32-bit-ness limits memory addressing)
e. side effects and responsibilities?
f. inputs, output on success/failure, faults and error conditions.
g. are operations expected to be idempotent/ACID? What is the
h. stateful or stateless?
i. what security issues exist? (e.g., password exchange, possible
sources of data loss, etc.)
j. what are the downstream dependencies and infrastructure?
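The plots asked for in items 1c and 1d can be roughed out before any measurement with the classic operational-analysis bounds. A minimal sketch (the function names and the 10 ms service demand are invented for illustration, not taken from any vendor's API documentation):

```python
# Hedged sketch (not from the thread): operational-analysis bounds behind
# the 1c/1d plots. For N threads, service demand D seconds at the
# bottleneck resource, and think time Z seconds:
#   X(N) <= min(N / (D + Z), 1 / D)    -- throughput upper bound
#   R(N) >= max(D, N * D - Z)          -- response-time lower bound

def throughput_bound(n_threads: int, demand: float, think_time: float = 0.0) -> float:
    """Upper bound on throughput (requests/sec) at N concurrent threads."""
    return min(n_threads / (demand + think_time), 1.0 / demand)

def response_bound(n_threads: int, demand: float, think_time: float = 0.0) -> float:
    """Lower bound on response time (sec) at N concurrent threads."""
    return max(demand, n_threads * demand - think_time)

# Illustrative 10 ms service demand; sweep thread counts as in plot 1c.
D = 0.010
for n in (1, 4, 16, 64):
    print(n, round(throughput_bound(n, D), 1), round(response_bound(n, D), 4))
```

The knee of these curves (at N* = (D + Z) / D) is exactly what the requested threads-vs-throughput plot would expose for a real API.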
> Larry Ellison (in court this week)
> “Arguably, it's the most difficult thing we do at Oracle,” Ellison said when asked how hard it is to build an API.
> Charlie Kindel "Don't build APIs"
|Re: APIs make programming easier. Right?||Puft||4/19/12 4:38 PM|
There should always be two sets of APIs. One high performing set for internal organizational use. A second, poorly performing set for external users and customers. These should also have undocumented features, poorly documented features, and inconsistent approaches in specifying the calls.
|Re: APIs make programming easier. Right?||Greg||4/20/12 5:02 AM|
I suspect your item 1 will be difficult to usefully generalise. It is very hard comparing, for example, a Sun multi-threaded machine with a randomly chosen Intel server, let alone comparing one random Intel server with another. How many combinations of models of i3, i5, i7 and Xeon processors are there with combinations of chipsets, motherboard designs and memory? Then we move on to the network and the disk. I have been torturing myself with disk characteristics lately: OS-level IO scheduler, SCSI interface provider, SAN fabric, storage array characteristics. When all that is specified, how much do we actually know?
|Re: APIs make programming easier. Right?||wasque||4/20/12 4:16 AM|
Great question... I read the pros and cons... had a LOL moment with the developer from Microsoft...
The PowerPoint slide deck from the Google principal engineer (all due respect)... says the reason APIs are so hard is that there are so many rules (based on this PowerPoint)...
but I guess they have to... especially when an API is for communicating from one language to another... Remote API...
Lately I have been impressed with Windows PowerShell... it is kind of an API to get the information I want... The tab completion and good IDE make a world of difference...
A good API is like a musical instrument... there is no limit to how fluent you can get...
PDQ is a great example of a great API... There are so many good working examples that it allows the API user to give feedback quickly to the API designer... My intuition tells me a fusion of a Lego-component (drag and drop) approach with the cross-cutting concerns of AOP... grounded in test-driven design... But most importantly... the ability to take a natural-language requirement and convert it into an API call that delivers... Wolfram Alpha comes to mind...
My top 5 APIs would be:
1. Loadrunner web API
3. QTP visual basic API
4. R module API
5. Weblogic JMX Api
Least liked API:
|Re: APIs make programming easier. Right?||rml...@gmail.com||4/20/12 1:21 PM|
There are well understood back-of-envelope calculations that one can
make to approximate workload performance from one architecture to
another. Depending on the methodology used (e.g., using published
benchmarks in the same technology domain as the API (e.g., Java) on
source and target architectures, operating systems, et cetera), a
knowledgeable performance analyst can create a conversion factor that
captures first-order effects (and probably more). The details
of the conversion factor can even include such things as the CPU:bus ratio,
but for most purposes this is unnecessary. There are well-known
factors that can be applied to changes in L1 and L2 cache sizes, etc.
The process of creating conversion factors is widely used among system
vendors to compare current performance and to project future performance. This
subject is probably beyond the scope of this discussion. Dr. Q can
probably include a chapter on the approaches, methods, and caveats
with worked examples in his next book, which we are all probably eager to read.
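The back-of-envelope conversion described above can be sketched as scaling a measured rate by the ratio of published benchmark scores for the source and target platforms. All numbers below are invented; as the post notes, real work would pick benchmarks in the API's own technology domain:

```python
# Hedged sketch: first-order cross-platform projection from published
# benchmark scores. Scores and the measured throughput are fabricated.

def conversion_factor(target_score: float, source_score: float) -> float:
    """First-order speed ratio between two platforms."""
    return target_score / source_score

def project_throughput(measured_tps: float,
                       target_score: float,
                       source_score: float) -> float:
    """Projected throughput on the target platform."""
    return measured_tps * conversion_factor(target_score, source_score)

# e.g., an invented Java-domain score: source box 40,000, target 65,000,
# with 1,200 tx/sec measured on the source box.
print(project_throughput(1200.0, 65_000, 40_000))
```

This captures first-order effects only; cache-size and CPU:bus corrections, as mentioned above, would refine the factor further.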
|API performance||Alex||4/20/12 1:24 PM|
Actually, item 1 is the smaller issue. At least theoretically it is possible to normalize, and most commercial vendors have some mechanisms for cross-platform performance analysis (what-if scenarios). How well they work is another story; it is indeed challenging. More speculation about the subject: http://applicationperformanceengineeringhub.com/how-do-we-measure-computer-resources/
A more challenging issue is that API performance depends on arguments. For example, take a call returning all people working in a department: it may return one person or the whole corporation. Or a call executing a SQL query (JDBC): it may be a simple query or multiple joins of huge tables. In the first case you may try to bind performance to the number of people returned (set targets for specific result sizes, though the question is which levels); in the second case you can't bind it to anything: whatever formal parameters of the SQL query you choose (number of tables joined, number of records in the tables, number of records returned), they don't quite define the query's performance. And this is a huge challenge if you are trying to define performance requirements.
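The first case (binding response time to result-set size) can be sketched as a least-squares fit of R(n) = a + b·n. The timings below are fabricated purely to show the shape of the approach, not measurements of any real call:

```python
# Hedged sketch: bind a call's response time to result-set size with a
# linear model R(n) = a + b*n, fitted by ordinary least squares.
# The "measurements" are invented for illustration.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

people_returned = [1, 10, 100, 1000]   # result-set sizes
resp_ms = [5.0, 6.0, 15.0, 105.0]      # fabricated response times
a, b = fit_linear(people_returned, resp_ms)
print("R(n) ~= %.2f + %.4f * n ms" % (a, b))
```

For the SQL case no such single predictor exists, which is exactly the point of the post: the same fitting exercise fails because the formal parameters don't determine the cost.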
|Re: APIs make programming easier. Right?||Greg||4/20/12 2:27 PM|
Thanks, but I have actually done it. My point was that there are an awful lot of factors to consider and an awful lot of effort to get a very rough conversion. For example, you refer to using Java benchmarks for scaling, but Java benchmarks are usually of limited use for comparison if the application's workload characteristics, heap usage, JDK version and GC strategy are not roughly comparable with the benchmark. This week I have seen a 75% reduction in the elapsed time of a process from switching between two apparently production-ready SCSI interfaces from different vendors (I knew there would be an effect, but the vendor's published material suggested that it would not be that big). It's not that it's impossible; it's that it takes a lot of effort to get a useful and reliable conversion.
|Re: APIs make programming easier. Right?||SteveJ||4/20/12 7:32 PM|
Recently I went looking for good references on CPU cache-sizing and
Do you have references, or more usefully, search terms?
My intuition is, with an increase of CPU/DRAM latency ratio of 'n':
My thought is that as the latency differential between CPU/DRAM
Thanks in advance.
rml...@gmail.com wrote on 21/04/12 6:21 AM:
|Re: APIs make programming easier. Right?||Carlo Kopp||4/21/12 1:49 AM|
There are standard formulas for cache performance, but you need specific knowledge of the L1 and L2 architecture to use them, i.e., split vs. common, size, write-through vs. write-back, etc. This is not always disclosed by manufacturers in its entirety. Moreover, in a multicore CPU you usually have consistency mechanisms in operation to resolve potential inconsistency, and these may also impact cache performance.
Most CPUs lacked a cache-trace mechanism, so you needed a CPU-specific emulator which you ran with the application of interest; it logged the cache behaviour for that application, so that you could look at cache occupancy and perform optimisations.
Usually you will find that increasing cache size beyond the typical loop span in an application yields no further gains, and much the same happens with increasing set associativity beyond 4X.
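To first order, the "standard formulas" mentioned here reduce to the average memory access time (AMAT) recurrence, AMAT = hit_time + miss_rate × miss_penalty, applied per cache level. A sketch with illustrative latencies and miss rates (not measured on any real CPU):

```python
# Hedged sketch: AMAT recurrence applied per cache level. All cycle
# counts and miss rates below are made up for illustration.

def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time for one cache level (in cycles)."""
    return hit_time + miss_rate * miss_penalty

# Two-level example: L1 hits in 2 cycles, L2 in 12, DRAM costs 200.
l2_amat = amat(12.0, 0.20, 200.0)   # 20% of L1 misses also miss in L2
l1_amat = amat(2.0, 0.05, l2_amat)  # 5% L1 miss rate
print("effective access time: %.1f cycles" % l1_amat)
```

The catch, as the post says, is that the miss rates themselves depend on the application's locality and on undisclosed details of the cache architecture, which is what the trace runs below were for.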
When I taught undergraduate machine architecture, before the courses got dumbed down, we used to have the students run lab simulations of cache behaviour to determine the best choices. This involved trace runs captured for a specific application.
Without intimate knowledge of the behaviour (spatial/temporal locality) of the target application and the cache architecture itself this can be an intractable problem.
PS This is a lab exercise I designed in 2001-2002 - http://www.csse.monash.edu.au/~davida/teaching/cse2324/Pracs/prac5.html
|Re: APIs make programming easier. Right?||SteveJ||4/21/12 8:50 AM|
Dr Carlo Kopp, PEng wrote on 21/04/12 6:49 PM:
Thanks very much. I think that says what I already knew...
|Re: APIs make programming easier. Right?||Darryl Gove||4/21/12 11:28 AM|
You can probably extract the cache size data you need from wikipedia.
For x86 you'll probably find the trend you're looking for - early x86
If you look at server processors (SPARC, Power etc.) I think you'll find
I suspect that the limitation is more to do with process technology than
|Re: APIs make programming easier. Right?||Darryl Gove||4/21/12 11:38 AM|
On 4/21/2012 1:49 AM, Dr Carlo Kopp, PEng wrote:
This is basically it.
If you take an HPC (loopy) application, then you iterate over a block of
If you have a random access application, then you still have a block of
You get a performance benefit if you are cache resident. What's more
A few years back I did some work looking at the working set size of the
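The working-set experiment can be sketched by timing random accesses over buffers of increasing size and looking for the knee where the working set falls out of cache. This is only a rough illustration; interpreter overheads swamp real cache effects in Python, and in practice you would write it in C:

```python
# Very rough hedged sketch of a working-set/cache-residency probe:
# pointer-chase a random permutation of increasing size and report the
# cost per access. The knee (if visible) marks loss of cache residency.
import random
import time

def chase(size: int, steps: int = 200_000) -> float:
    """Seconds per access when pointer-chasing a random permutation."""
    idx = list(range(size))
    random.shuffle(idx)              # random cycle(s) through the buffer
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = idx[i]                   # each step is a dependent load
    return (time.perf_counter() - t0) / steps

for n in (1 << 10, 1 << 16, 1 << 20):
    print("%8d elements: %.1f ns/access" % (n, chase(n) * 1e9))
```

The dependent-load chain matters: independent accesses would let the hardware overlap misses and hide the latency being measured.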
|Re: APIs make programming easier. Right?||SteveJ||4/21/12 4:57 PM|
Darryl Gove wrote on 22/04/12 4:38 AM:
My interest in the scaling of cache has to do with the upper bound of
The maximum L3 cache size I've seen is ~1B transistors, around 24Mb.
At 64-200Mb, is that large enough to be treated as a Virtual Memory
So there'd be two levels of working memory (fast on-chip, slow RAM) and