

"Started with one. Now there's seven, with an eighth on assignment with my boyfriend in England (he was always posing mine and I could tell he needed one of his own, though he was reluctant to admit it at first - heehee!). It's waihey 's fault. She started it. Then I in turn infected ShellyS, and a few others."
These little guys are just way too fun and deserved a set of their own. They've got so much personality and I think everyone who loves toys, Star Wars fan or not, NEEDS one of their own.
The short text above comes from a Flickr gallery set.
Check out the Flickr gallery set
Source: ap303
"Voir en ligne : Storm Troopin'
See online: Pascal Nègre explains the graduated response and P2P
Social rating sites are invaluable in our era of information overload. And in applying social ratings to media coverage of the 2008 presidential campaign, a partnership between nonprofit social news site NewsTrust and The Huffington Post’s OffTheBus project is taking an innovative approach: ‘news hunts.’ Instead of bookmarking news articles with del.icio.us or voting on them with Reddit, news hunters collaboratively evaluate hundreds of news articles about a specific topic against journalistic criteria, including fairness, evidence, sourcing, and context. Hunts last for one week, findings are published on both NewsTrust’s and OffTheBus’s websites, and then the smart mob of news hunters moves on to another topic. The result is a synchronized effort to identify the best news coverage, as well as to help readers learn more about each candidate and make informed decisions. This week’s hunt is dedicated to evaluating the quality of news coverage about John McCain. Visit NewsTrust’s welcome page to join in.
See online: news hunting for quality journalism
We mentioned Entriq a few weeks back when it acquired DayPort in order to improve the technology it uses to deploy digital media across networks and devices. Today, the company announces a partnership with Inergize Digital Media, under which it will power Inergize’s content management system to syndicate digital content across networks and devices as well.
As Inergize works with several clients in the traditional media sector, the need for these clients to have a more robust way of syndicating content is quite relevant in the face of changing media consumption behavior. Entriq is all about providing content to consumers, wherever the consumers are. That means television, the web, radio, mobile phones, etc.
What’s interesting is the inability of some in the traditional media sector to be nimble enough to deploy such broad redistribution of content in a targeted and efficient manner. This is especially important as mobile video consumption begins to increase, particularly for localized content tied to geographic location. Inergize has become the third-party solution for getting content in the right places at the right time, working with broadcasters in the news industry, amongst other areas.
There are a few ways in which the Entriq-Inergize system differs from others, including an integrated solution for task management, media conversion, real-time media delivery, and optimization of content based on demographic data. The wave of the future, or a fleeting stage of transition that will soon be overrun by the traditional media sector? Time will tell, but the end result is clear: better content delivery options for end users.
See online: Inergize Looks to Entriq to Help Syndicate Traditional Broadcast Video
See online: Sony offers an advance preview of Doré's album
See online: YouTube adds annotations
In the Vuitton, Eurochallenges, and Bourse des Vols cases, Google was found liable for trademark infringement for making use of those marks through its AdWords system. Google then lodged an appeal with the Cour de Cassation, seeking to have the rulings of the Paris and Versailles Courts of Appeal against it overturned. In its decisions of May 20, 2008, however, the Cour de Cassation did not take a position (source: Legalis.net, article of June 3, 2008).
In a context where Google's fortunes with its AdWords system differ from one country to another (see the decision of the UK High Court, source: Marks & Clerk), the Cour de Cassation has referred the matter to the Court of Justice of the European Communities (ECJ) so that it can determine, in light of the Community directives in force and transposed into French law, whether:
- reserving a keyword that reproduces a trademark in order to display an ad for identical or similar goods constitutes infringement in the absence of the trademark owner's authorization, and a fortiori an infringement of a trademark with a reputation;
- the paid-referencing provider itself makes use of the registered trademark, and thus commits acts of infringement without the owner's authorization, notably through its keyword-suggestion tools;
- and, should the referencing provider's use not infringe the trademark right, whether that provider can be considered a hosting provider, whose liability is not engaged absent notification of unlawful use by the rights holder.
The ECJ's ruling will thus come in a context where economic operators cannot determine with certainty the scope of protection of their intellectual property rights, or the extent of their liability as the case may be. It is therefore hoped that the ECJ's decision will bring greater legal certainty on these questions within the European Union.
See online: Sponsored links and trademark infringement: the ECJ asked to rule
See online: Nouvelle Star: who am I voting for tonight?
Yesterday, as expected, Facebook revealed the code behind their F8 platform, a little over a year after its launch, offering it under the Common Public Attribution License.
I can’t help but notice the glaring addition of Section 15: Network Use and Exhibits A and B to the CPAL license. But I’ll dive into those issues in a moment.
For now it is worth reviewing Facebook’s release in the context of the OSI’s definition of open source; of particular interest are the first three sections: Free Redistribution, Source Code, and Derived Works. Arguably Facebook’s use of the CPAL so far fits the OSI’s definition. It’s when we get to the ninth attribute (License Must Not Restrict Other Software) where it becomes less clear whether Facebook is actually offering “open source” code, or is simply diluting the term for its own gain, given the attribution requirement imposed in Exhibit B:
Each time an Executable, Source Code or Larger Work is launched or initially run (including over a network), a display of the Attribution Information must occur on the graphic user interface employed by the end user to access such Covered Code (which may include a splash screen).
In other words, any derivative work cleft from the rib of Facebook must visibly bear the mark of the “Initial Developer”, namely, Facebook, Inc., and include the following:
Attribution Copyright Notice: Copyright © 2006-2008 Facebook, Inc.
Attribution Phrase (not exceeding 10 words): Based on Facebook Open Platform
Attribution URL: http://developers.facebook.com/fbopen
Graphic Image as provided in the Covered Code: http://developers.facebook.com/fbopen/image/logo.png
Most curious of all is how Facebook addressed a long-held concern of Tim O’Reilly that open source licenses are obsolete in the era of network computing and Web 2.0 (emphasis original):
…it’s clear to me at least that the open source activist community needs to come to grips with the change in the way a great deal of software is deployed today.
And that, after all, was my message: not that open source licenses are unnecessary, but that because their conditions are all triggered by the act of software distribution, they fail to apply to many of the most important types of software today, namely Web 2.0 applications and other forms of software as a service.
And in the Facebook announcement, Ami Vora states:
The CPAL is community-friendly and reflects how software works today by recognizing web services as a major way of distributing software.
Thus Facebook neatly skirts this previous limitation in most open source licenses by amending Section 15 to the CPAL, explicitly covering “Network Use”:
The term ‘External Deployment’ means the use, distribution, or communication of the Original Code or Modifications in any way such that the Original Code or Modifications may be used by anyone other than You, whether those works are distributed or communicated to those persons or made available as an application intended for use over a network. As an express condition for the grants of license hereunder, You must treat any External Deployment by You of the Original Code or Modifications as a distribution under section 3.1 and make Source Code available under Section 3.2.
I read this as referring to network deployments of the Facebook platform on other servers (or made available as a web service); it forces both the release of code modifications that hit the public wire and the display of the “Attribution Information” noted above.
. . .
So okay, first of all, we’re not really dealing with the true historic definition of open source here, but we can mince words later. The code is available, is free to be tinkered with, reviewed, built on top of, redistributed (with that attribution restriction) and there’s even a mechanism for providing feedback and logging bugs. Best of all, if you submit a patch that is accepted, they’ll send you a Facebook T-shirt! (Wha-how! Where do I sign up?!)
Not ironically, Facebook’s approach with fbOpen smells an awful lot like Microsoft’s Shared Source Initiative (some background). Consider the purpose of one of Microsoft’s three Shared Source licenses, the so-called “Reference License”:
The Microsoft Reference License is a reference-only license that allows licensees to view source code in order to gain a deeper understanding of the inner workings of a given technology. It does not allow for modification or redistribution. Microsoft uses this license primarily for technologies such as its development libraries.
Now compare that with the language of Facebook’s announcement:
The goal of this release is to help you as developers better understand Facebook Platform as a whole and more easily build applications, whether it’s by running your own test servers, building tools, or optimizing your applications on this technology. We’ve built in extensibility points, so you can add functionality to Facebook Open Platform like your own tags and API methods.
While it’s certainly conceivable that there may be intrepid entrepreneurs who decide to extend the platform and release their own implementations (which, arguably, would require a considerable amount of effort and infrastructure to duplicate the still-proprietary innards of Facebook proper — remember that the fbOpen platform IS NOT Facebook), they’d still need to attach the Facebook brand to their derivative work and open source their modifications, under a CPAL-compatible license (read: not GPL).
In spite of all this, whether Facebook is really offering a “true” open source product or not is not really the important thing. I’m raising these issues simply to put this move into a broader context, highlighting some important decision points where Facebook zagged where others might have zigged, based on its own priorities and aspirations for the move. Put simply: Facebook’s approach to open source is nothing like Google’s, and it’s critical that people considering building on either the fbOpen platform or OpenSocial do themselves a favor and familiarize themselves with the many essential differences.
Furthermore, in light of my recent posts, it occurs to me that the nature of open source is changing (or being changed) by the accelerating move to cloud computing architectures, where the source code is no longer necessarily a strategic asset and durable, ongoing access to data is the primary concern (harkening back to Tim O’Reilly’s frequent “Data is the Intel Inside” quip), and that Facebook is the first of a new class of enterprises growing up after open source.
I hope to expand on this line of thinking, but I’m starting to wonder — with regard to open source becoming essentially passé nowadays — did we win? Are we on top? Hurray? Or did we bet on the wrong horse? Or did the goalposts just move on us (again)? Or is this just the next stage in an ongoing, ever-volatile struggle to balance the needs of business models that tend toward centralization against those more free-form, freedom-seeking and expanding models in which information and knowledge must diffuse, seeking out growth and new hosts in order to keep becoming more valuable? Again, given Tim’s contention that Web 2.0 is at least partly about harnessing collective intelligence, and that data sources that grow richer as more people use them are a facet of the landscape, what does openness mean now? What barriers do we need to dismantle next? If it’s no longer the proprietary nature of software code, then is it time we began, in earnest, to scale the walls of the proprietary data hoarders and collectors and take back (or re-federate) what might rightfully be ours, or what we should at least be given permanent access to? Hmm?
"Voir en ligne : Parsing the “open” in Facebook’s “fbOpen” platform
Summize.com provides the capability to search Twitter conversations. Summize now offers the Summize Twitter Search API (profile), providing programmable Twitter searches. You can search for:
The returned tweets can be limited by language, status ID, and geocoding. The geocoding operation is interesting: you can search only for tweets that originated within a certain radius of a given latitude and longitude (based on the user’s profile location).
See our Summize Twitter Search overview for details on the API. The API utilizes a standard REST protocol. You can call the API and receive the results using Atom and JSON data formats.
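To make the mechanics concrete, here is a minimal sketch of calling the search API from Python. It is an illustration only: it assumes the historical search.twitter.com JSON endpoint, the q/lang/geocode parameters, and the results/from_user/text response fields of the Summize-era service, none of which are spelled out above.
    # Minimal sketch: query the Summize/Twitter search API over plain REST.
    # Assumptions: the historical search.twitter.com JSON endpoint, the
    # q/lang/geocode parameters, and the results/from_user/text response
    # fields; adjust to whatever the current documentation specifies.
    import json
    import urllib.parse
    import urllib.request

    def search_tweets(query, lang=None, lat=None, lon=None, radius_km=None):
        params = {"q": query}
        if lang:
            params["lang"] = lang
        if None not in (lat, lon, radius_km):
            # Only return tweets whose (profile) location falls within the
            # given radius of this latitude/longitude.
            params["geocode"] = f"{lat},{lon},{radius_km}km"
        url = "http://search.twitter.com/search.json?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["results"]

    # Example: recent tweets mentioning "firefox" within 25 km of Paris.
    # for tweet in search_tweets("firefox", lat=48.86, lon=2.35, radius_km=25):
    #     print(tweet["from_user"], ":", tweet["text"])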
Twizon, a Mashup of the Day this week, is a new mashup that combines the Summize Twitter Search API with the Amazon eCommerce API to provide recent tweets about Amazon products. Visit Twizon.com to find out what Amazon products people are tweeting about.
Peter Laird has written about the potential benefits for companies that mine Twitter for candid thoughts and user opinions about company products. The Summize Twitter API is the perfect tool for automating these types of searches.
See online: Twitter Search via the Summize API
The Mozilla Foundation wants to launch the third version of its open-source browser with a bang. Threatened by a resurgent Microsoft, which promises to stop the bleeding with Internet Explorer 8, it wants to set a world record by making Firefox 3 the most downloaded piece of software in 24 hours.
See online: Mozilla aims for a world record with Firefox 3
The LA Times and others are reporting that EA has acquired Shawn Fanning’s social-network-gaming startup Rupture for around $30 million. We reported this deal a month ago.
The first sentence of the LA Times story: “Shawn Fanning…has finally earned some money.”
The title of our post a month ago: “Shawn Fanning Finally Gets A Real Payday…”
They did add a link to the story giving us some credit for breaking it, albeit with a statement suggesting we pulled the trigger too soon: “When the widely read blog TechCrunch wrote two weeks ago that gaming giant EA had bought Rupture for a reported $30 million, it wasn’t true. But it is now.”
Here’s the very short EA press release on the deal.
See online: EA Acquires Shawn Fanning’s Rupture, Says The LA Times A Month After We Did
Who doesn't know this company, which from Flickr's earliest days offered to print cards with photos from the photo-sharing platform? Moo recently started delivering to France, and to celebrate they are giving away 10 codes to create and receive batches of 100 MiniCards, shipping included.
So, to keep things simple, let's reuse the system from last time to pick the lucky winners: the first 10 people to leave a comment timestamped with an even number of seconds win a Moo code. It's pure luck, of course; you have a one-in-two chance, and the database will give the definitive answer. As soon as I have the 10 winners, I'll stop the game…
For everyone else, 20% off your first order with the code 2803moo.
Your turn…
See online: Moo cards arrive in France
The first part of The Machine That Changed the World covered the earliest roots of computing, from Charles Babbage and Ada Lovelace in the 1800s to the first working computers of the 1940s. The second part, "Inventing the Future," picks up the story of ENIAC's creators as they embark on building the first commercial computer company in 1950, and ends with the moon landing in 1969 and the beginning of the Silicon Valley.
Notes:
Shortly after the war ended, ENIAC's creators founded the first commercial computer company, the Eckert-Mauchly Computer Corporation in 1946. The early history of the company's funding and progress is told through interviews and personal home videos. They underestimated the cost and time to build UNIVAC I, their new computer for the US Census Bureau, quickly sending the company into financial trouble. Meanwhile, in London, the J. Lyons and Co. food empire teamed up with the EDSAC developers at Cambridge to build LEO, their own computer to manage inventory and payroll. It was a huge success, inspiring Lyons to start building computers for other companies.
The Eckert-Mauchly company was in trouble, with several high-profile Defense Department contracts withdrawn because of a mistaken belief that John Mauchly had Communist ties. After several attempts to save it, the company was sold to Remington-Rand in 1950. Remington-Rand, then focused on electric razors and business machines, gave UNIVAC its television debut by tabulating live returns during the 1952 presidential election. To CBS's amazement, it accurately predicted an Eisenhower landslide with only 1% of the vote counted. UNIVAC soon made appearances in movies and cartoons, leading to more business.
IBM was late to enter the computing business, though it had built the massive SSEC in 1948 for scientific research. When the US Census Bureau ordered a UNIVAC, Thomas Watson, Jr. recognized the threat to the tabulating machine business. IBM introduced its first commercial business computer, the mass-produced IBM 650, in 1953. Though technologically inferior, it soon dominated the market thanks to IBM's strong sales force, relative affordability, and integration with existing tabulating machines. In 1956, IBM soared past Remington-Rand to become the largest computer company in the world. By 1960, IBM had captured 75% of the US computer market.
But developing software for these systems often cost several times as much as the hardware itself, because programming was so difficult and programmers were hard to find. FORTRAN was one of the first higher-level languages, designed for scientists and mathematicians. It didn't work well for business use, so COBOL soon followed. This led to wider adoption in different industries, as software was developed that could automate human labor. "Automation" became a serious fear, as humans were afraid they'd lose their jobs to machines. Across the country, companies like Bank of America (with ERMA) were eliminating thousands of tedious tabulating jobs with a single computer, though the country's prosperity and booming job market tempered some of that fear.
In the '50s, vacuum tubes were an essential component of the electronics industry, located in every computer, radio, and television. Transistors meant that far more complex computers could be designed, but couldn't be built because wiring them together was a logistical nightmare. The "tyranny of numbers" was solved in 1959 with the first working integrated circuit, developed and introduced independently by both Texas Instruments and Fairchild. But ICs were virtually ignored until adopted by NASA and the military for use in lunar landers, guided missiles, and jets. Electronics manufacturers soon realized the ability to mass-produce ICs. Within a decade, ICs cost pennies to produce while becoming a thousand times more powerful. The result was the birth of the Silicon Valley and a reborn electronics industry.
Interviews:
Ted Withington (network engineer, industry analyst), Paul Ceruzzi (Smithsonian), J. Presper Eckert (ENIAC co-inventor, died 1995), Morris Hansen (former US Census Bureau, died 1990), John Pinkerton (Chief Engineer, LEO, died 1997), Thomas J. Watson, Jr. (Chairman Emeritus, IBM, died 1993), James W. Birkenstock (retired Vice President, IBM, died 2003), Jean Sammet (programming language historian), Dick Davis (retired Senior V.P., Bank of America), Robert Noyce (co-inventor, integrated circuit, died 1990), Gordon Moore (former Chairman of the Board, Intel), Steve Wozniak (Co-founder, Apple)
Up Next...
Part 3: The Paperback Computer. The development of the personal computer and user interfaces, from Doug Engelbart and Xerox PARC to the Apple and IBM PCs.
See online: The Machine That Changed the World: Inventing the Future
See online: The transformation of Barack Obama
See online: Starbucks offers new flavor: Free Wi-Fi
Rough Type had an interesting post on how Amazon’s web service platform evolved from their own need to build infrastructure for the Amazon online store. Nicholas argues that Amazon is in a good position to succeed in this first phase of the fledgling utility computing space. I’m a huge fan of AWS, and I absolutely agree with him on this point. However, I’d argue that while Amazon web services have a solid first-mover advantage in this valuable space, I’m not sure that Amazon will win in the long run. Why? Because at its core, Amazon is an e-commerce company, not a platform company.
Who cares more about gaining web API share on the web, Amazon or Google? Who cares more about gaining database share on the web, Amazon or Oracle? Who cares more about gaining server share on the web, Amazon or Sun? Who cares more about gaining developer mind share on the web, Amazon or Microsoft?
While web services are a great high-margin business for now, as these other large companies build out their cloud computing strategies, the competition will drive margins lower and lower. What’s more, these existing platform companies can afford to offer these services at or below cost, in exchange for a greater share in their respective categories. As these services become commoditized and the price drops lower and lower, at what point does Amazon’s web service strategy stop making good financial sense for them?

See online: Are Amazon Web Services Doomed to Fail?
Researchers at Princeton's Center for IT Policy have released a new paper urging federal agencies to focus on improving the availability of raw government data rather than building better user-facing web sites. They predict that if the data is made available in a structured format, private parties will develop innovative sites to view and manipulate it.
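As an illustration of what "raw data over redesigned sites" enables, here is a minimal sketch of a third party consuming a structured feed directly; the feed URL is hypothetical, and a standard RSS 2.0 layout is assumed rather than any specific agency's format.
    # Minimal sketch: consume a structured government data feed directly.
    # The URL is hypothetical; a standard RSS 2.0 channel/item layout is assumed.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.gov/data/press-releases.xml"  # hypothetical endpoint

    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)

    # Each <item> carries the fields a third-party site could remix or index.
    for item in tree.findall("./channel/item"):
        print(item.findtext("title"), "-", item.findtext("link"))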
See online: Study: .gov web sites should focus on RSS, XML—not redesigns
What a different emotional register from John McCain's; Obama seems on the verge of tears; the enormous crowd in the Xcel center seems ready to lift Obama on its shoulders; the much smaller audience for McCain's speech interrupted his remarks with stilted cheers. (Note: there was a large overflow crowd for McCain's speech, and he repeated his remarks for them later in the evening.)
McCain appealed to Clinton supporters based on their resentments, pointing out that the pundits and party elders seemingly anointed Obama; Obama appeals to them based on their hopes, promising that Clinton would play a major role in securing universal health care.
Obama thanked his grandmother above all else; without her, he said, none of this would have been possible. She is white, of course. The explicit message is obvious. The implicit message: this thing, this event, is much more than just a step for racial equality.
"Voir en ligne : Thoughts About Obama's Speech


See online: Obama, Propelled by the Net, Wins Democratic Nomination
See online: British Government Reconsiders New .gov.uk Websites
I wrote last week that Alexa may have died and no one noticed, as the service hadn’t been updated in 10 days at the time of writing. Alexa has only now updated in the last 24 hours, a break of over two weeks. There’s nothing on the Alexa blog about the outage, and all I got was this email:
Dear Duncan,
Thank you for your question.
We strive to update the data on our traffic details pages frequently.
Unfortunately, due to technical difficulties, there may at times be a
delay in the display of data. The data are still being tracked, but may
not be displayed immediately. It has now been resolved, and we lament
any inconvenience this may have caused you.
We appreciate and thank you for your interest in Alexa.
Best regards,
Alexa Internet Customer Service
Everybody loves to pile on Alexa, but old habits die hard and I’ve kept using the service despite most people giving up. It still has the best package of up-to-date, free statistics available, even if those stats are pretty flawed. But seriously: “due to technical difficulties, there may at times be a delay in the display of data.” Over two weeks, no public recognition that there was a problem, and just a rubbish brushoff after the event? If this is how they treat their supporters, god help the people who hate the service. I’ll probably keep on using Alexa on occasion, but I’m tired of trying to defend a company that doesn’t care about its product or user base.
See online: Alexa Doesn’t Care, Why Should We?
http://www.rollingstone.com/news/coverstory/21023786/page/2
Joss and Eliza make the list of awesomeness, along with the Muppets and others!
A silly list for the most part, but they mark Dollhouse as both "Actually Excellent" and "Secretly Genius." Which we knew all along.
See online: Joss featured on Rolling Stone's Top 10 Best in TV and beyond.
See online: Review: 'Ninja Gaiden II' Gets Combat Right, Everything Else Wrong
The Machine That Changed the World is the longest, most comprehensive documentary about the history of computing ever produced, but since its release in 1992, it's become virtually extinct. Out of print and never released online, the only remaining copies are VHS tapes floating around school libraries or in the homes of fans who dubbed the original shows when they aired.
It's a whirlwind tour of computing before the Web, with brilliant archival footage and interviews with key players — several of whom have passed away since filming. Jointly produced by WGBH Boston and the BBC, it originally aired in the UK as The Dream Machine before its U.S. premiere in January 1992. Its broadcast was accompanied by a book co-written by the documentary's producer Jon Palfreman.
With the help of Simon Willison, Jesse Legg, and (unofficially) the Portland State University library, we've tracked down and digitized all five parts. This week, I'm uploading them, annotating them with Viddler, and posting them here as streaming Flash video as they're finished. Also, the complete set will be available as high-quality MP4 downloads via BitTorrent by Friday.
Here's the first of the five-part series, The Machine That Changed the World. Enjoy!
Note: Like all the other materials I post here, these videos are completely out-of-print and unavailable commercially, digitized from old VHS recordings. If they ever come back into print, or the copyright holders contact me, I'll take them down immediately.
Part 1: Great Brains
Notes:
The first part begins with a brief introduction to the series, summarizing the impact of computers on every aspect of our lives, attributed to their versatile nature. The history of computing begins with the original definition of "computers": human beings, like William Shanks, who calculated numbers by hand. Frustration with human error led Charles Babbage to develop his difference engine, the first mechanical computer. He later designed the analytical engine, the first general-purpose programmable computer, but it was never finished. Ada Lovelace assisted Babbage with the design and worked out programs for the unbuilt machine, making her the first programmer.
100 years later, German engineer Konrad Zuse built the Z1, the first functional general-purpose computer, using binary counting with mechanical telephone relays. During World War II, Zuse wanted to switch to vacuum tubes, but Hitler killed the project because it would take too long. At the University of Pennsylvania, John Mauchly and J. Presper Eckert built ENIAC, the first digital computer, to aid in military calculations. They didn't finish in time to be useful for the war, but soon after, Eckert and Mauchly started the first commercial computer company. It took years before they brought a computer to market, so a British radar engineer named Freddie Williams beat them to building the first computer with stored programs. In Cambridge, Maurice Wilkes built EDSAC, the first practical computer with stored programs. Alan Turing imagined greater things for computers beyond calculations, after seeing the Colossus computer break German codes at Bletchley Park. Actor Derek Jacobi, performing as Alan Turing in "Breaking the Code," elaborates on Turing's insights into artificial intelligence. Computers can learn, but will they be intelligent?
Interviews:
Paul Ceruzzi (computer historian), Doron Swade (London Science Museum), Konrad Zuse (inventor of the first functional computer and high-level programming language, died in 1995), Kay Mauchly Antonelli (human computer in WWII and ENIAC programmer, died in 2006), Herman Goldstine (ENIAC developer, died in 2004), J. Presper Eckert (co-inventor of ENIAC, died in 1995), Maurice Wilkes (inventor of EDSAC), Donald Michie (Codebreaker at Bletchley Park)
Up Next
Part 2: Inventing the Future. The rise of commercial computing, from UNIVAC to IBM in the 1950s and 1960s.
See online: The Machine That Changed the World: Great Brains