Chip Windows Movie Maker 2012

Phillipp Schneeberger

unread,
Jul 11, 2024, 5:25:46 AM7/11/24
to erverrentlawn

Microsoft is looking to build out the infrastructure to support growing demand for AI innovation. The software developer is "reimagining every aspect of our datacenters" to support the needs of its customers, according to the press release.

Meanwhile, Intel said it now expects $15 billion in foundry orders, up from the $10 billion the company had announced in January. The increased growth projection is due to several factors, including Intel's recent partnership with United Microelectronics Corp., several new advanced packaging customers, and now, the Microsoft deal, Intel CEO Pat Gelsinger said in a CNBC interview last month.

The chip giant has also been making strides building new fabrication plants all over the country. Intel is building an advanced semiconductor packaging facility in New Mexico, a $20 billion fab in Ohio and is expanding two plants in Arizona.

While Intel will disclose its exact CHIPS and Science Act grant amount soon, Gelsinger said in the keynote, the chipmaker has said its projects in Arizona, New Mexico, Oregon and Ohio are being aided by the legislation.

During her remarks, Raimondo noted that of the 600 funding proposals the government has received so far, it will prioritize projects that will be operational by 2030. The Biden administration plans to announce additional grants in the coming months.

Microsoft got a later start with custom servers, storage, and datacenters, but with the addition of the Cobalt and Maia compute engines, it is becoming a fast follower behind AWS and Google as well as others in the Super 8 who are making their own chips for precisely the same reason.

What we can tell you is that the Maia 100 chip is built on TSMC's 5 nanometer process and packs a total of 105 billion transistors, according to Nadella, so it is no lightweight when it comes to transistor count. The Maia 100 is direct liquid cooled, has been running GPT-3.5, and is powering the AI copilot that is part of GitHub right now. Microsoft is building up racks with Maia 100 accelerators and will allow them to power outside workloads through the Azure cloud next year.

One of the neat things about the Maia effort is that Microsoft has designed an Open Compute compatible server, holding four of the Maia accelerators, that slides into the racks it has donated to OCP. It has a companion sidekick rack with all of the liquid cooling pumps and compressors that keep these devices from overheating, allowing them to run hotter than they otherwise might with only air cooling. Take a look:

I view the Zen cores as a closer analogy to Amdahl, because both were binary compatible with the market leader of their day. It is plausible that one of the main reasons the x86 architecture is still relevant is that two independent companies make compatible products. For example, the architecture would not even have made the jump to 64-bit without AMD.

There is similar competition for ARM, since Apple holds its own rights to the architecture. At the same time, the trouble Qualcomm recently had over its Nuvia architecture licenses suggests future ARM designs may not see the same level of competitive engineering as the current dynamic between AMD and Intel.

The next big change in DC architecture will be when someone defines a way to get rid of all the fibre and other cables by having standard rack designs with plugs in the back for the network/power/ILO support all in one connector in a standard placement.

When it comes to connectors and standard placement: unfortunately, physics plays a role there, and with everything copper, trace lengths become critical. There are a lot of cables on Gen5 PCIe servers these days, because running equal-length traces on planar mainboards becomes either impossible or too expensive as you add layers. So going for cables rather than slots and connectors may simply be dictated by a combination of physics and economics, even if these high-precision cables are cheap neither to make nor to deploy.
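A back-of-the-envelope calculation shows why trace matching gets so tight at Gen5 speeds. The dielectric constant and the skew budget below are illustrative assumptions, not figures from any particular design guide; only the 32 GT/s signaling rate comes from the PCIe Gen5 spec:

```python
# Rough estimate of allowed trace-length mismatch at PCIe Gen5 speeds.
# ER_EFF and the skew budget are assumptions chosen for illustration.

C = 3.0e11          # speed of light in vacuum, mm/s
ER_EFF = 4.0        # assumed effective dielectric constant of FR-4

# Signal velocity on a PCB trace is roughly c / sqrt(er_eff).
velocity = C / ER_EFF ** 0.5          # mm/s
delay_per_mm = 1e12 / velocity        # ps per mm of trace (~6.7 ps/mm)

# PCIe Gen5 signals at 32 GT/s, so one unit interval is 1/32e9 seconds.
ui_ps = 1e12 / 32e9                   # ~31.25 ps

# Suppose (as an assumption) the skew budget is a tenth of a unit interval.
skew_budget_ps = ui_ps / 10
max_mismatch_mm = skew_budget_ps / delay_per_mm

print(f"propagation delay: {delay_per_mm:.2f} ps/mm")
print(f"unit interval:     {ui_ps:.2f} ps")
print(f"allowed mismatch:  {max_mismatch_mm:.2f} mm")
```

Under those assumptions the mismatch budget works out to roughly half a millimeter, which is why matching dozens of lanes across a many-layer planar board becomes so expensive compared with routing a precision cable.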

AI in the cloud will always be more powerful than what is possible on a client device. On the other hand, as client devices get more AI hardware and AI algorithms improve, client devices might become good enough for most purposes. Will the AI assistants most people use stay in the cloud, or will they eventually run on client devices? The computer industry has gone through several transitions between centralization and decentralization. I would like to know whether AI in the cloud will remain the dominant way of implementing AI assistants, or whether it is just a temporary phase (less than five years) until client devices become more capable and AI algorithms improve.
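The centralization question above can be framed as a simple latency model: the cloud runs the model faster but adds a fixed network round trip, while the device is slower but local. All the numbers below are made-up assumptions purely to illustrate the crossover:

```python
# Toy latency model for cloud vs. on-device AI assistants.
# Every number here is an illustrative assumption, not a measurement.

def total_latency_ms(inference_ms: float, network_rtt_ms: float = 0.0) -> float:
    """Total time the user waits: network round trip plus inference time."""
    return network_rtt_ms + inference_ms

# Assumptions: the cloud runs the model 4x faster than the client NPU,
# but every request pays an 80 ms network round trip.
device_inference_ms = 100.0
cloud_inference_ms = device_inference_ms / 4
rtt_ms = 80.0

on_device = total_latency_ms(device_inference_ms)
in_cloud = total_latency_ms(cloud_inference_ms, rtt_ms)

print(f"on-device: {on_device:.0f} ms, cloud: {in_cloud:.0f} ms")
```

With these assumed numbers the device already edges out the cloud (100 ms vs. 105 ms): once client NPUs narrow the raw inference gap enough, the fixed round trip dominates and on-device wins for latency-sensitive tasks, which is one mechanism by which the pendulum could swing back toward decentralization.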

I think few forces favor hegemony for the giant Microsoft: there is the ARM-Windows duo and its associated suite of programs. I think Apple arrived too late to have a big presence in heavy enterprise software; Steve Jobs (RIP) would not have fallen into this error were he alive. On the other hand, Dell, Hewlett Packard, Supermicro and the other makers of big digital infrastructure will only love MSFT if it makes tuned ARM CPUs for general sale, not if Microsoft turns a complete server product line into its new business. Important!

Qualcomm believes that PC makers will be more enthusiastic this time around, in part by bringing generative AI to their computers. But what may impress consumers more are the claims Qualcomm made on stage at the summit that the Snapdragon X Elite outperforms leading Apple M2 and Intel chips in performance and power efficiency. And that may be what really compels consumers to seek out an X Elite-powered laptop in mid-2024 when they're expected to start arriving.

"I think the Windows [PC makers] are looking for similar capabilities," said Alex Katouzian, senior vice president leading Qualcomm's mobile, compute and XR work. "The only answer to what Apple has is really Qualcomm."

Qualcomm trumpeted its performance advantage over Macs powered by Apple's M2 processors, but just days later, Apple announced more-powerful M3-based Macs that'll be shipping for months before the Snapdragon X Elite makes its first appearance in 2024.

"The reality is that Microsoft desperately needs a partner that can compete with Apple both in performance and battery life, and considering what Qualcomm has shown with its preliminary benchmarks at Snapdragon Summit, it definitely seems like Qualcomm has a fighting chance," said Anshel Sag, principal analyst at Moor Insights and Strategy. "I think the Snapdragon X Elite is, without a doubt, Qualcomm's best chance to take market share from Intel and AMD."

Customers weighing Qualcomm-powered PCs will more likely be comparing them with Windows machines powered by Intel or AMD processors, where laptop speed tests likely will paint a rosier picture for Qualcomm. However, although I independently verified that Qualcomm's benchmarks lived up to the claims the company made on the summit stage, there's still a lot that can change, from its potential speed and efficiency heights to implementation in the PCs on store shelves, including the arrival of Intel's Meteor Lake laptop processor in December.

"Intel still has distribution and incumbency advantages, and we haven't gotten hands on with commercial Snapdragon X Elite designs yet, [while] Apple has been making steady inroads in the PC market," Greengart said. Yet, he added, "the Snapdragon X Elite offers such a huge jump in performance per watt compared to today's x86 architecture that it's hard to imagine Qualcomm not winning some of Intel's business."

The X Elite is for laptops, though there's no reason it can't be used in a desktop. It's been tested in laptops for its efficiency in power and performance, and Qualcomm sees it as a solution for on-the-go computing -- which includes performing intensive tasks for business and pleasure.

"As you look at every consumer today, almost every consumer is a creator. And every consumer is a casual gamer," said Kedar Kondap, senior vice president and general manager of compute and gaming at Qualcomm.

Which means X Elite-powered laptops could fill plenty of niches, though the chipset isn't being positioned to go toe-to-toe with chips powering big desktop rigs for pixel-pushing photo or video editing.

It also may not be for big gamers, at least until we see how the chip's integrated GPU (graphics processing unit) handles gaming under high graphics and fast frame rates. It's unclear how the chip plays with discrete GPUs like those from Nvidia and AMD, which isn't Qualcomm's focus with this chip. Instead, the devices we expect to see come to market with the X Elite will use its integrated GPU, Kondap said.

The computing industry is on the cusp of a new generation as it integrates AI, and it's not yet clear whether that will just mean new tools or a more meaningful overhaul of how we use PCs. 2024 and 2025 will be the inflection point, Kondap said, with Windows 10 phasing out and app vendors starting to harness the neural processing units on every chip. This could lead to improvements in efficiency, operational accuracy and speed of results, he said, as well as things only generative AI can do -- like Microsoft's Copilot, which aims to help users with many tasks around the PC.

On the second day of the Snapdragon Summit, filmmaking software company Blackmagic Design went on stage to show how its software can harness the generative AI capabilities of the X Elite chip. Compared with an unspecified 12-core processor with an integrated GPU, the Snapdragon X Elite was 1.7 times faster at running a compute-intensive AI tool, Magic Mask, in Blackmagic's DaVinci Resolve Studio software. The Qualcomm chip's NPU was three times faster at running DaVinci tasks than the one on the other chip.

More generally, Qualcomm agrees with Apple, Google and others about two big on-device AI advantages: It protects your privacy better than services that upload your data to the cloud, and it can run faster without the communication lag. Running AI on your device means you don't have to pay for cloud computing infrastructure, too.
