Gta 5 Super Low Pc's Mod


Roser Blazado

Jun 30, 2024, 6:35:00 AM
to tysanbitee

I have several computers, some new, some old (I collect them, starting with my first one from 1979, which has 2 KB of RAM). The collection is getting huge, but this question comes from the fact that I have always loved the power of supercomputers, or at least of big machines.

I have always thought about the idea of joining machines to get a more powerful one. I run a 1 Gbit LAN (local area network) with 4 machines, each an Intel i7 2600K running at 4.8 GHz with water cooling, 16 GB of RAM, an SSD and ordinary hard drives, for a total of 30 TB of space across the LAN. Having read articles and watched many videos about virtualization, I wonder whether it is possible to install bare-metal (Type 1) hypervisors on each machine and then create a single virtual machine that spreads across the physical machines, so I could install an operating system like Windows on top and run software that needs a lot of resources: CPU, RAM, disk, and so on.

I imagine there must be a way for a virtual machine to "think" it is installed on a single machine while it is actually spread across several nodes (like a cluster). To the virtual machine the system looks like one big machine, but underneath there is shared CPU, shared RAM and shared storage.

That way, we could install an OS and run, for instance, Adobe After Effects or Adobe Premiere, which need outstanding parallel processing (or CPU power) to make previews in real time, or run other complex software that could benefit from multiple processors. I know many people would suggest buying a big multi-CPU, multi-core Xeon machine for parallel processing, but that is not the point here... I like to think that with current technology there must be a way to join PCs and get more computational power.

I see people on YouTube joining Raspberry Pis to make "supercomputers" of around 1 teraflop, so why can't we do it with our own machines, which have LAN, RAM, disks... isn't it the same thing? We only need the software and to know how to do it, no? Is it possible? How is it done?

The existing hypervisors for virtualization, such as Hyper-V, VMware ESXi and XenServer, allow running virtual machines on a single host or on a cluster. The hypervisor takes the hardware's CPU, RAM and storage and "converts" them into virtual resources for the VMs. Storage can be configured as shared volumes mirrored between hosts, using the network for data transmission (e.g. an iSCSI SAN, VMware vSAN, StarWind VSAN, etc.), but each VM still uses only the CPU and RAM of the host it runs on.

The applications are limited to those that make efficient use of the resources you give them. You can't run After Effects on your "supercomputer" unless software exists that knows how to split the workload among all of your nodes.
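
To make that "splitting" concrete, here is a minimal sketch (in Python) of what such software has to do itself: divide a job into independent pieces and push each piece to a different machine. The host names and the render_frames command are made-up placeholders for illustration, not a real render-farm tool.

import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["node1", "node2", "node3", "node4"]   # assumed LAN host names
TOTAL_FRAMES = 1000                            # whole job, split into slices

def render_on_host(host, first, last):
    # Each node renders only its own slice of frames; nothing else is shared.
    cmd = ["ssh", host, f"render_frames --start {first} --end {last}"]
    return subprocess.run(cmd, capture_output=True, text=True).returncode

chunk = TOTAL_FRAMES // len(HOSTS)
with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    futures = [
        pool.submit(render_on_host, host, i * chunk, (i + 1) * chunk - 1)
        for i, host in enumerate(HOSTS)
    ]
    exit_codes = [f.result() for f in futures]

print("per-host exit codes:", exit_codes)

Nothing here makes the four machines look like one computer; the program itself knows about the split and has to collect the results afterwards, which is exactly what After Effects does not do across hosts.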

First, you don't have the required software. Even if you acquire the needed virtualization software (at whatever cost, assuming the vendor is even willing to sell it to you!), there are minimum requirements for the cluster, which usually include nodes of nearly identical specifications. The closest thing I could find is VMware ESXi.

Second, there are massive penalties from the computers communicating with each other, to the point where any performance gains essentially cancel out. Sharing RAM over the network is far too slow to be viable, and sharing a drive over iSCSI might not turn out to be as reliable as you expect.
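
A rough back-of-envelope calculation shows why shared RAM over 1 GbE is hopeless. The figures below are typical order-of-magnitude values, not measurements of any particular hardware:

# Compare a local DRAM access with fetching the same data over a 1 Gbit/s LAN.
DRAM_LATENCY_NS = 100                 # ~100 ns for a local memory access
LAN_RTT_NS = 200_000                  # ~0.2 ms round trip on a 1 GbE switch
LAN_BANDWIDTH_BYTES_PER_S = 1e9 / 8   # 1 Gbit/s is about 125 MB/s

page = 4096                           # one 4 KiB memory page
transfer_ns = page / LAN_BANDWIDTH_BYTES_PER_S * 1e9
remote_ns = LAN_RTT_NS + transfer_ns

print(f"local DRAM access : {DRAM_LATENCY_NS:>10,.0f} ns")
print(f"remote 4 KiB page : {remote_ns:>10,.0f} ns")
print(f"slowdown factor   : {remote_ns / DRAM_LATENCY_NS:,.0f}x")

With roughly 100 ns of local DRAM latency against about 0.2 ms of network round trip plus transfer time, every "remote memory" access comes out thousands of times slower, which is why single-system-image setups over commodity Ethernet never behave like one big machine.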

MOSIX is a package that provides load balancing by migrating processes within clusters and multi-cluster private clouds. It is intended primarily for distributed, concurrent workloads, such as intensive computing.
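
For illustration, this is the workload shape MOSIX is built for: many independent, CPU-bound processes with little communication between them. The sketch below is ordinary Python running on one machine's cores; under MOSIX the operating system could migrate such processes to idle cluster nodes transparently, but nothing in this code is MOSIX-specific.

# Many independent, CPU-bound jobs; the kind of work that clusters handle well.
from multiprocessing import Pool

def cpu_bound_job(seed):
    # Stand-in for a long, independent computation (simulation, encode, etc.).
    total = 0
    for i in range(2_000_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    with Pool() as pool:                     # one worker per local core
        results = pool.map(cpu_bound_job, range(32))
    print(len(results), "independent jobs finished")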

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS).[3] For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13).[4][5] Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems.[6] Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.[7]

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.[8]

Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.[9][10]

The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, with China becoming increasingly active in the field. As of May 2022, the fastest supercomputer on the TOP500 supercomputer list is Frontier, in the US, with a LINPACK benchmark score of 1.102 ExaFlop/s, followed by Fugaku.[11] The US has five of the top 10; China has two; Japan, Finland, and France have one each.[12] In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.[13]

In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology.[14] Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.[15]

The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time.[16] Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[17]

The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly, and the overheating problem was solved by introducing refrigeration into the supercomputer design.[18] Thus, the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each.[19][20][21][22]

Cray left CDC in 1972 to form his own company, Cray Research.[20] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history.[23][24] The Cray-2 was released in 1985. It had eight central processing units (CPUs) and liquid cooling, with the electronics coolant Fluorinert pumped through the supercomputer architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.[25]

The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate faster than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
