Reason 3 Serial Number Machines


Christal Rasband

Jul 10, 2024, 5:23:52 PM
to laqnconstodi

In regard to stability: I can't think of any reason an odd number of cores would be less stable than an even one. I did an experiment in college where I ran a computationally intensive multi-threaded program (in C) on a VM, varied the number of cores from 1 through 8, and measured the process run time. The results were mostly linear, as long as the process is configured to take full advantage of the number of cores. I can't think of any reason why the guest OS would have an issue either, nor should any guest processes, since they get their processor time from the guest OS.

Performance might be slightly better if you stick to even numbers, since some processor architectures are designed to share a cache between two cores. Hence, the odd core being used by the VM would have to share its cache with the host. But even that is a shaky theory, since the VM software itself is never guaranteed to always get the same cores and cache between time slices. At this point, we are splitting hairs. For example, if you have a 4-core host and you can't decide whether to give 2 or 3 cores to the VM, the slightly less optimal odd number of 3 will still run faster than a VM with 2. In this example, you might as well assign 4 cores to the VM. The host OS will only give up the time slices it can afford, so as long as you're not running huge processes on the host simultaneously, the VM should run fine.

You likely count up to the largest possible number on one hand, and then move on to your second hand when you run out of fingers. Computers do the same thing: if they need to represent a value larger than a single register can hold, they will use multiple 32-bit blocks to work with the data.

To literally display the number "1000000000000" requires 13 bytes of memory. Each individual byte can hold a value of up to 255. None of them can hold the entire, numerical value, but interpreted individually as ASCII characters (for example, the character '0' is represented by decimal value 48, binary value 00110000), they can be strung together into a format that makes sense for you, a human.

A related concept in programming is typecasting, which is how a computer will interpret a particular stream of 0s and 1s. As in the above example, it can be interpreted as a numerical value, a character, or even something else entirely. While a 32-bit integer cannot hold a value of 1000000000000, a 32-bit floating-point number can hold it (approximately), using an entirely different interpretation of the bits.

As for how computers can work with and process large numbers internally, there exist 64-bit integers (which can accommodate values up to about 18 billion billion, i.e. 2^64 − 1 for unsigned), floating-point values, as well as specialized libraries that can work with arbitrarily large numbers.

First and foremost, 32-bit computers can store numbers up to 2^32 − 1 in a single machine word. A machine word is the amount of data the CPU can process in a natural way (i.e. operations on data of that size are implemented in hardware and are generally fastest to perform). 32-bit CPUs use words consisting of 32 bits, thus they can store numbers from 0 to 2^32 − 1 in one word.

By pressing 1 once and then 0 twelve times, you're typing text. Pressing 1 inputs the character '1', pressing 0 inputs the character '0'. See? You're typing characters, and characters aren't numbers. Typewriters had no CPU or memory at all, and they handled such "numbers" just fine, because it's all just text.

Proof that 1000000000000 isn't inherently a number, but text: it can mean one trillion (read in decimal), 4096 (read in binary) or 281474976710656 (read in hexadecimal). It has even more meanings in other bases. What 1000000000000 means is a number, and storing that number is a different story (we'll get back to it in a moment).

Now, back to storing numbers. It works much like overflowing text, except that numbers are filled in from right to left. It may sound complicated, so here's an example. For the sake of simplicity, let's assume that:

We have predicted that overflow may happen, and we may need additional memory. Handling numbers this way isn't as fast as with numbers that fit in a single word, and it has to be implemented in software. Adding support for two-word (64-bit) numbers to a 32-bit CPU effectively makes it a 64-bit CPU (now it can operate on 64-bit numbers natively, right?).

You are also able to write "THIS STATEMENT IS FALSE" without your computer crashing :) @Scott's answer is spot-on for certain calculation frameworks, but your question of "writing" a large number implies that it's just plain text, at least until it's interpreted.

Edit: now with less sarcasm and more useful information on the different ways a number can be stored in memory. I'll describe these at a higher level of abstraction, i.e. in the terms a modern programmer might write code in, before it's translated into machine code for execution.

Data on a computer has to be restricted to a certain type, and the computer's definition of that type describes what operations can be performed on the data and how (e.g. compare numbers, concatenate text, or XOR a boolean). You can't simply add text to a number, just as you can't multiply a number by text, but some of these values can be converted between types.

OK, we've covered integers, which are numbers without a fractional component. Now for numbers with a fractional component — expressing these is trickier. The non-integer part can sensibly only be somewhere between 0 and 1, and every extra bit used to describe it increases its precision: 1/2, 1/4, 1/8... The problem is, you can't precisely express a simple decimal like 0.1 as a sum of fractions whose denominators are powers of two! Wouldn't it be much easier to store the number as an integer, but agree on where to put the radix (decimal) point? This is called fixed-point arithmetic: we store 1234100 but agree on a convention to read it as 1234.100 instead.

A relatively more common type used for calculations is floating point. The way it works is really neat: it uses one bit to store the sign, some bits to store the exponent, and the rest for the significand. There are standards that define exactly how the bits are allocated, but for a 32-bit float the maximum number you can store is an overwhelming 3.4 × 10^38 or so.

True, if a computer insists on storing numbers using a simple binary representation in a single word (4 bytes on a 32-bit system), then a 32-bit computer can only store numbers up to 2^32 − 1. But there are plenty of other ways to encode numbers, depending on what you want to achieve with them.

One example is how computers store floating point numbers. Computers can use a whole bunch of different ways to encode them. The standard IEEE 754 defines encodings for numbers far larger than 2^32. Crudely, a computer can implement this by dividing the 32 bits into different parts: some bits representing the digits of the number and other bits representing its size (i.e. the exponent, 10^x). This allows a much larger range of numbers in size terms, but compromises the precision (which is OK for many purposes). Of course the computer can also use more than one word for this encoding, increasing the precision and magnitude of the available encoded numbers. The simple 32-bit decimal version of the IEEE standard (decimal32) allows numbers with about 7 decimal digits of precision and magnitudes up to about 10^96.

But there are many other options if you need the extra precision. Obviously you can use more words in your encoding without limit (though with a performance penalty to convert into and out of the encoded format). If you want to explore one way this can be done, there is a great open-source add-in for Excel that uses an encoding scheme allowing hundreds of digits of precision in calculation. The add-in is called Xnumbers and is available here. The code is in Visual Basic, which isn't the fastest possible, but has the advantage that it is easy to understand and modify. It is a great way to learn how computers achieve encoding of longer numbers, and you can play around with the results within Excel without having to install any programming tools.

You can write any number you like on paper. Try writing a trillion dots on a white sheet of paper — it's slow and ineffective. That's why we have a base-10 positional system to represent those big numbers. We even have names for big numbers like "million", "trillion" and more, so you don't have to say one one one one one one one one one one one... out loud.

32-bit processors are designed to work most quickly and efficiently with blocks of memory that are exactly 32 binary digits long. But we people commonly use the base-10 numeric system, while computers, being electronic, use base 2 (binary). The numbers 32 and 64 just happen to be powers of 2, just as a million and a trillion are powers of 10. It's easier for us to operate with these round numbers than with, say, multiples of 65536.

We break big numbers into digits when we write them on paper. Computers break down numbers into a greater number of digits. We can write down any number we like, and so may the computers if we design them so.

32-bit and 64-bit refer to memory addresses. Your computer's memory is like post office boxes: each one has a different address. The CPU (Central Processing Unit) uses those addresses to address memory locations in your RAM (Random Access Memory). When a CPU could only handle 16-bit addresses, it could only address 64 KB of RAM directly (which seemed huge at the time). With 32-bit addresses that went to 4 GB (which seemed huge at the time). Now that we have 64-bit addresses, the addressable RAM goes into terabytes and beyond (which seems huge).
However, a program is able to allocate multiple blocks of memory for things like storing numbers and text; that is up to the program and not related to the size of each address. So a program can tell the CPU: I'm going to use 10 blocks of storage to hold a very large number, or a 10-letter string, or whatever.
Side note: memory addresses are held in "pointers", so the 32- and 64-bit figure describes the size of the pointer used to access memory.

Because displaying the number is done using individual characters, not integers. Each digit in the number is represented by a separate character, whose integer value is defined by the encoding being used; for example, 'a' is represented by ASCII value 97, while '1' is represented by 49. Check the ASCII table here.
Displaying 'a' and '1' works the same way: both are characters, not integers. Each character is typically stored in 8 bits (1 byte), which can hold values up to 255 — that's platform-dependent, but 8-bit characters are by far the most common — so characters can be strung together and displayed. How many separate characters you can hold depends on the RAM you have: with just 1 byte of RAM you could store a single character; with 1 GB of RAM you could store 1024 × 1024 × 1024 = 1073741824 characters.
