Megabytes are typically used for storage (RAM, HDD, SSD, NVMe, etc.), and megabits are typically used for network bandwidth or throughput (network cards, modems, WiFi adapters, etc.). It can be easy to confuse the two because both bits/s and bytes/s represent data transmission speeds, but remember that, in the abbreviations, the uppercase "B" stands for bytes while the lowercase "b" stands for bits.
Storing and retrieving data locally on a computer has always been faster than transmitting it over a network, and transmission over the network was (and still is) limited by the transmission medium used. As file sizes grew over the years, it became more important to understand how long storing, retrieving, or transmitting a file would take. The key to understanding the terminology is remembering that eight bits equal one byte. So a one-megabyte file is actually an 8,000,000-bit file: it is composed of 8,000,000 ones and zeroes, and storing it at a rate of one MB/s is the same as storing it at 8,000,000 bits/s.
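If it helps to see the arithmetic, here is a quick Python sketch (the function names are my own, purely for illustration) that converts a link speed quoted in megabits per second into megabytes per second and estimates how long a transfer would take:

```python
def mbps_to_MBps(megabits_per_second):
    """Convert a speed in megabits/s (Mb/s) to megabytes/s (MB/s)."""
    return megabits_per_second / 8  # 8 bits per byte

def transfer_time_seconds(file_size_MB, link_speed_Mbps):
    """Rough time to move a file of file_size_MB over a link_speed_Mbps link."""
    return file_size_MB / mbps_to_MBps(link_speed_Mbps)

# A 100 MB file over a 100 Mb/s link: the link moves 12.5 MB/s, so about 8 seconds.
print(mbps_to_MBps(100))                 # 12.5
print(transfer_time_seconds(100, 100))   # 8.0
```

In practice, protocol overhead makes real transfers somewhat slower than this, but the 8:1 ratio between bits and bytes is the part people most often get wrong.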
Key strengths, and their equivalences, become meaningless when they reach the zone of "cannot be broken with existing and foreseeable technology", because there is no such thing as more secure than that. It is a common reflex to try to think of key sizes as providing some sort of security margin, but this kind of reasoning fails beyond some point.
Basically, the best-known algorithms for breaking RSA, and for breaking elliptic curves, were already known 25 years ago. Since then, breaking efficiency has improved because of faster computers, at a rate which was correctly predicted. It is a tribute to researchers that they could, through a lot of fine-tuning, keep up with that rate.
The bottom line is that while a larger key offers longer predictable resistance, this kind of prediction works only as long as technology improvements can, indeed, be predicted, and anybody who claims to know what computers will be able to do more than 50 years from now is either a prophet, a madman, a liar, or all of the above.
The conclusion is that there is no meaningful way in which 3000-bit and 4000-bit RSA keys could be compared with each other, from a security point of view. They both are "unbreakable in the foreseeable future". A key cannot be less broken than not broken.
An additional and important point is that "permanent" keys in SSH (the keys that you generate and store in files) are used only for signatures. Breaking such a key would allow an attacker to impersonate the server or the client, but not to decrypt a past recorded session (the actual encryption key is derived from an ephemeral Diffie-Hellman key exchange, or an elliptic curve variant thereof). Thus, whether your key could be broken, or not, in the next century has no importance whatsoever. To achieve "ultimate" security (at least, within the context of the computer world), all you need for your SSH key is a key that cannot be broken now, with science and technology as they are known now.
Another point of view on the same thing is that your connections can only be as secure as the two endpoints. Nothing constrains your enemies, be they wicked criminals, spies, or anything else, to play "fair" and try to defeat you by breaking your crypto head-on. Hiring thousands upon thousands of informants to spy on everybody (and on each other) is very expensive, but it has been done, which is a lot more than can be said about breaking a single 2048-bit RSA key.
In my previous blogs, I gave an overview of what it means to work with an 8-bit, 16-bit, 32-bit, etc., binary number, and how you would solve an algorithm problem that requires a certain-sized integer without the computer science background to make sense of it all. This post specifically tackles what it means to have a signed or unsigned binary number. It won't change much about the way integers are restricted when solving algorithm sets, but it will change the range you can work with dramatically. Then I'll use the same problem solved previously, adapted to solve for a signed binary integer instead of an unsigned one.
The biggest difference between a signed and unsigned binary number is that the far left bit is used to denote whether or not the number has a negative sign. The rest of the bits are then used to denote the value normally.
This first bit, the sign bit, is used to denote whether the number is positive (with a 0) or negative (with a 1). If you want to get technical, a sign bit of 0 denotes that the number is non-negative, which means it can equal the decimal zero or a positive number.
Most importantly, using the first bit to denote sign means that we have one less bit to denote value. So if we have an 8-bit signed integer, the first bit tells us whether the number is negative or not, and the other seven bits tell us what the actual number is. Because of this, we're working with a more limited range of numbers that can be represented: 7 bits can't store numbers as big as 8 bits could.
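To make that concrete, here is a small illustrative snippet (my own example, assuming the usual two's complement convention, which is what the counting rules later in this post work out to) that reads the same 8-bit pattern first as unsigned and then as signed:

```python
bits = 0b10000001   # an 8-bit pattern with the far-left (sign) bit set

# Read as unsigned: all 8 bits carry value.
unsigned_value = bits                                         # 129

# Read as signed (two's complement): subtract 2**8 when the sign bit is set.
signed_value = bits - 2**8 if bits & 0b10000000 else bits     # -127

print(unsigned_value, signed_value)   # 129 -127
```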
To review binary numbers: the ones and zeroes act like switches that metaphorically turn powers of 2 on, and those powers are added up to create the decimal value. Normally, we'd "mark" a bit value with a one.
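If you'd rather see that "switch" addition spelled out, here is one possible way to write it (the helper name is mine):

```python
def unsigned_value(bit_string):
    """Add up the powers of 2 whose switches are 'on' (marked with a one)."""
    total = 0
    for position, bit in enumerate(reversed(bit_string)):
        if bit == "1":
            total += 2 ** position
    return total

print(unsigned_value("0101"))                    # 4 + 1 = 5
print(unsigned_value("0101") == int("0101", 2))  # True: matches Python's own parser
```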
When a signed binary number is positive or negative, it's "marked" with a 0 or 1, respectively, at the far-left bit, the sign bit. The number above doesn't change at all; it's just more explicitly a positive number.
When a binary integer is negative, the zeroes will now act as a "marker", instead of the ones. You would then calculate the negative binary number in the same way you would with a positive or unsigned integer, but using zeroes as markers to turn bit values "on" instead of ones and then adding the negative sign at the end of your calculation.
Going from an unsigned binary to a signed binary integer changes your end value in a couple of different ways. The first is the more obvious change in value when the first bit is used to denote sign instead of value. You can see between example 2a and 2b above that if you had a one at the first bit of your 4-bit integer, you're losing a value of 2^3 (that is, 8) that would've been added to your end value with an unsigned integer, but is now instead used to represent a negative. With a larger bit integer, the value you lose the ability to represent could be far larger.
Something else that isn't obvious right away is that you calculate a negative binary integer's value starting at 1, not 0. Because the decimal zero is not included in a negatively signed bit integer, we don't start counting at zero as we would when it's a positively signed bit integer.
To explain that quirk, let's compare positively and negatively signed integers. Working with a 4-bit integer, if all four bits had a value of zero, the number would equal 0. That's the lowest value we can have, because a non-negative sign bit means we can have a positive integer or a 0.
For a 4-bit negative integer with all four bits set to one (the ones now being the "off switch"), the number would not equal 0 but -1. Which makes sense, since -1 is the highest decimal number we can represent while still being negative.
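Here is a quick sketch of that counting rule (my own helper, not from the original examples): treat the zeroes as the markers, start counting at 1 instead of 0, and attach the negative sign at the end:

```python
def negative_value(bit_string):
    """Value of a negative (sign bit = 1) binary string, using the rule above:
    zeroes are the markers, and counting starts at 1 rather than 0."""
    total = 1  # start counting at 1, not 0
    for position, bit in enumerate(reversed(bit_string)):
        if bit == "0":
            total += 2 ** position
    return -total

print(negative_value("1111"))   # -1: no zeroes to add, so just the starting 1, negated
print(negative_value("1011"))   # -(1 + 4) = -5
```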
So even if I were to perfectly flip the "switches" from the positively signed binary number above into its negative counterpart, it would not perfectly switch to its negative decimal counterpart value in the way one might expect:
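For example, here's a rough illustration (values picked by me) of flipping every switch on a positive 4-bit number and checking what signed value comes out:

```python
positive = 0b0101                      # 4-bit pattern for +5

# Flip every switch: 0101 -> 1010.
flipped = positive ^ 0b1111

# Interpreted as a signed 4-bit number, 1010 is -6, not -5.
signed = flipped - 16 if flipped & 0b1000 else flipped
print(bin(flipped), signed)            # 0b1010 -6
```

Flipping the bits of 5 lands on -6 rather than -5, which is exactly the "start counting at 1" quirk described above.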
Another way to calculate the negative is to keep using the ones as "markers", but treat the sign bit as marking its corresponding power of two as a negative value. This also illustrates a different way to understand what's going on in negative binary representations.
This way of calculating the decimal value might be a little easier when working with smaller decimal numbers, but the mental math gets a little more complicated when you're working with bigger ones:
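Here's a sketch of that alternative method (the helper name is mine): keep the ones as markers, but give the sign bit a negative weight equal to its power of two:

```python
def signed_value(bit_string):
    """Signed value using the 'negative weight' view of the sign bit."""
    width = len(bit_string)
    total = 0
    for position, bit in enumerate(reversed(bit_string)):
        if bit == "1":
            weight = 2 ** position
            if position == width - 1:   # the sign bit contributes a negative value
                weight = -weight
            total += weight
    return total

print(signed_value("1011"))   # -8 + 2 + 1 = -5
print(signed_value("0101"))   #  4 + 1     =  5
```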
The range of positive decimal numbers that can be stored in a bit integer of any size is shortened by the fact that the first bit is used to denote sign. This means that, in the case of a 32-bit signed integer, we are actually working with 31 value bits instead of 32, and that missing bit could have represented much larger values. In fact, this halves the range of positive integers we can work with compared to a 32-bit unsigned integer: that one extra bit would have doubled our maximum possible integer, and without it, we lose the ability to store as many positive integers.
On the other hand, we gain the ability to store a bunch of negative integers that we couldn't have before with an unsigned bit integer. In the end, the size of the range we work with is kept the same, but the range moves to account for being able to store both positive and negative numbers.
Because of this loss of a bit, our maximum is calculated by 2^(bits - 1) - 1, or, if working with 32-bit integers, 2^31 - 1.
I explained last time why we have to subtract the one: we're including the zero in the range, and without the subtraction we would need one extra bit to store that maximum number.
Our minimum in the range is the inverse, -2^(bits - 1), or, if working with 32-bit integers, -2^31. We don't subtract one for our minimum range because the zero is not included and we start counting from -1. This gives us that one extra negative number in our range that can be represented.
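Putting those formulas together, here is a small sketch (mine) that prints the unsigned and signed ranges for a few common widths:

```python
def unsigned_range(bits):
    """Unsigned range: 0 .. 2**bits - 1."""
    return 0, 2 ** bits - 1

def signed_range(bits):
    """Signed range: -2**(bits - 1) .. 2**(bits - 1) - 1."""
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

for width in (4, 8, 16, 32):
    print(width, unsigned_range(width), signed_range(width))

# For 32 bits: unsigned 0..4294967295, signed -2147483648..2147483647.
```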