So say we had a way to use successive addresses in EEPROM rather than the same 8 bit location all the time. For an application that only needs to store one 8 bit value, that would give us 1024 times the write endurance on an EEPROM that can hold 1024 bytes.
For example, normally we might write 0x34 to location 0x001 and then later write 0x8A to location 0x001 again, but with wear leveling we would write 0x34 to location 0x001 and then later write 0x8A to location 0x002, leaving location 0x001 alone for now. Once we got to the last location we would wrap around back to location 0x000.
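Just to make the idea concrete, something like this is what I have in mind, assuming the standard Arduino EEPROM library (finding the newest slot again after a reset is a separate problem, usually handled by storing a little sequence counter alongside each value):

#include <EEPROM.h>

unsigned int slot = 0;  // next location to write; wraps so every cell wears evenly

// Write one byte to the next location instead of the same one every time.
void writeLeveled(byte value) {
  EEPROM.update(slot, value);           // update() skips the write if the value is unchanged
  slot = (slot + 1) % EEPROM.length();  // wrap back around to 0x000 at the end
}

void setup() {
  writeLeveled(0x34);  // lands at 0x000
  writeLeveled(0x8A);  // lands at 0x001, 0x000 is left alone
}

void loop() {}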
Of course, wear leveling reduces the number of writes to any given location, so it will increase the overall life of the EEPROM. Most data indicates that the specified 100,000 write life of the EEPROM cells is very conservative.
The question is really about a low write frequency application, where the writes might only happen 16 times in one day and then not at all for another 7 days. On the other hand, if wear leveling does any good then I might increase the write frequency, which would improve the program somewhat. So it's partly about what I can get away with, not what else I can do about it.
The reason for the question itself is that not all EEPROMs are equal. Some write to blocks of bits rather than just one 8 bit section. For example, some write 64 bits at once even if we only tell them to write one 8 bit section. That means that if we write to location 0x02, it will write 0x02 but will also rewrite 0x00 through 0x07, so wear leveling on a byte by byte basis is the wrong way to do it. At best we could do it on an 8 byte basis, which would still improve the endurance but only be 1/8 as effective, unless we buffer 7 values and only write on the 8th occurrence. That would still help, though. If it worked on a 16 byte block we'd have to resort to a 16 byte block algorithm in one way or another.
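If it really did commit 64 bits (8 bytes) at a time, the same trick could be done at block granularity. Something like this rough sketch is what I mean (the 8 byte page size here is purely an assumption, not something taken from a data sheet):

#include <EEPROM.h>

const uint8_t PAGE_SIZE = 8;   // assumed block size, not confirmed by any data sheet

uint8_t  buf[PAGE_SIZE];       // collect values in RAM first
uint8_t  filled = 0;           // how many are buffered so far
uint16_t page   = 0;           // which block gets the next commit

// Buffer each value and only touch the EEPROM once every PAGE_SIZE values,
// putting each commit in the next block so the wear gets spread around.
void logValue(uint8_t v) {
  buf[filled++] = v;
  if (filled == PAGE_SIZE) {
    for (uint8_t i = 0; i < PAGE_SIZE; i++) {
      EEPROM.update(page * PAGE_SIZE + i, buf[i]);
    }
    filled = 0;
    page = (page + 1) % (EEPROM.length() / PAGE_SIZE);
  }
}

void setup() {}
void loop() {}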
I tend to think the EEPROM in the Uno or Nano probably works on an 8 bit block, so one byte per block, but I have no data to back this up. If this is true, I would not need any other type of memory, although I know there are better ones out there. It could even be different on other boards that have more EEPROM, such as the Mega. The larger the EEPROM, the more likely it probably is to use multi byte block writes rather than single byte ones, in order to speed the write process up for the user.
FRAM eh? That's those little ferrite cores that you thread copper wire through to store data as magnetic states, right? The kind they used on the moon lander and early shuttle missions? It's supposed to be impervious to the high radiation you would find in space, unlike most chips. Maybe I'll get me one.
Cattledog:
That quote may or may not mean that only one byte gets written, but I tend to think it does mean that. I can write one byte to my flash drive too, but it might actually write 64k bits just for that one byte.
Shawnlg:
Those little ferrite cores are ferromagnetic, a FRAM is ferroelectric. Slightly different operating principles.
I used to have a ferromagnetic core that held about 32 bytes or so. The core was about 5 inches by 5 inches and the PC board with the drive electronics was about 12 inches long by maybe 8 inches wide. That's pretty big for 32 bytes.
Yes, the little 1/8 inch ferrite cores had two very thin copper wires running through the center. When both wires passed current, the core would flip its state. Sense amplifiers were used to detect any pulse that would indicate the tiny core had flipped its magnetic state. Pretty cool, but kinda big for what it did.
MrAl:
Shawnlg:
Those little ferrite cores are ferromagnetic, a FRAM is ferroelectric. Slightly different operating principles.
I used to have a ferromagnetic core that held about 32 bytes or so. The core was about 5 inches by 5 inches and the PC board with the drive electronics was about 12 inches long by maybe 8 inches wide. That's pretty big for 32 bytes.
Going back in memory to the early to mid 70's, the cores were arranged on a grid, say N x M.
If you want to write a '1' to one bit, you energize the Nth column and the Mth row, each with half the current it takes to set the state of a core. That individual core then gets the full sum of current, so its state changes to '1' unless it was already a '1', in which case it does not change. To write a '0', use negative currents.
To read, do the same thing, but after the current pulses have ended, check the sense amplifiers to see if any of them picked up a pulse. If they sensed a pulse, the core changed state, so it was holding a '0', and the '0' then has to be written back to that core. If there was no pulse, no extra write is needed and it must have been a '1'.
To do this with an Arduino would mean addressing the matrix with I/O lines where the half current is set somehow, maybe with resistors. For a 4x4 core matrix we'd need 8 I/O lines. We'd also have to be able to sense the pulse, which may be very fast and of low level, so we'd need sense amplifiers too, and if the pulse was too fast we might need an external latch to catch it, then reset the latch when done.
So it would take a little doing but is most likely possible.
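For the single core proof of concept, the sketch below is roughly how I picture driving it. Every pin number is made up, and the half current drivers, the sense amplifier and the external latch are all assumed to be outside hardware, so take it as a rough outline only:

// One core at the crossing of one X line and one Y line. All pin numbers,
// the half current drivers, the sense amp and the latch are assumptions.
const int X_DRIVE   = 2;   // column driver enable (half current through a resistor)
const int Y_DRIVE   = 3;   // row driver enable (half current through a resistor)
const int POLARITY  = 4;   // current direction: HIGH drives toward '1', LOW toward '0'
const int SENSE_IN  = 5;   // output of the external sense amp latch
const int SENSE_RST = 6;   // clears that latch before a read

void pulseCore(bool toOne) {
  digitalWrite(POLARITY, toOne ? HIGH : LOW);
  digitalWrite(X_DRIVE, HIGH);   // each line alone is only half current,
  digitalWrite(Y_DRIVE, HIGH);   // so only the selected core sees the full sum
  delayMicroseconds(5);          // pulse width is a guess
  digitalWrite(X_DRIVE, LOW);
  digitalWrite(Y_DRIVE, LOW);
}

bool readCore() {
  digitalWrite(SENSE_RST, HIGH);          // clear the latch
  digitalWrite(SENSE_RST, LOW);
  pulseCore(true);                        // destructive read: drive toward '1'
  bool flipped = digitalRead(SENSE_IN);   // a sensed pulse means it flipped, so it held '0'
  if (flipped) pulseCore(false);          // write the '0' back
  return !flipped;                        // no pulse means it was already a '1'
}

void setup() {
  pinMode(X_DRIVE, OUTPUT);
  pinMode(Y_DRIVE, OUTPUT);
  pinMode(POLARITY, OUTPUT);
  pinMode(SENSE_RST, OUTPUT);
  pinMode(SENSE_IN, INPUT);
  Serial.begin(9600);
  pulseCore(true);                        // store a '1'
  Serial.println(readCore() ? "core held a 1" : "core held a 0");
}

void loop() {}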
I don't have that core anymore, but I would imagine we could find tiny cores somewhere on the web and make our own matrix with very thin magnet wire (like 32 gauge perhaps).
For proof of concept, all we would have to do is learn how to energize one single core and read back the stored state, so we'd only need one tiny toroid core to start with. I am not sure, however, if a little 'bead' core used for noise suppression would work. I think the core has to have a square hysteresis loop, and I am not sure the bead cores are like that. We'd have to look at a data sheet or something.
This would be a very interesting project though, because often we just need to store a tiny amount of data so that when the application starts up after a power cycle it can remember one little thing, like a single byte.
If we could find an easy enough way to do this, it would be a good thing to have, and I am sure other people would like to do it for their own projects too.
Oh yeah, another thing would be the level of current required. If the current had to be higher than maybe 20 mA, then we might have to use transistors or a transistor array to drive the cores. That would require 8 transistors for a 4x4 array, which is only 2 bytes of data.
The EEPROM is organized in pages, see Table 28-12 on page 285. When programming the EEPROM, the program data is latched into a page buffer. This allows one page of data to be programmed simultaneously.
Table 28-12 says that the EEPROM has a 4 byte page size. When I first used my Arduino EEPROM the datasheet seemed contradictory, so I proved to myself that I could write to individual EEPROM bytes. I don't know if behind-the-scenes somehow the AVR was read-modify-writing 4 physical bytes when I wrote one byte, but if I ever get an Arduino I don't care about I might write a sketch to purposely beat down one EEPROM location to see if it has any effect on the 3 bytes around it.
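Something like this is what I have in mind, completely untested, and the addresses and preload values are arbitrary (locations 0x00 to 0x03 get mixed-bit patterns, only 0x02 gets hammered, and the neighbors are checked every so often):

#include <EEPROM.h>

void setup() {
  Serial.begin(9600);
  EEPROM.write(0x00, 0xA5);   // preload the neighbors with mixed-bit patterns
  EEPROM.write(0x01, 0x5A);
  EEPROM.write(0x03, 0xA5);

  for (unsigned long n = 0; ; n++) {
    uint8_t v = (uint8_t)n;
    EEPROM.write(0x02, v);                 // beat down this one location only
    if (EEPROM.read(0x02) != v) {          // the hammered cell finally failed
      Serial.print(F("location 0x02 failed after "));
      Serial.println(n);
      break;
    }
    if ((n & 0x3FF) == 0 &&                // every 1024 writes, check the neighbors
        (EEPROM.read(0x00) != 0xA5 ||
         EEPROM.read(0x01) != 0x5A ||
         EEPROM.read(0x03) != 0xA5)) {
      Serial.print(F("a neighbor changed after "));
      Serial.println(n);
      break;
    }
  }
}

void loop() {}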
So the next question then is, are these contiguous bytes?
That is, are locations 0x000, 0x001, 0x002, and 0x003 one of those groups of 32 bits?
I would hope so, but have no way to prove this, yet. Maybe it is written down somewhere.
If you decide to burn one out for the sake of knowing for sure, you might want to check the whole EEPROM if possible, just to make sure you can see all the changed bits, if any. You might want to use test bit patterns like binary 01010101 and 10101010 for the testing, or something like that.
Would be interesting.
I have one Nano with the USB connector broken off, so if I decide to burn one maybe I'll try it on that. I would think it would still program using another Arduino as the programmer and the typical 4 wire connection to the pins of the Nano. I'll have to look that up again though, as I have not done it for probably two years now, having used the USB interface for almost everything.
CrossRoads:
"Do you understand what the datasheet means, when it says the EEPROM has a 4 byte page size in Table 28-12?"
I think that you can write 4 bytes at a time during one 3.3 ms write operation.
Well, I guess inspecting the assembly should tell me how it's done. Feel free to ignore this as I doubt many other people like inspecting assembly code (and it certainly goes against everything the Arduino stands for). I did comment it to make it easy to follow, however:
So, I'm actually more confused now. All of the real action occurs in __eewr_r18_m328p and it seems to follow the example on page 24 of the 328P datasheet. Reading the datasheet, however, it seems that the code should begin its 3.4 ms cycle at instruction 9ee and that we should get stuck in the wait loop at 9dc when the subsequent bytes are written.
Maybe what happens is that when it rewrites locations 0x001 to 0x003 it does not have to flip any bits, so there is no change. But then again, that doesn't quite make sense either, because I think when it writes data it first erases the old data and then writes the new data. Maybe if the data in locations 0x001 to 0x003 is already in the erased state, nothing changes. That would mean the data in those three locations would have to have some bits 1 and some bits 0 in order to make sure the test was effective.
I did not study the code in detail to make sure this is what was done but perhaps you can check.
It doesn't seem like it would make much sense to specify a page size if there were some algorithm that could check whether the other bytes really had to be changed, unless maybe we could save 4 bytes in the same write time, which would save time in some algorithms. That would mean larger data types (like type long) could write maybe four times faster.
But the test did prove one thing already, and that is that wear leveling would help, even if not on a single byte basis. If we had to rewrite a type long a lot, and only say one or two of them at a time, then we could use successive locations instead of the same location all the time.
If we used one variable of type long and got 1 million writes per location before failure, then with a 1024 byte memory (256 four byte slots) we'd get 256 million writes before failure, which is a lot. Of course if we had to store two longs we'd only get 128 million writes each before failure, but that's still good.
For my purpose, for example, I'd want to save the hours, minutes and seconds, which could be packed into one variable of type unsigned long.
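A rough sketch of what that could look like with the EEPROM library's put(), rotating the 4 byte value through the whole EEPROM (finding the most recent slot again after a power cycle is still left to do, maybe with a sequence number):

#include <EEPROM.h>

const uint8_t SLOT_SIZE = sizeof(unsigned long);  // 4 bytes per saved value
uint16_t slot = 0;                                 // next slot to use

// Pack hours, minutes and seconds into one unsigned long.
unsigned long packTime(uint8_t h, uint8_t m, uint8_t s) {
  return ((unsigned long)h << 16) | ((unsigned long)m << 8) | s;
}

// Save the packed time in the next 4 byte slot, wrapping around at the end.
void saveTime(uint8_t h, uint8_t m, uint8_t s) {
  EEPROM.put(slot * SLOT_SIZE, packTime(h, m, s));   // put() only rewrites bytes that changed
  slot = (slot + 1) % (EEPROM.length() / SLOT_SIZE); // 256 slots on a 1 KB EEPROM
}

void setup() {
  saveTime(12, 34, 56);
}

void loop() {}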