I'd like to get excited, but ...
... too little detail here and too far from off-the-shelf availability.
California-based Spin Transfer Technologies has claimed that recent results from its "precessional spin current technology" – three years in the making – will make it easier to write data to spin-transfer torque magnetic memory (STT-MRAM) and allow that data to be retained more than 10,000 times longer. STT claimed "the results confirm …
I'd love to think that one day there will be no difference between storage and memory and programmes will run from where they are and merely be added to the 'stack' of running applications - rather than having to copy them from one place to another into RAM etc*, but I am not sure if computer architectures and OSes are ready for this yet.
I mean, how much technology is ever really ready and finished when you buy it these days?
*Having said that, I wonder if having storage ('HDD/flash: a bit more permanent') versus RAM (dynamic, expected to change, easy to flush out, etc.) is maybe a good architecture which serves us well.
That's basically how the old cartridge-based games consoles used to work. All program code and static data in the cartridge was just mapped into the processor's address space and referenced as-is.
The biggest problem with mapping storage into the processor's address space would be performance. RAM has always been positioned very close to the processor. I wouldn't want to run executable code directly from a storage device - even a storage device directly on the PCI bus would be too slow. (I.e., you'd still have to copy code into the caches, and with the hundreds of processes typically running on a modern system you'd end up requiring much more cache memory to maintain performance - at which point you might as well just stick with RAM.)
If I may ask, do you mean physically close to the processor or architecturally?
I primarily meant architecturally, but bus lengths directly affect signal quality, power requirements, transmission speeds, and latency, so in that respect the shorter the better. (And to be honest, I don't think I've ever seen a motherboard which didn't have the RAM slots directly next to the processor(s).)
You can already get non-volatile DIMMs (NVDIMMs) - SSD-grade flash packaged as RAM DIMM sticks. The ones I've seen presented actually encrypt the contents in hardware as well.
Linux already has some support for this, depending on what you want to do.
They can be mounted either as volatile RAM sticks (actually persistent, but on reboot the previous encryption key is forgotten, effectively wiping them), or exposed as a disk that persists across reboots (a persistent RAM disk, so to speak).
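The persistent-disk mode behaves much like a memory-mapped file: stores made through the mapping survive a "reboot" (remapping). A minimal Python sketch of that semantics, using an ordinary file as a stand-in for a real pmem device:

```python
import mmap, os, tempfile

# Back the "persistent RAM" with an ordinary file; on real hardware this
# would be a DAX-mapped region of the NVDIMM instead.
path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # carve out one page

# First "boot": map the region and store through it like RAM.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"
    m.flush()  # on real pmem this would be a cache-line flush, not a page write
    m.close()

# Second "boot": remap and find the contents still there.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    print(m[0:5])  # b'hello'
    m.close()
```

On a real Linux pmem setup the file would live on a DAX-mounted filesystem, so loads and stores hit the NVDIMM directly with no page cache in between.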
The biggest question is write endurance. Current SSDs' biggest limitation is that cells stop working after so many writes, whereas applications assume RAM can be written an unlimited number of times. A naive global replacement of RAM with flash-based NVMe storage would soon wear the cells out. Hopefully this technology could be a complete replacement.
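This is why flash controllers do wear levelling: hot logical blocks are remapped across physical cells so no single cell absorbs all the traffic. A toy sketch of the idea (names and structure are illustrative, not any real flash translation layer):

```python
# Toy wear-levelling translation layer: every logical write is redirected
# to the least-worn free physical cell, spreading writes evenly.
class WearLeveller:
    def __init__(self, n_cells):
        self.wear = [0] * n_cells   # per-cell write counts
        self.data = [None] * n_cells
        self.map = {}               # logical block -> physical cell

    def write(self, logical, value):
        # Cells holding *other* logical blocks are off-limits; among the
        # rest, pick the one with the fewest lifetime writes.
        used = set(self.map.values()) - {self.map.get(logical)}
        free = [c for c in range(len(self.wear)) if c not in used]
        cell = min(free, key=lambda c: self.wear[c])
        self.map[logical] = cell
        self.data[cell] = value
        self.wear[cell] += 1

    def read(self, logical):
        return self.data[self.map[logical]]

wl = WearLeveller(4)
for i in range(100):
    wl.write(0, i)     # hammer a single logical block...
print(wl.read(0))      # 99
print(wl.wear)         # [25, 25, 25, 25] -- ...but wear is spread evenly
```

Without the remapping, those 100 writes would all land on one cell; with it, each of the four cells takes 25.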
Even then there is a useful distinction between persistent and non-persistent storage of content. Linkers may modify the in-memory image of the application to insert references etc. as appropriate; those changes shouldn't be written to the persistent copy of the application, as they may not be valid the next time it starts.
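This is essentially what private copy-on-write mappings give you today: the loader patches its in-memory view and the file on disk stays untouched. A sketch in Python, using `mmap.ACCESS_COPY` as a stand-in for a `MAP_PRIVATE` mapping (the file contents and the "relocation" are made up for illustration):

```python
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "app.bin")
with open(path, "wb") as f:
    f.write(b"CALL 0x0000")  # pretend this is an unrelocated code image

with open(path, "r+b") as f:
    # ACCESS_COPY = private, copy-on-write mapping (MAP_PRIVATE on Linux):
    # stores go to an anonymous copy, never back to the file.
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    m[5:11] = b"0x4f2c"      # "relocate" the call target in memory only
    print(m[:])              # b'CALL 0x4f2c' -- the patched in-memory image
    m.close()

with open(path, "rb") as f:
    print(f.read())          # b'CALL 0x0000' -- persistent copy untouched
```

A persistent-memory OS would want the same split: in-place execution for the read-only image, with runtime fix-ups diverted to a private, volatile copy.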
An OS and applications designed for persistent memory would be an interesting evolution; potentially it would allow 'instant-on' machines, as the state would automatically be preserved at power-off (assuming all components support it).
A single, unified, freely-rewritable storage space might not be such a good idea. For instance, what would happen when something went wrong with data in the OS working memory? At the moment, you can just reboot because it's the RAM contents that's wrong, not the disk. With a unified storage space, you'd have to reinstall the OS. Either that, or you'd need twice as much memory so that you can always have a "last known good" image.
You'd still just reboot -- or you may be able to get away with restarting a given service or component (or you would if anyone really cared about writing a truly modular OS anymore.../rant>).
The key here is working memory -- that contains current state data, which the OS should reset to reasonable start-up values at restart.
"A single, unified, freely-rewritable storage space might not be such a good idea. For instance, what would happen when something went wrong with data in the OS working memory?"
Even a unified memory architecture can use backups that can use things like rust to help maintain system integrity. It removes a degree of separation, yes, but there are ways around this.
> A single, unified, freely-rewritable storage space might not be such a good idea. For instance, what would happen when something went wrong with data in the OS working memory? At the moment, you can just reboot because it's the RAM contents that's wrong, not the disk. With a unified storage space, you'd have to reinstall the OS. Either that, or you'd need twice as much memory so that you can always have a "last known good" image.
Old minicomputers with core store were exactly like this. You could turn them off, turn them on, and they would continue running.
If the OS got corrupted, then you would toggle in a small bootstrap loader in binary via the front panel switches, which in turn would reload the OS from paper tape.
I am just old enough to have done this :-)
You wrote:
> I'd love to think that one day there will be no difference between storage
> and memory and programmes will run from where they are and merely
> be added to the 'stack' of running applications - rather than having to copy
> them from one place to another into RAM etc*, but I am not sure if computer
> architectures and OSes are ready for this yet.
Real computers always did this - those with a true computer architecture. Microprocessor-based computers have to copy data out of RAM into on-chip registers and, after processing is done, copy the results back. I learned assembly on the TI-99/4A, which was one of the only true computer-architecture microcomputers ever sold to the public. It used the TI TMS9900 chip. Its assembly language included the LWPI (Load Workspace Pointer Immediate) instruction, which does not exist in assembly languages for the Intel microprocessor lines, because those chips have no workspace sitting in RAM alongside programs and memory-mapped I/O; instead, the workspace is replaced by on-board registers.
Spin is a quantum property of electrons (and, of course, almost by definition nobody understands that). The spins of electrons trapped in a material affect its magnetic properties. If they are all aligned they turn it into a magnet. Align them a different way and the magnetic field changes.
As any mechanical engineer will tell you, to change the spin of anything you have to apply a spin force called torque. By applying torque directly to the electrons in a magnetic memory cell you can change their spin and hence its magnetic field. All you then need to do is sense the magnetic field to read its state, and you have a viable memory cell.
The neat thing about this trick is that it uses less energy and less space than other more blunt-force methods of changing the magnetic field.
There's a lot more underlying tech to make it actually work, but I hope this helps.
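As a pure cartoon of the write/read mechanism described above (nothing like the real device physics - the class, thresholds, and constants are all made up for illustration): a cell's state is a net spin orientation, a write applies small torque increments until the spin settles into the target orientation, and a read just senses the sign of the resulting field.

```python
import math

# Cartoon model of a spin-transfer-torque cell -- purely illustrative.
class ToySTTCell:
    def __init__(self):
        self.angle = 0.0  # 0 = spin "up" (bit 0), pi = spin "down" (bit 1)

    def write(self, bit):
        target = math.pi if bit else 0.0
        # Apply small torque increments toward the target orientation.
        while abs(self.angle - target) > 1e-3:
            self.angle += 0.1 * (target - self.angle)

    def read(self):
        # Sense the magnetic field: the sign of cos(angle) gives the bit,
        # without disturbing the stored state.
        return 0 if math.cos(self.angle) > 0 else 1

cell = ToySTTCell()
cell.write(1)
print(cell.read())  # 1
cell.write(0)
print(cell.read())  # 0
```

The point of the toy is just the separation the comment above describes: writing is an active torque process, while reading is a passive field measurement.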
I wonder how this handles writes bleeding into adjacent cells - the Rowhammer problem. Magnetic coupling is a bit harder to stop than capacitive coupling: you need distance, ferrous shielding, a shorting shield, or adjacent balanced opposing currents. All of those seem like they'd be incredibly bulky for memory storage. Forcing writes to happen in a large organised block could solve the problem too, but then you're driving latency up.
Unfortunately, the linked PDF wasn't quite about using STT for RAM.