fuck HP, here's why
They've delayed the memristor for business reasons (see Hynix). Fuck 'em. And this confirms no consumer memristors before 2016. I hope Crossbar eats HP's lunch.
HP may have found a way to save itself from oblivion, but apparently the only way to be sure is to throw three quarters of its research team at an ambitious new product suite based on a much-hyped yet troublesome technology. The beleaguered IT giant plans to rejuvenate itself with a set of advanced technologies that, when …
"They've delayed the memristor for business reasons (see Hynix).
Those bastards. And I'll bet they're the reason practical fusion is still thirty years in the future too.
Or... sometimes new technology is hard to launch.
On a more practical matter, the new-everything-running-together claim is a little over the top. I'd be happy enough with memristor technology working on a FreeBSD-based machine.
I certainly do hope that HP does manage to make the memristor manufacturable.
Some technologies once considered exciting have disappeared because they were inherently unsuited to manufacture on monolithic integrated circuits. The tunnel diode is an example: you can make lots of discrete tunnel diodes and pick the good ones, but the odds of every tunnel diode on a chip being good are basically zero. So even though it was touted as a way to make really fast computers, no microprocessor ever used them.
Discrete memristors, though, would be just a toy, not competitive with conventional CMOS at all.
At HP, they should realize that saying failure is not an option doesn't make it so, though. Nobody ever beat the obstacles to making tunnel diodes repeatably on a chip, and there's no guarantee the same doesn't apply to the memristor. Or it could be that HP knows what technical challenges remain and is in a position to beat them.
I hope they'll succeed, and even more, I hope they haven't let wishful thinking take over.
HP would be well advised to recall how committing themselves to a non-standard processor architecture has worked out for them.
A new architecture. A new OS. New code. New everything. Wow, that sounds attractive. Maybe some of the air-conditioning will still be applicable (or at least at a lower setting). If I were an IT VP, I sure would be titillated about laying my career on the line for the first one. And, of course, converting all my applications and infrastructure. I would be mad not to pursue this. If it worked, in ten years or so, I could demand megabucks from any corporation I choose as corporate computer systems architect. It should cause new users (or old ones screwed by the last iteration) to leap out of the woodwork. Certainly the big users are holding their breath.
I would be willing to address their technical direction for a mere 10% of the total salaries currently being pissed away on their corporate direction. They can keep consuming whatever substances they now use to see the future, but not have to come to the office.
Now, if they could print all of this on one of their lasers, that would be impressive. A financial disaster, but impressive.
"HP would be well advised to recall how committing themselves to a non-standard processor architecture has worked out for them."
Yeah, over 20 years of highly lucrative sales and support contracts is a real -- oh, you don't mean PA-RISC?
Remember, not everything happened within the last fifteen years, and every processor architecture is non-standard at the beginning...
Also, Itanic (which I'm assuming is what the AC is referencing when reminding HP to watch its memory) was competing against x86-64, which was 'good enough' for most people and easier to work with, being fairly familiar.
At the moment, if The Machine is significantly faster than x86-64 or ARM64 in the relevant sectors, then that might give them a usable edge, even if it's costly. If it's buggy, not as fast as expected, and hellishly expensive, it'll likely remain a niche in the HPC world. People may have short memories, but they also have fixed period financials to worry about. And I can't see HP giving this stuff away for free.
That is, of course, if it's not still just 'effective' vapourware - that is, has anyone seen one of these things running? Are the numbers they are showing actual benchmarks, or projections?
Steven R
(As a consultant/contractor, not an employee) I'm happy to see them investing in the future. It seemed every time I was back the mood was worse; employees had become used to the constant reorgs and layoffs, much as people living in a war zone become used to the sound of gunfire and explosions.
I hope it works out, not just because it'll make hardware a lot more interesting than "watch us slowly close in on PC power in a cell phone" but because it'll revitalize a once-great company that is today but a shell of its former self.
I hope too.
If they can build nonvolatile memory with picosecond latency, they will also have to reinvent CPUs, because a disproportionate share of die space is currently sacrificed to dealing with memory latency (the three levels of cache, deep pipelines, cache-coherence protocols, etc.). Luckily there was a time when processor speed was the bottleneck, not memory (fond memories of the Z80...), so the knowledge is there, somewhere.
Also, for an operating system to make optimal use of this new speed, it would have to be written specifically to optimize for low CPU utilisation (because that's where the bottleneck will be once again) rather than for cache-friendly memory access. For what we currently call modern software, this is actually a large change in direction. Given the very ambitious plan, I hope they won't write a whole new OS from scratch, and will instead improve Linux. For the same reason, I pray they do not abort this project prematurely.
Which they will be tempted to do, since the planned delivery dates seem a little unrealistic to me.
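The latency-hiding point can be illustrated with a toy pointer-chasing loop: every hop is a dependent load, so the CPU can't prefetch or pipeline around it. In Python the interpreter overhead swamps most of the effect (the same pattern in C shows shuffled, cache-hostile chains running many times slower than sequential ones), so treat this purely as a sketch of the access pattern; the sizes are arbitrary.

```python
import random
import time

def make_chain(n, shuffled):
    """Build a pointer-chasing chain: nxt[i] is the next index to visit.

    The chain is a single cycle over all n slots; when shuffled, successive
    hops land at effectively random addresses, defeating any cache/prefetcher.
    """
    order = list(range(n))
    if shuffled:
        random.shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    return nxt

def chase(nxt, steps):
    """Follow the chain for `steps` hops; each hop depends on the previous load."""
    i = 0
    for _ in range(steps):
        i = nxt[i]
    return i

if __name__ == "__main__":
    n = 1 << 20  # ~1M entries; far larger than any CPU cache in a compiled language
    for name, shuffled in (("sequential", False), ("shuffled", True)):
        chain = make_chain(n, shuffled)
        t0 = time.perf_counter()
        chase(chain, n)
        print(f"{name}: {time.perf_counter() - t0:.3f}s")
```

The die area the parent comment mentions (caches, out-of-order machinery) exists precisely to make the shuffled case less catastrophic; memory as fast as the CPU would make much of it unnecessary.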
"optimize for low CPU utilisation (because this is where bottleneck will be once again)"
And, I expect, the lack of differentiation between RAM and HDD/SSD. It sounds like it's all one big chunk, managed by the OS. I'm not sure how any existing OS would cope with that.
Maybe it's time for a quantum leap in system design instead of the step changes we're used to.
Did they have a dangerous looking guy in a black suit hanging around the keynote?
Part of the design brief was that the Virtual Address eXtension architecture would stay relevant for something like 20 years as the implementation technology upgraded from MSI TTL and MSI ECL through gradually improving in-house chip processes.
PA-RISC seems to be the last integrated hardware/OS/database combo still in service outside the IBM iSeries (and PA-RISC was designed to be backwards compatible with the earlier HP3000 boxes).
I will wish them well. We'll see.
You only have to look at the unique selling points to realise why there's been a delay. 160PB per rack, huge datasets, massive processing power...
All the first units are being shipped to the NSA as we speak. Once NSA have enough to collect and monitor all the RAM/disc of every single PC/Mobile on the planet in real-time, they will slowly be available on general release (about 2018 onwards).
There are a lot of papers out there detailing the obstacles to a competitive crosspoint resistive memory. Without getting too technical, let's just say that it is really hard to make a large array of resistive memory elements where you can isolate one resistor and tell if it is high or low resistance. All the solutions to this problem that I've seen to date either add significant time to the read/write process (lots of nanoseconds, instead of the picoseconds HP is claiming), or they take up more room on the die. There are many manufacturing obstacles too.
This all leads me to conclude that we are (many) years away from having a resistive memory that is competitive with DRAM for speed, and flash for cost.
Maybe HP will be the first to figure it out, but Intel and IBM and others are dumping a lot of money into similar research too.
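To put a number on the isolation problem described above, here's a first-order sneak-path model (a common textbook approximation, not HP's actual cell design). In a selectorless n x n crossbar, the worst case for reading one cell has every unselected cell in its low-resistance state, giving roughly (n-1)^2 parallel three-cell sneak paths in parallel with the cell you're trying to sense. The resistance values below are made-up but plausible orders of magnitude.

```python
def read_margin(n, r_lrs, r_hrs):
    """Worst-case read margin for an n x n selectorless resistive crossbar.

    Model: all unselected cells are in the low-resistance state (r_lrs), so
    each sneak path is three LRS cells in series, and (n - 1)**2 such paths
    sit in parallel with the selected cell. Returns the ratio of apparent
    resistance with the cell in HRS vs LRS; 1.0 means the states are
    indistinguishable.
    """
    r_sneak = 3 * r_lrs / (n - 1) ** 2  # all sneak paths combined in parallel

    def parallel(a, b):
        return a * b / (a + b)

    r_read_lrs = parallel(r_lrs, r_sneak)  # apparent resistance, cell in LRS
    r_read_hrs = parallel(r_hrs, r_sneak)  # apparent resistance, cell in HRS
    return r_read_hrs / r_read_lrs

if __name__ == "__main__":
    for n in (8, 64, 1024):
        print(f"{n}x{n}: margin {read_margin(n, r_lrs=1e4, r_hrs=1e6):.4f}")
```

The margin collapses toward 1.0 as the array grows, which is why real designs add a selector device or diode per cell, or accept slower, more elaborate read schemes — exactly the time/area trade-off the parent comment describes.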
The reason they need to build a new OS has relatively little to do with latency. For a memristor to reach its full potential, you'd use it in place of both DRAM and fixed storage. That means the distinction between disc and RAM is gone.
There's no such thing as a reboot anymore. Power down the machine and all you do is pause it.
There's no intrinsic difference between a file on disc and a process's working memory anymore.
There's no performance advantage to loading / unloading / copying things around.
What does process isolation mean when the disk and the memory are co-extensive?
etc.
While you could gain some big advantages by making a memristor disk and using it in place of a conventional hard disc, the real wins would come from building a machine to the strengths of the technology.
It's particularly interesting because early ideas around computation (such as the universal Turing machine) didn't make a distinction between fixed and volatile storage.
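The closest approximation to "power-off is just a pause" on today's hardware is a memory-mapped file, where ordinary loads and stores hit durable state. A minimal sketch in Python (the file name is made up; on real nonvolatile memory the flush would be a cache-line operation, not a kernel call):

```python
import mmap
import os
import struct

PATH = "counter.bin"  # hypothetical backing file standing in for persistent memory

# First run only: size the file for a single 64-bit counter.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)
    (value,) = struct.unpack_from("<Q", mem, 0)  # the "load" sees whatever the last run stored
    print("counter was", value)
    struct.pack_into("<Q", mem, 0, value + 1)    # an ordinary store updates durable state
    mem.flush()  # on real NVM this would be a cache flush, not a page writeback
    mem.close()
```

Run it twice and the counter carries over between processes with no explicit file I/O in between, which is the kind of programming model a memristor-only machine would make universal.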
Fascinating stuff, but mega high risk. It's a total moonshot and I'm really glad that somebody still has ambition and optimism.
"... for HP's "Machine OS" to mean anything, it has to fully interoperate with legacy software"
Jack is too young to remember Ken Olsen (DEC) and Edson de Castro (Data General) bravely making the same arguments while bashing the early microprocessor systems. A few years later Gary Kildall repeated the effort while attacking MSDOS. Anybody still running RSX-11M, RSTS, RDOS, or CP/M?
The norm in the tech world is the conservative but slow slide to irrelevance: Digital Equipment, Digital Research, Kodak, or Microsoft.
HP may or may not prevail with Machine OS, but it will not be due to lack of "compatibility" with some legacy code. Meg Whitman is swinging for the bleachers in a manner almost unknown for large public companies.
Bet your company, bet your job, bet your fortune, that's pretty strong stuff.