2037 makes sense
Neatly avoids the 2038 time problem. Oracle won't be fixing that!
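For the curious, the 2038 problem is just 32-bit signed time_t arithmetic overflowing. A quick illustrative check in Python (mine, not from the post):

```python
import datetime

# A 32-bit signed time_t counts seconds from the Unix epoch and
# tops out at 2**31 - 1. Adding that many seconds to the epoch
# shows exactly when the counter wraps.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
overflow = epoch + datetime.timedelta(seconds=2**31 - 1)
# overflow is 2038-01-19 03:14:07 UTC — one second later, wraparound.
```

So a product with support ending in 2037 clears the cliff with a year to spare.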
The concept of Optane as a second layer of memory had been tried before, and has failed each time it has been reinvented. It had only niche appeal, for a number of reasons.
And remember - it was cleared every time you rebooted. No storing stuff across boots. That could be a security issue, and you might not reboot the same OS and application right away, say if there was a failure in the node. You could end up with obsolete data you'd have to clear anyway.
It's both the idea and the implementation that have to work. Not the case here.
Optane wasn't the first, or even the second, try at adding a memory hierarchy. The IBM 360/91 (I think it was) had the memory channel in the late 1960's, the IBM 3081 had expanded memory in the early 1980's. Not a lot of good use cases then either.
Interesting technology, but Optane always seemed like a technology solution in search of a problem.
CXL is a different solution, for a different set of problems. It should do better, but it's likely to be some time before it makes it to on-premise data centers.
But when you've used a certain paradigm long enough, you'll find there are problems that are hard to solve. So there's a stampede to a "new" paradigm that doesn't have those problems. Then, when you discover that approach has problems, head over to the "newer" one.
Rinse and repeat.
Centralize, decentralize. Cloud, on-prem. Big systems, groups of smaller systems.
Seems to be about 3 paradigms that run in a circle. It's fun to watch something from the 60's or 70's be rediscovered. (Server class memory is one example.)
The new approach is the solution to all problems, until it isn't.
From an article on Seymour Cray:
For his next machine, the Cray-2, he intended to shift from the usual silicon chips to faster, but unproven, gallium arsenide technology. Manufacturing difficulties forced him back to silicon, and the Cray-2 arrived, delayed, in 1985. Nevertheless, it broke the giga-flop (one thousand Mflops) barrier. This was the machine that was cooled by being completely immersed in an inert fluorocarbon liquid, the same liquid used as artificial human blood. Ever stylish, Cray included a decorative fountain in the coolant circulation system.
Will we get a fountain in these new systems?
When W10 was announced in 2015, as noted, you could install it over Windows 7 and 8 systems, supporting hardware going back to at least 2000. If W11 supported the same hardware as W10, they'd have to support these old systems until maybe 2030 or later. Support 30-year-old hardware? I'm not sure I know of any OS that does that, certainly not a commercial one.
So I understand why they have the hardware requirements, looking ahead to future support until W11 end of support, but I do suspect they could have been a bit more liberal with where they placed the line. I'm sure they'd lose too much face if they went back and changed it now.
Looking back from 2030, a line at 2018 hardware might make sense. But looking from 2022, it would certainly seem silly to put the line there. Be interesting to see MS try to explain this, though.
I always figured that the short list of recent hardware for Windows 11 was due to how long Windows 10 has been around since mid-2015, and will be supported until 2025. Windows 11, released in late 2021, would likely be supported until 2031. (Just in time to handily miss the 2038 date problem.)
I suspect they don't want to support 2010 hardware until 2031. Can't say I blame them; most OS vendors do similar things, just not as poorly.
I've got 47 years of good, widely varied experience, but I'm not qualified for anything because I don't have the proper degree. I've known a lot of good people with physics, math, and music degrees, so they get passed up too. (Mine's in philosophy.) And, of course, I'm too old to actually know anything. Very disappointing.
The two compact objects will spiral into each other, most likely creating a black hole if there wasn't one there before, or making the existing one larger. This will generate gravitational waves that LIGO and Virgo will detect if the two objects are massive enough.
There's more to come!
Say you're trying to reduce data into a 1024 x 1280 picture. Using two variables, Monte Carlo generates a pixel for each randomly chosen variable pair. Eventually you'd have enough of the image to tentatively identify the picture. QC would just produce the whole picture.
Now assume you've got 50,000 variables. That takes a lot of computing to get any kind of (multi-dimensional) picture. Certainly would make QC popular, if it could ever manage that many variables.
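A rough sketch of the sampling argument (the dimensions are from the post; the stand-in pixel function and names are mine, purely for illustration):

```python
import random

# Hypothetical "image" being reduced from data: each (x, y) pair
# of the two variables maps to a pixel value.
WIDTH, HEIGHT = 1280, 1024

def pixel_value(x, y):
    # Stand-in for the underlying data; any deterministic
    # function of the two variables would do here.
    return (x ^ y) & 0xFF

def monte_carlo_sample(n_samples, seed=0):
    """Evaluate randomly chosen variable pairs, one pixel each."""
    rng = random.Random(seed)
    sampled = {}
    for _ in range(n_samples):
        x = rng.randrange(WIDTH)
        y = rng.randrange(HEIGHT)
        sampled[(x, y)] = pixel_value(x, y)
    return sampled

# Even after 100,000 random samples, only a small fraction of the
# 1,310,720 pixels are filled in; duplicates waste samples, too.
samples = monte_carlo_sample(100_000)
coverage = len(samples) / (WIDTH * HEIGHT)
```

With two variables the partial picture eventually becomes recognizable; with 50,000 variables the "picture" lives in a space far too big to ever sample densely, which is the claimed opening for QC.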
Maybe QC In Spaaaace to keep it cool!
Again:
Great article, Rupert Goodwins. Hit all the right spots, spot on. Bravo! Encore ! More, More, More! :-)
I remember working for Ingres in the mid-80's, when Ingres and Oracle were the same size. Oracle had a Soundex function that would search for words that sounded like others. There was almost never any use for it, but Ingres didn't have it. Thus Oracle told all the prospects that you had to have it, and so of course you couldn't possibly use Ingres.
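For anyone curious, the classic Soundex coding fits in a few lines of Python. This is a sketch of the standard algorithm, not Oracle's actual implementation:

```python
def soundex(name):
    """Classic Soundex: first letter plus three digits coding the
    remaining consonants; similar-sounding letters share a digit."""
    mapping = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"),
                           ("DT", "3"), ("L", "4"),
                           ("MN", "5"), ("R", "6")]:
        for ch in letters:
            mapping[ch] = digit
    name = name.upper()
    result = name[0]
    prev = mapping.get(name[0], "")
    for ch in name[1:]:
        code = mapping.get(ch, "")
        if code and code != prev:
            result += code
        if ch not in "HW":  # H and W do not separate duplicate codes
            prev = code
    return (result + "000")[:4]

# "Robert" and "Rupert" sound alike, so both code to "R163".
```

Handy for fuzzy name matching, but as the anecdote suggests, rarely a deciding feature.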
Some things don't change, I guess.
I worked for a company in the 70's and 80's where the QA group returned a product release literally 10 times due to all the bugs. The development managers were livid, of course, so they got the SW Dev VP to declare that the QA group could not run any tests that the developers themselves had not run. Problem solved!
On one of our Windows 10 systems, running Cisco AMP, after the Tuesday patches go on we get a yellow '!' on the Windows Defender shield icon. When you look, it says "Virus and threat protection status unavailable, open Cisco AMP for Endpoints for information." The link it provides is to the AMP Connector, which won't open. Opening AMP directly, it thinks things are fine. I'd turn on AMP Connector debugging if I could find where its log file is!
Anyway, something seems off in the Connector interface.
We're seeing the reinvention of the divide and conquer approach: X was too big and too slow to provision, so we're going with smaller systems that are much more agile and that "anyone" can manage. Of course there are more of them, so maintenance time and effort is multiplied (1400 security patch applications, anyone?) and we need more people to do it. After a while, this gets to be a problem. Wait - look! We can consolidate all these little servers into a few big ones. Problem solved!
There's a time-honored tradition of stampeding over to a "new" approach that solves your current issues, without any insight (or memory) that the new approach has its issues, too. Too hard to figure out how to solve your current issues, so just follow the PR/hype and go with something different.
Fun to watch this on its second or third go-around.
I've asked vendors why, and they say they're protecting themselves against users who don't know how to use their product, run a benchmark, or tune it properly. They publish their own benchmarks, because they know how to use their products. Of course, they can't publish benchmarks of their competitors' products, but you can bet they run them (even if the EULA says they can't).
So when we see an ad about performance, it refers to a competitor's published benchmark.
Of course, unless you run benchmarks as your company's workload, a benchmark isn't really all that useful anyway.
Sure sounds similar. When you wanted to attach non-Bell equipment to the network, you had to have their adapter (DAA) so you wouldn't damage the network. They made that stick for about 8-10 years, as I recall.
https://books.google.com/books?id=QLZG2v-kw7sC&pg=PA9&lpg=PA9&dq=telephone+network+daa&source=bl&ots=6LjWKom2st&sig=mS4vsvR6dy2lKJBJwnvTrIiksoA&hl=en&sa=X&ved=0ahUKEwj4mZXe3YLLAhVN-mMKHdINBwoQ6AEINjAE#v=onepage&q=telephone%20network%20daa&f=false
With the impact of cloud on storage products, there's the related impact on on-premise servers and server networking, and the follow-on effects to systems and reseller staff. (Personal/workstation device type networking will still be important!)
Sounds like a major dislocation for folks working today, especially those just starting: many of their jobs may not be needed over a relatively short horizon. A look at that would make a very interesting article.
Many years ago, I worked for a major computer company. The manager at the east coast support center decided his staff (never seen by customers) needed to look more professional, and sent a memo declaring that everyone must wear a tie.
And they did.
You never saw so many spiffy headbands, belts, armbands, and so on. Needless to say the policy didn't last very long.