* Posts by Roo

1686 publicly visible posts • joined 21 Sep 2010

GCC 14 dropping IA64 support is final nail in the coffin for Itanium architecture

Roo

Re: Rolling rolling rolling ... rawhide

"While that's something everyone accepts and "knows" now, it was far from clear in the early 90s when HP began development on PA-RISC 3.0 which became Itanium."

Well, not everyone is a computer architect or designs microprocessor front-ends for a living ... However within that community dynamic scheduling was already recognized as a win by the 70s; perhaps the most famous example of it was the CDC 6600 (released in 1964). Superscalar processors were old hat by the 90s - there was plenty of data out there from the boat-anchor machines (eg: CDC 6600, IBM S/360/370/etc) to show the benefits of dynamic scheduling.

By the late 80s/early 90s superscalar microprocessors had just started appearing and transistor budgets were sky-rocketing year on year with no plateau in sight. So from my PoV in the 90s HP/Intel & EPIC (1997) were very much swimming against the tide - everyone else was going superscalar: AMD 29K (1990), Motorola 88110 (1991), Alpha EV4 (1992), POWER2 (1994), HyperSPARC (1993), MIPS R8000 (1994), AMD K5 (1996), and even the Intel P6 (1995)... Even INMOS had a superscalar hack called the "Grouper" designed for the T9000 in 1990.

I like oddball CPUs and would not have begrudged EPIC some success if it had been a good fit for the problem - but that wasn't what I was seeing before or after its release.

Roo
Windows

The IBM BlueGene processors looked to be better balanced than Cell processors to me - and they had FMA with direct pipes to memory (bypassing cache). Their power efficiency was exceptionally good, and they got much closer to hitting their theoretical peak performance than pretty much anything else at the time.

Roo
Windows

Re: Rolling rolling rolling ... rawhide

I had forgotten that the initial "Yamhills" shipped to consumers were crippled - although I'm pretty sure the Xeon versions were shipped 64bit from day 1.

Timeline wise AMD published the first specs of AMD64 back in 1999 and released their first Opteron in 2003. References to Intel's Yamhill seemed to pop up in early 2002 - at which time AMD were publicly calling out Intel to jump on to their AMD64 bandwagon (and Microsoft had announced they were developing for the AMD64 - so presumably they already had AMD silicon samples to use and Intel could have seen how much of a threat the AMD silicon was).

All that said, there is evidence of a predecessor to Yamhill, corroborated by whispers on USENET and by at least one AMD statement from early 2002 pointing out that their proposed 64bit chips would support running 32bit legacy code.

Roo
Windows

Re: Rolling rolling rolling ... rawhide

I don't buy that story at all. What I do know is that, before EPIC was released, Intel's Andy Glew did reveal that Intel had done some fairly well advanced studies into a 64bit x86, but this avenue was shelved in favor of EPIC, which was seen as having more growth potential. From what I recall of Glew's posts, Intel were keeping the same kind of register count but widening the registers. AMD took a different route which allowed for more growth - increasing the register count and binning the 8087-shaped barnacle - thus we have monstrously quick AMD64 chips that can pretend to be a Pentium Pro on the rare occasion that they run 32bit code.

Roo
Windows

Re: Rolling rolling rolling ... rawhide

EPIC was doomed before it left the drawing board for three reasons - all of which were already apparent from the short life of Multiflow Computer Inc - and these reasons were (IIRC) articulated by a number of well-seasoned computer architects on USENET (which back then included folks who worked on the DEC Alpha, ARM, PA-RISC, MIPS, SPARC, POWER, x86 (incl x86-64) - see Andy Glew's posts on the topic in particular).

1) Dynamic Scheduling is always better than static scheduling because a) it adapts to what is actually happening in real time and b) it can work in conjunction with static scheduling.

2) Static scheduling requires large register files and wide datapaths in whatever fabrication process you choose. This means that a VLIW-style design will inevitably have a lower clock rate in a given implementation technology relative to, say, its RISC peers. The only way a VLIW machine can compensate for this is to widen the datapaths - which again forces the clock to be slower (to manage skew), thus compounding the problem.

3) Advances in hardware fabrication & implementation vastly outpaced any benefits attributable to VLIW.

Full disclosure - I liked the idea of VLIW, it's just that whenever it was implemented in hardware it ended up being slow and more complicated than whatever else was current.

On the plus side Multiflow's efforts were not totally wasted: they had to write a very good compiler, and they did. It was very well regarded and was licensed by competitors who promptly put it to work compiling code for RISC chips (during the peak period of the compiler wars)... The big register files are still with us, but they are implementation details rather than architectural details, so modern processors can (and do) take advantage of wide issue and huge register files when and where it suits the implementation...

I think the real point of EPIC was to sink the competitors but leave the x86 alive at the low end, which didn't pan out because AMD spoilt the party with AMD64.

IIRC Linus Torvalds spent some time working for Transmeta on a VLIW architecture - which also sank without (much) trace...

Rust rustles up fix for 10/10 critical command injection bug on Windows in std lib

Roo
Windows

Impressed that some kind soul put that page together. Really glad I left all that behind years ago tbh.

Roo
Windows

Re: Argh

The Manglement have been chugging the Redmond Koolaid for over 40 years solid, idiocy is one of their minor problems at this point. :)

How this open source LLM chatbot runner hit the gas on x86, Arm CPUs

Roo
Windows

Shock and Awe

"I believe the trick with CPU math kernels is exploiting instruction level parallelism with fewer memory references."

Hurrah for folks who understand machinery !

Somewhat shocked (and impressed) that Tunney managed such a big speed advantage over Intel's MKL, but such a margin does raise the question "WTF are Intel playing at?"...

Good news: HMRC offers a Linux version of Basic PAYE Tools. Bad news: It broke

Roo
Windows

Re: It's 2024

Fair play. Have been there myself... Currently *retro-fitting* unit tests to ~250,000 lines of C++ written in layers with the structure and aesthetics of a demolished skyscraper - complete with mangled corpses, just to meet the "must have at least 70% coverage" diktat. There are some silver linings, for example I am finding a lot of unused and unreachable code that would have been shown up years ago if they had any tests at all, so quite a lot of that stuff is going directly to the shitcan (hurrah for coverage!).

Full disclosure: In this instance I am knowingly writing terrible tests because I have not been allowed the time & resource to fully understand the code and then re-write it for C++20 (it's written in pre-C++0x style with lots of third party libraries that are now superseded by the C++17 standard library). These tests are awful because they are not really written to test out features of the code - they are written to provide coverage and by doing so fossilize the code in amber. The intention is that, while I am not in a position to understand all the code, should someone make a change to the code (or change compiler / libraries etc) the tests are good enough to fail, thus (hopefully) forcing the poor sod into looking at the unit test and (hopefully) the code that blew up.
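That "fossilize the current behaviour" approach is usually called characterization (or golden-master) testing. Here's a minimal sketch in Python rather than C++, with `legacy_tokenize` as a purely hypothetical stand-in for the legacy code:

```python
# Characterization (golden-master) test: pin down what the code
# currently does, not what it should do. Any behavioural change -
# a refactor, new compiler, new library - makes the test fail and
# forces a human to look at it.

def legacy_tokenize(line):
    # Hypothetical stand-in for a decade-old function nobody
    # fully understands any more.
    return [t for t in line.replace(",", " ").split() if t]

def test_characterize_tokenize():
    # Record the observed behaviour verbatim, warts and all.
    assert legacy_tokenize("a,b  c") == ["a", "b", "c"]
    assert legacy_tokenize("") == []
    assert legacy_tokenize(",,,") == []

test_characterize_tokenize()
```

The tests don't assert that the behaviour is *right*, only that it hasn't *changed* - which is exactly the amber-fossil property described above.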

Not a fan of this way of working either - but in this case I'm seeing some value in it as far as cruft removal and gaining some understanding of this production code that folks have been avoiding building ("It's too hard! Waaaaaaaaaaaah!") for nearly a decade...

Best of luck with your travels & travails. :)

Roo
Windows

Re: It's 2024

Fair play on your well thought out response, and apologies for the somewhat trite reply I offer by return (which does not directly invalidate the points you make):

Firstly there is absolutely nothing stating that you can't write Unit Tests that validate the boundaries between components.

Secondly, if your Unit Tests are simply duplicating the code you've written, I would suggest changing your approach to writing them.

Roo
Windows

Re: It's 2024

Testing combined with coverage tools and good coverage shakes out the typing issues pretty quickly. I work with both C++ and Python on a daily basis - so I get to have my cake, eat it and blow my toes off type-safety wise. From my perspective (having cut my teeth on K&R C in the mid 80s) it seems that encapsulation, type-safety and inheritance aren't really worth the hoops you have to jump through over the long run. I would trade them all for good quality unit tests every time. :)

How a single buck bought bragging rights in the battle to port Windows 95 to NT

Roo
Windows

NT 3.51 looked OK on the surface and mostly worked - it was a lot better than Windows 3.11... But to someone who wrote image processing code that maxed out the CPU, memory and hard drives, it quickly became clear that it was half-baked.

+ A lot of the advertised features (POSIX subsystem - haha) were missing altogether or simply half implemented.

+ The virtual memory system & *file* caching were plain wrong - poor design choices meant that NT would cache sequentially read (and written) files in RAM, and *page out* your application's working set to disk in order to accommodate all those never-to-be-used-again bits of file. Essentially NT would put your files in memory and your application on disk. Dumb as rocks.

+ Perhaps Microsoft did foresee the problem with file caching - because they provided some methods to flag opened files as being sequentially accessed, however those calls did nothing in practice, NT still paged out your application to disk so it could keep files (that were much bigger than the available RAM) in memory... Sigh...

By contrast Linux of the same era was a *lot* quicker for that application simply because it made more sensible choices with respect to balancing the demands of the block cache vs the application's working set. I was able to take the highly tuned "NT specific" code - naively convert it to POSIX and run the application much *faster* under Linux on the exact same machine. Furthermore the performance did not degrade when the image file sizes exceeded the available memory (unlike NT where the machine basically thrashed the drives to bits until folks got bored of waiting for it and pulled the plug).
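For what it's worth, modern POSIX systems let you state that sequential-access hint explicitly via `posix_fadvise` (this postdates the NT-era code described above, so the sketch below is just the modern equivalent of that "flag the file as sequential" call, and `read_sequentially` is a name of my own invention):

```python
# Sketch: hint the kernel that a file will be read sequentially, so it
# can read ahead and drop pages behind, instead of evicting the
# application's working set. Unix-only: os.posix_fadvise does not
# exist on Windows.
import os

def read_sequentially(path, chunk=1 << 20):
    fd = os.open(path, os.O_RDONLY)
    try:
        # Whole file (offset 0, length 0 = to EOF), sequential pattern.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while True:
            data = os.read(fd, chunk)
            if not data:
                break
            total += len(data)
        return total
    finally:
        os.close(fd)
```

Unlike the NT calls described above, the Linux implementation actually acts on the hint, which is consistent with the behaviour difference reported here.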

Vernor Vinge, first author to describe cyberspace and 'The Singularity,' dies at 79

Roo
Windows

Learning by experience is clearly not enough; at a minimum, to qualify for most folks' idea of "intelligence", an AI would have to exhibit self-awareness and agency.

Trying out Microsoft's pre-release OS/2 2.0

Roo
Windows

Re: Very Different

In the summer of 1990 I saw some sales figures that indicated that volume of 32bit INMOS Transputers shipped *exceeded* the Intel 80386 by some margin...

At the time the 386 really didn't have the kind of volume in sales (relative to its even shittier forebears) that made it worthwhile in the eyes of the vendors to ship software for it. The best '386 platform that I came across at that time was the Sun 386i, but even that fell between the cracks... On one hand SunOS made great use of the hardware and it was a pretty capable box, on the other hand it was a waste of time (and money) to run DOS applications on it because UNIX made better use of the hardware than DOS. Furthermore by 1990 you had Solbourne SPARC SMP boxes that kerb-stomped 386s (about 3-4x as fast at the same clock) when it came to running UNIX apps. Going back to a 386 PC after having a (2 socket) Solbourne Series 5 to myself was pretty hard. :)

Roo
Windows

Re: Very Different

"It ran only 32 bit. Win9x killed the Pentium Pro because it had no simple switch back to native 8086 mode."

FWIW I ran Win9x on a Pentium Pro just fine - the main gotcha with it was some games didn't run quite as quickly as a stock Pentium at the same clock frequency, it was rarely a deal breaker (for me) though. The flip side was doing actual *work* type stuff under Linux was a *lot* faster on the PPro than the Pentium boxes, because the code was optimized for the PPro rather than for a Pentium.

The Pentium Pro (P6) had a *huge* 256KB L2 cache that ran at core speed, and anything recompiled to take advantage of that cache absolutely blew the doors off P5s (plain old Pentiums). It wasn't much more expensive than the P5 at the time - and was a steal if you were in a position to run code optimized for it. The point of the Pro was to get Intel's snout into the troughs monopolized by the RISC vendors - and it succeeded in that - and it also went on to have a long and productive life running Windows 9x & NT in the guise of the Pentium II and Pentium III...

Those were happy days coding for a processor with a fat, fast *and* low-latency cache. Back then I'd happily trade any P5 for any PPro that needed a home where it would be appreciated. :)

They call me 'Growler'. I don't like you. Let's discuss your pay cut

Roo
Windows

Re: Obviously I’m missing something

I very much doubt that "in writing" means much to Growler, and let's face it, Growler would not have asked Corey along unless he was essential to the deal. My guess is the vendor knew full well Growler was worth precisely zero in himself, and that is why Growler wanted someone along who had some value with respect to the IP.

Drowning in code: The ever-growing problem of ever-growing codebases

Roo
Windows

Re: Optane is basically ROM

I get your point - but it glosses over a lot of really important implementation detail.

Back in the day (a long long time ago) when I had a direct insight into how memory & microprocessors were designed & fabbed, the memory chips were fabbed on *different* fab processes from stuff like logic. That was the case in the 70s, 80s and 90s, and I haven't seen anything to say that it's any different today. Sure there are some instances where the domains overlap - for example IBM offer "eDRAM", often used to provide a big block of memory on a processor die (note: stock DRAM is fabbed using specialized processes).

Semiconductor fabrication is a fascinating topic, the world would be a better place with more folks who understand it... it's very definitely not Lego when you get down to the metal. :)

Roo
Windows

Re: Optane is basically ROM

SRAM didn't look like it "naturally formed out of ICs" in my day - it looked like it was generated by a script hooked into the CAD system, validated by the DRC engine, masks were then made from the design, and those masks used to fabricate some dies on 200mm wafers... About as natural as Trump's tan.

Roo

Amdahl's Law absolutely still holds and always will... I think what you might be looking for is Gustafson's law - which is for the case where a workload can scale up in size and take advantage of parallelism as a result.
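The distinction between the two laws is easy to make concrete with a couple of toy functions (the 95% parallel fraction and 1024-processor count below are arbitrary illustrative numbers):

```python
# Amdahl's law: fixed problem size - the serial fraction caps speedup.
# Gustafson's law: the problem grows with the machine - the scaled
# speedup keeps growing almost linearly with processor count.

def amdahl_speedup(p, n):
    """p = parallel fraction of the work, n = processor count."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Scaled speedup when the workload grows with n."""
    return (1.0 - p) + p * n

# 95% parallel code on 1024 processors: Amdahl caps you below
# 1/(1 - 0.95) = 20x, while Gustafson's scaled speedup is ~973x.
print(amdahl_speedup(0.95, 1024))
print(gustafson_speedup(0.95, 1024))
```

Same code, same machine - the laws just answer different questions: "how much faster does this fixed job get?" vs "how much more work can I do in the same time?".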

Roo
Windows

Re: Pentium IV

"The reasoning behind risc was that compilers couldn't, without becoming insanely complicated and huge, optimally use the clever cisc instructions and that compilers could translate higher level code in the much simpler risc instructions and effectively apply far more types optimisations with the actual compilers becoming simpler and smaller."

It's often presented that way - but if you follow John Cocke's work that explanation seems backwards and missing the most important bit...

The key driver behind RISC was to fit the processor architecture to what could be efficiently implemented in hardware. Moving the complexity into software to fill in the gaps was a necessary side-effect - I feel fairly safe in asserting that compilers really did not get any simpler. :)

With 20/20 hindsight you can see that as microchip technology advanced the hardware (RISC & CISC) got a lot more complex, and the compilers got a lot more complex too. The NVAX/NVAX+ -> Alpha 21064 -> 21164 -> 21264 -> 21364 progression illustrates the evolution of the hardware quite nicely over a short space of time (DEC introduced the GEM compiler with the Alpha line - see Digital Technical Journal Vol. 4).

Intel's Itanium was an interesting throwback to the "simple hardware"/"complex compiler" idea - they made exactly the wrong trade-offs, resulting in relatively dumb + inefficient hardware tightly coupled to its compilers - at a time when competitors were fielding smart + efficient hardware that could run any binary produced by any old compiler quickly & efficiently via the magic of Out-Of-Order execution.

Roo
Windows

Re: Optane is basically ROM

I can see how you would arrive at your position if you've only ever known EEPROM tech ... The old school *ROM technology was *very* different though, and should occupy its own category:

ROM : Read Only Memory. Back in the day this might be implemented as a hardwired circuit of diodes, definitively not modifiable (by code) after it left the assembly line.

PROM : Programmable Read Only Memory... Can be programmed (once) - can be implemented as a circuit of fuses.

EPROM : Erasable Programmable Read Only Memory... Typically erased via UV light (those chips with windows covered by stickers) - I really don't know how the programmable bit was done for these...

I can see why you blur the lines with EEPROM, but it is fundamentally different from RAM in terms of how it is addressed and how it is overwritten (aka Selectively Erased then Written).

Just to add to the confusion : Core memory is non-volatile RAM - which also supports unlimited writes... Folks could power cycle a machine and skip the booting as the OS was already in memory... ;)

Roo
Windows

Re: "Late in Wirth's career, he became a passionate advocate of small software."

"unix was the necessary part of multix:" not quite *violently* disagreeing with you on this, but necessary really is in the eye of the beholder in this case. The MULTICS team had very different objectives to the AT&T UNIX crew, they weren't adding unnecessary stuff from their point of view. MULTICS looks pretty lean to me (see multicians.org), but not lean enough to run on an early PDP-11... I do like your characterization of C+UNIX as being "implemented as a kludge rather than as a design". :)

Roo
Windows

I spend quite a bit of time throwing stuff away that isn't needed - but has been "accumulated" in the codebases I work on. Throwing out chunks of code and dependencies is the easy bit, working out which tests need to be binned/changed is the hard bit. The choice of development tools & frameworks plays an important part too - for example a Java or Node project will pull in a zillion dependencies, most of which you don't *actually* need, and those extra dependencies will introduce unnecessary attack surfaces and wasted time in a) understanding them and b) maintaining/updating them.

Typically I find a lot of low-hanging fruit that can go for the chop without too much effort - but it does require a fair amount of courage and a good working relationship with your users (ie: so you can work out what their requirements actually are right now and work with them to validate the cleaned up code). Just to be clear here; I am a glass half-full person when it comes to nuking junk code - it makes me happy and it's relatively low effort when compared to adding another bag of snakes to the nest of dependency vipers. ;)

The successor to Research Unix was Plan 9 from Bell Labs

Roo
Windows

"But I like the silly thing. I want to find a use for it. Maybe someday."

That is Plan9 in a nutshell.

I reckon Plan9 could be a useful OS for HPC, *if* it can scale to millions of nodes. You'd need some kind of resource allocation/management functionality in there to carve up a cluster of nodes into application sized chunks (and stop them from stomping on each other) which could render Plan9 not Plan9 anymore. The other opportunity would be what comes after the "Web", IoT, and mobile phone OSes as we know them today. No one cares what's running under the hood in those instances - they just want their interwebz, texting and doorbells that spy on their neighbours..

Forgetting the history of Unix is coding us into a corner

Roo
Windows

I confess that I have never actually bothered to read that book so thank you for pointing out the Anti-Foreword. The world is a poorer place without DMR. :(

IBM pitches bite-sized $135k LinuxONE box for smaller biz types

Roo
Windows

No, it's the Linux Closet, where deviant Linuxen stay hidden from public view.

The Land Before Linux: Let's talk about the Unix desktops

Roo
Windows

Re: Meet the New War....same as the Old War

Splitters !

JPMorgan exec claims bank repels '45 billion' cyberattack attempts per day

Roo
Windows

Re: Bloated headcount

According to JPMorgan's website they employ "~290,000" worldwide... So that would mean at least 1 in 5 are "Cybersecurity Technologists", which does strain my credulity to breaking point. If pushed I'd guess that Mary doesn't expect to be held accountable for her pronouncements, and that could well be a reasonable expectation given her role in JPMorgan settling to the tune of $290m in an Epstein-related lawsuit.

Microsoft suggests command line fiddling to get faulty Windows 10 update installed

Roo
Windows

Re: Tiny market share of Linux on the desktop

Pretty much every single machine on every desk at my place of work (thousands of machines) is a Linux-based thin client - hooked up to a small pool of overtaxed Windows servers. The most commonly used apps are email and web clients... The actual number crunching and production workloads are pretty much all run on Linux (again, thousands of machines). Linux has won; Windows is clinging onto a shrinking pool of desktops while it gets swamped by Android (which already killed Windows Phone 3+ years ago).

Roo
Windows

Re: When did Windows turn into Linux?

Adorable router. Tempted.

War of the workstations: How the lowest bidders shaped today's tech landscape

Roo
Windows

Re: Sorry Liam, Not Even Wrong...well those ATG releases..

"Anyone know if you can get a standard cell Transputer T400 / T800 that you could stick in multi meg cell FPGA's."

Fairly sure the answer to that is no, because the T4/T8 were put together using a custom CAD system (Fat Freddy's Cat) and targeted a bespoke process. INMOS did produce the Reusable Micro Core (IIRC - might have that wrong) in the early 90s - which ended up as the ST20 series. That *might* have made it into macrocell form. In the mid 2000s Jon Jakson produced the R16 Transputer design for FPGAs hooked up to LPDDR memory, and wrote a paper on it - it was a neat bit of work; IIRC it's a more conventional RISC core than a stack machine like the T4/8. I see that there have been a few papers written along the same lines since then too... Have a google. :)

Roo
Windows

Re: Sorry Liam, Not Even Wrong... -- Dave Cutler

"I have done device driver work on RSX-11M - the kernel code that Dave Cutler wrote was very well written (and unusually even better commented!!)"

That adds weight to my assertion that Cutler should have spent more time writing code and less time writing books / whatever it was he was doing that wasn't writing code. The history of RSX-11 is actually quite interesting as it turns out, with multiple (independent?) strands of development, and it originated as a port of a PDP-15 OS (RSX-15). It's incredible how many OSes DEC produced - they were developing & supporting TOPS-10, TOPS-20, Ultrix, RSX-11* (several distinct variants) and VMS at one point - so they must have had a *lot* of decent system programmers working at t'Mill.

FWIW I used VMS for a decade or so and found it to be a reliable if eccentric friend. That said I really don't want to see DCL & TPU again. VAX MACRO was kinda fun though - especially having come from 6502 machine code & assembler. :)

Roo
Windows

Re: Sorry Liam, Not Even Wrong...really?..again?

Here in Blighty the PC was pretty expensive relative to the competition - and was primarily bought & sold as a business computer to run business software. MDA was the cheapest option - and the most common in the early 80s (at least where I lived). Most *business* software was written for that baseline too - you know the stuff: text entry, word processing, spreadsheets, dBase II, etc. I still remember the feeling of acute disappointment on encountering my first PC: the graphics and sound were piss poor relative to a BBC Micro or a C64, and you actually had to spend *extra* money to get sound and graphics that were usable... :)

Roo
Windows

Re: Sorry Liam, Not Even Wrong...

I haven't worked with Dave Cutler, but I have spent a fair amount of time judging him by his works - or at least the stuff he is credited for. The words that come to mind for the products attributed to DC are "baroque", "half-baked", "flakey", "obtuse" and "sophisticated". This might be unfair on Dave Cutler, because he was part of a team of people that produced those products. My guess is that no one else in the teams wanted to "own it" having seen the result of their labours - so they were only too happy to have their work attributed to the man.

"Inside Windows NT" by Helen Custer & David N Cutler illustrated the *massive* gulf between DC's "vision" for NT vs what it actually was. I was probably the only person to actually read that book and then measure it against the grim reality of mid-late 90s vintage Windows NT. Suffice to say Windows NT 3.51 fell some way short of the "vision". The common narrative is that Microsoft somehow double-crossed DC so he took his ball home, leaving the product as a half-baked pile of not very good OS. That story sounds plausible, but the takeaway point remains that Dave Cutler was given a stratospheric budget & timescale (compared to say RSX or VMS) and still delivered a half-baked pile of not very good OS.

TL;DR : I think DC should have taken a leaf out of Linus' book and spent more time writing his OS instead of writing books about his OS. Case in point: an early cut of Redhat Linux (ie: $0 budget, amateur developers) beat NT 3.51 hands down in every department: hardware support, technical support (Microsoft's support amounted to "pay us $128 to file a bug report that we will ignore"), networking, reliability, performance and scalability.

Doom is 30, and so is Windows NT. How far we haven't come

Roo
Coat

UNIX as a cut down MULTICS...

As you allude to in the article, the approaches and goals of the teams were polar opposites: the MULTICS team designed an OS and then had some hardware built to support it, whereas the UNIX team scrounged up some low-end hardware and cobbled together some tools & an OS to run on it. :)

Therefore I don't think that it can be honestly claimed that UNIX is a cut down MULTICS. IMO it would be more correct (and fairer) to say that UNIX was a text-processing orientated OS that borrowed some concepts from MULTICS (see https://multicians.org for fabulous MULTICS resource). I do agree that UNIXen have accumulated a lot of bloat + warts over the years, pushing way beyond the boundaries of good taste and good design, but I'd argue that's a (perverse) consequence of a sound bunch of high-level abstractions being chosen in the first place (ie: very good design). At the end of the day I have been able to get work done in pretty much every application domain I've tackled using a UNIX-like OS (if not an actual UNIX) - some of which predated me attending secondary school - and I'm very grateful that I haven't had to relearn basic stuff like how to construct a path to a file every time I switched hardware or OS (case in point VMS was "unique", Windows 3.x -> 95 -> NT had kinks for each shift in exec/kernel).

Running everything from inside a LISP interpreter clearly rocked some folks' boats, but it clearly didn't rock enough people's boats to be a "thing", personally I think that's a fair result. You think decoding some rando's C++ is bad, try decoding LISP crafted in a heavily customized and continually changing environment for a laugh...

I'll get my coat, it's the one with a copy of "Transputer Instruction Set (C) 1988" in the pocket. :)

Roo
Windows

Workstations, remember them ?

Fair enough: from the point of view of the PC, innovation may have stopped in the mid 90s, but from the PoV of folks using workstations, mid-90s PCs were copying machines that had been sitting on their desks in the mid-80s. That said it's pretty cool how the bandwidth and capacity of PCs has ramped up - no complaints from me about that (and kudos for only being ~8 years behind workstations for 64 bit CPUs).

Postgres pioneer Michael Stonebraker promises to upend the database once more

Roo
Windows

Re: Blast from the Past

I built "University" Ingres from source a very long time ago (maybe early 90s) only to find that the buffers holding pathnames were *very* short - causing lots of really interesting failures when you tried having your DB reside somewhere like /home/me/foo/bar/wibble. So yeah - it was definitely coded for a PDP-11 vintage UNIX. :)

IMO there haven't been many (if any) major strides forward in Comp.Sci since the 70s. It's not all bad though - there have been plenty of useful (and good) refinements since then!

CLIs are simply wizard at character building. Let’s not keep them to ourselves

Roo
Windows

NeXTSTEP

Not sure I'd lay the lack of CLI on Jobs, after all he did found NeXT, which produced an OS (NeXTSTEP) that was built on a MACH kernel with a BSD userland, and had a pretty neat GUI complete with a terminal emulator (ie: CLI) application. As it happens NeXT was bought by Apple on the return of St. Jobs, and Darwin just happened to be a MACH kernel with a BSD userland - exactly like NeXTSTEP.

Apparently the NeXTSTEP terminal emulator lives on in Mac OS X, though I would guess it must have changed beyond all recognition by now... Surely?

UEFI flaws allow bootkits to pwn potentially hundreds of devices using images

Roo
Windows

There is a *really* simple solution to this...

The only firmware should be a minimal loader that reads a bunch of bytes off a USB stick and executes them.

The vendors can supply their USB stick installed in the motherboard by default with all their cruddy unnecessary, unreviewed guffware that they want. Folks who don't want the guffware can put their own stick in with whatever pre-boot crap they want (or maybe just a simple locked down bootloader) and optionally superglue the stick in place for extra security. We have USB headers on motherboards already, this is not hard or expensive folks.

USB Cart of Death: The wheeled scourge that drove Windows devs to despair

Roo
Windows

How cute.

I sincerely hope that wasn't Microsoft's primary mode of testing their USB stack. Then again, given that they made a fetish out of releasing stuff late and buggy and being macho about it, it seems plausible that their stone-age "cut the victim in half to see if it bleeds" approach was the best they had.

Now, how about that unit testing to ensure those BSODs don't happen in the first place - and maybe we get some more helpful diagnostics instead ?

Want a well-paid job in tech? You just need to become a cloud-native god

Roo
Windows

We're cheaper because we live in a 2 bit slumlord economy.

Roo
Windows

Re: Someone Else's Computer certification

I think the biggest win w.r.t. (a big) Cloud provider is the ability to serve content globally - and provide some redundancy in the case of an entire region going down. That said there is nothing stopping you from developing your system to be capable of running in the cloud or on your own host(s); it's not rocket science (IMO).

The UK government? On the right track with its semiconductor strategy?

Roo
Windows

Re: pile it high, sell it cheap

True, but it was the Tories that couldn't bin INMOS fast enough.

Roo
Windows

Re: pile it high, sell it cheap

"Selling it entirely is akin to selling your home because the neighbourhood has deteriorated to the point where you no longer feel safe even going out to buy groceries."

That's a pretty decent analogy, the thing is this is not a new phenomenon. Case in point: Maggie Thatcher's government tried to arrange for the sale of INMOS to American investors at a knock-down price before brokering a sale to Thorn EMI (yeah, the fire extinguisher folks). INMOS was the company that built the fab in Newport. Total government investment in INMOS was £50m (£235m in today's money going by BoE's inflation calculator), which is peanuts really when you compare it to HS2, which comes in at £247m per km.

Roo
Coat

Re: 'semi' conductor strategy.

You left your coat. :)

Downfall fallout: Intel knew AVX chips were insecure and did nothing, lawsuit claims

Roo
Windows

Being sold a leaky bag of shit instead of a high performance processor is worth a lawsuit IMO, doubly so given that Intel sat on the vulns (and asked the researchers to delay publication). The honorable thing to do would have been to fess up to the pwnage, fix the next CPU release and refund folks who install the patch that halves their performance. Intel were pushing AVX *very* hard back then - speaking from my own experience as someone who had to evaluate those chips for HPC applications. YMMV

Intel's PC chip ship is sinking with Arm-ada on the horizon

Roo
Windows

Re: "Intel's deep history of innovation failure"

There are at least two good reasons for ditching a big hairy old ISA:

1) It's far quicker, easier and cheaper to design an implementation of a clean, small and well defined ISA vs a very complex, very old & crufty ISA.

2) It's far quicker, easier and cheaper to validate an implementation of a clean, small and well defined ISA vs a very complex, very old & crufty ISA.

Those two reasons underpin why RISC architectures continue to survive and thrive through domination of the SoC scene - which happens to be where most of the money is. Shipping big clunky and expensive 2000+ pin packages is dandy - but it doesn't cut it when folks are trying to sell a couple of million mobile phones.

The most damning indictment of Intel's innovation failure is that it was *AMD* who developed the current dominant incarnation of x86 (AMD64 - remember that ?). Just to add salt to the wound, there were senior Intel engineers posting on USENET sometime before Itanic (2001) saw the light of day who reckoned a 64bit cut of x86 was what folks wanted (and was very doable). Not to mention that the whole dynamic / static optimization argument had already been decided by the Alpha EV6 (1998) vs the EV4 (1992).

Sorry Pat, but it's looking like Arm PCs are inevitable

Roo
Windows

Re: Compatibility

SoftPC emulated PCs on UNIX machines back in the mid-80s to the point where you could run Flight Simulator on it (slowly). VMs and emulation have moved on a bit since then - those problems are old hat, and quite frankly most of that "optimized assembly code" doesn't actually run that well on modern hardware anyway... ie: you're not losing anything by transpiling the code on the hoof. With respect to device drivers - they are largely a solved problem - although I'd argue you probably shouldn't be running those as native code, just provide the interfaces (again, VMs do this just fine).

The PowerPC, Alpha and Itanium editions of Windows faltered because they don't make the hardware any more. Joking apart modern software development is a different ballgame - stuff like unit tests and coverage vastly simplify and accelerate validation of "ports" of software to new environments. By comparison Windows development was pretty stone age even in the mid 90s. :(

Roo
Windows

Re: Anybody remember Intel StrongARM?

StrongARM was actually developed and produced by Digital Equipment Corporation. StrongARM was a big deal at the time, it was clocked a lot higher than the contemporary ARM cores and opened up a lot of new applications for ARM cores. Compaq bought out DEC, HP bought out Compaq - and at some point in that kerfuffle Intel paid whoever owned DEC at the time a huge wedge of cash to license some big chunks of DEC IP - which included StrongARM. Intel carried on producing StrongARM for a while, got bored and dumped it.

Roo
Windows

"Linux has failed to fix DLL Hell, which MS fixed by 1997"

As someone who has coded for DOS, Win 3.x, Win 95, Win NT (3.51, 4, ... and so on), no they really didn't sort it out.

These days we have Java, Spring and Node on top of that DLL hell. To compound it we "solve" the problem with VMs, containers, Flatpak etc to work around the inherent packaging problems...