* Posts by bazza

3499 publicly visible posts • joined 23 Apr 2008

Judge refuses to Ctrl-Z divorce order made by a misclick

bazza Silver badge

Re: Huh? That's Our Courts Dragging Themselves Into the Mud.

"The judge refers to that in their ruling. They said that to get to the final screen on the system where divorce can happen with "the 'click of the mouse'" they would have had to pass through several earlier screens, "each of which prominently bears the names of the parties"."

Fine, but it was still a mistake. And would he say the same if a solicitors' firm had had two cases, both with surname "Smith", initial "A"? I sure hope there's more than just people's names used as identifiers.

If the court system doesn't permit / encourage two-person verification of data entered, the court needs to accept that occasionally mistakes are going to be made.

Names as identifiers are not great, not even combined with DoB and PoB. I remember reading an article about the problems someone was having with the UK credit reference system that decided to use name, DoB and PoB as a unique identifier for a person. Turned out some guy shared all these details with someone else, and was permanently afflicted by their atrocious banking behaviour.

bazza Silver badge

Re: Huh? That's Our Courts Dragging Themselves Into the Mud.

"Since when? "

(looks at 700+ Post Office cases) These ones. What opportunity was given by the courts to the defences to properly probe the veracity of the expert testimony in all of those hundreds of cases?

The issue is that, if the judge permits discussion on the correctness of technical evidence at all (maybe some progress has been made there), a court case in front of a jury is absolutely the worst possible place for that discussion to occur, and can never involve all the right people or exhaust the topic under debate. It's also massively inefficient; every defence counsel essentially has to be able to go find their own credible experts in many different topics. That's just nuts.

Our juries at best are getting to make a quasi-random choice between the two sides (probably based more on who appears to be the more eminent than on any understanding of what was in testimony), or are having to follow a Judge's direction to accept one side or the other, or are just being told to swallow prosecution evidence whole. Also, our juries have no power to reject the testimony of all sides and get their own expert opinion, meaning that if by chance someone on the jury does know their shit and has smelled bullshit, there's nothing they can do.

It's also a bit awkward if incorrect testimony from a witness is discredited (as seems to have been the case in your glass example) in anything other than the first case they're involved in. What happens if, 100 cases in, the witness comes up against a knowledgeable adversary who finally exposes the inadequacy of their testimony? There's 99 cases that have gone wrong to sort out. That too is nuts. It's happened repeatedly in our court system (fingerprints in Scotland, inadequate standards for DNA matches in the early days, failure to understand that radar is not communications, etc).

I think the French system is a whole lot more mature over the issue; it's inquisitorial, rather than adversarial. The jury is largely there to certify that the court's inquiry has been carried out correctly. It's the court itself that makes the decision. This allows French courts to bring in as many people of the right sort as it takes to reach consensus on what a piece of data actually means as evidence. This is far, far closer to the scientific process of review, peer review, consensus. If someone like Roy Meadow were making a statistical assertion in the French system (as he did in the British system), as I understand it the court itself is obliged to go get an actual statistician to double-check the assertion (because it has to be able to tell the jury that the assertion was checked out). In the UK system, his word was taken as gospel sworn truth, and he bears a large slice of responsibility for what happened to those innocent mothers.

"That's not the judge's role."

Indeed not, but ultimately it is their responsibility. The Judge is there to run the case in a suitable manner such that any conviction is as fair and as sound as can be (and to pass sentence of course). They've let some absolute stinkers of prosecution witnesses hold sway over court cases for decades, not least Roy Meadow and the Post Office trials. The moment a judge gives a jury any direction whatsoever, or limits the information that the jury is allowed to see, or controls / limits what witnesses appear, or permits an imbalance in technical resources between prosecution and defence, they are personally responsible for it.

For example, I know of one case where a tired, end-of-shift junior doctor's contemporaneous written opinion was admitted as prosecution evidence and the jury directed to accept it straight. The defence was not allowed to call as witnesses any of the senior paediatricians who were adamant the junior had made a basic error, because they hadn't been there in the A&E department at the time. I know that led to at least one senior paediatrician withdrawing their services from the court system in protest (a relative of a friend).

The sooner our court system stops being an adversarial pissing competition and starts being a proper system of inquiring after the facts of a case, the better. I'm saying this not even having been on the right or wrong end of a court case; it's just that, as a system to determine facts in the modern age of scientific reason, the adversarial system is woefully inadequate and riddled with flaws.

I suspect that one of the reasons there is so little change is that making such a major change means admitting the adversarial system may have made mistakes. That then opens up the can of worms of mass appeals. They're kicking the problem down the road, hoping it won't be their problem to deal with. And now look. We've had such a widespread miscarriage of justice with such devastating consequences for so many people that there is no possibility of clearing up the mess within the court system itself. The government looks like it's going to have to pass an act of parliament to blanket-reverse the mess and award compensation wholesale, simply to achieve any kind of positive outcome for the victims within their lifetimes (never mind on a fast timescale). If that isn't a clear indication that our adversarial system of courts is totally and utterly ****ed, I don't know what is.

Okay, there's been some very bad behaviour within the Post Office, but the whole idea of a court and judge is to protect people from bad cases, not roll over and let them proceed without an iota of curiosity. And it's very worrying that none of the inquiries seems to be asking whether the adversarial court system is capable of running such cases with such evidence in play. It clearly isn't (it's not even capable of fixing its own messes), but I can't see any changes to it being recommended by any of the inquiries.

How We Stand Abroad

Not very well I'm afraid. I know of a social services case in England that led to an English family being granted asylum in France by the French courts, to protect them from malicious and repeated prosecution by an English social services department. That case was long, drawn out, and became an occasional topic in the more serious Sunday newspapers. It culminated in an extradition hearing in France at which (for the first time) the English social services were obliged to submit sound, incontrovertible evidence that would actually get tested. They rolled out the bunkum they'd been putting before the English courts, and the French court immediately (like, within minutes) dismissed the case, issued an order protecting the family in France, and put out a Europe-wide order cancelling the European Arrest Warrant that the English system had issued and forbidding any renewal of it. (I happen to know the solicitor for the defence here in England.)

That's how bad our courts can be. Our continental neighbours have had cause to protect our own citizens from our own court processes, processes that were being manipulated by a rogue actor (an "expert") within a social services department. The inquisitorial system means that the court / judge themselves are expected to form an opinion, which I think means they get pretty good at doing so, and can therefore smell bullshit a mile off (as happened in this case).

bazza Silver badge

Huh? That's Our Courts Dragging Themselves Into the Mud.

A clerk in a law firm presses the right button on the wrong file on a government web portal. The court issues an order as a result, seemingly with zero measures in place to check for error. That, frankly, sounds terrifying.

Who’d want to be a clerk in a solicitor’s office, pressing buttons on that court web portal now?

There are all sorts of reasons why the data in an IT system could be not as intended. You'd think that in the midst of the Post Office IT scandal (something that the courts themselves have been astonishingly naive about) a judge wouldn't just go with what their own IT says. You'd like to think they'd actually ask someone if that is indeed the case / motion they were expecting to be heard. But no. IT is always right, right?

Had that been a child adoption case and a clerk had made a similar error leading to an adoption being awarded "automatically" as a result, there would be no undoing that; reversing an adoption is a lacuna in the law. Even if the Judge wanted to reverse such an error in such a case, there is no means to do so.

Troubling Attitude of the Judiciary?

Another reason this is important. Our judiciary won't let the defence contest expert testimony. However, the judiciary themselves are not adequately testing expert testimony. They've repeatedly got this wrong, not least with Roy Meadow persuading them that bereaved mothers who'd lost children to cot death should be jailed, and now with the Post Office. How the court system as a whole didn't get curious as to why there were 700+ very similar cases all with an identical, narrow evidence base is both alarming and astonishing.

The problem isn't the experts as such; the problem is the judiciary refusing to follow any sort of process that resembles "consensus building". They insist on a single person giving independent testimony, yet by definition the judiciary is not able to judge the fitness of such witnesses. They seem to rely largely on "eminence", or "proximity to the events". Neither is a good way of establishing what is scientifically or technically correct. This approach to "evidence" by the judiciary is literally killing people, and has been for a while now.

That this has been the case for decades and seems not to be changing makes me think there is an "attitude" problem amongst the judiciary. Pretending there is no problem (and carrying on with cases regardless) when there plainly is a problem smacks of, what, arrogance?

And we've now seen a case go awry partly because of the brevity of the court process in which the judge plays an important role (or rather, doesn't play the part at all), and the judge has essentially said "Not my problem, go fix it yourself". If this is the general attitude of the judiciary to addressing shortcomings in how their courts are run, then we may all be screwed.

Furthermore, the users of the courts are obliged to use a court-controlled IT system, and the court process seems biased in favour of propagating human errors without any opportunities to check for them. It's a bit rich for the judge to pass all the blame on to a solicitor's clerk, in effect blaming the woman for having chosen an incompetent solicitor’s firm. Human error is not something one can reasonably choose to avoid. A court system that is instantly and irrevocably intransigent when a trivial human error is made sounds very much like maladministration of justice to me (no matter the merits of the outcome in this particular case).

Will This Come Back to Bite the Courts?

For all we know, the clerk isn't to blame; how likely is it that the clerk didn't screw up but the IT system (the court's IT system) is borked? That's far from impossible. Probably this clerk has been told in no uncertain terms that they've fouled up, but at the same time they've probably got no real record or recollection of what box they actually ticked, other than what the IT system now reports.

The situation the judge has created is one where, effectively, everyone using that IT system needs to independently record their actions on it, or accept that they and their clients are the ones who live with the consequences of the system developers having made a mistake. This judge has made it clear that mistakes will not be rectified, so the only way of defending oneself (and one's clients) against system bugs (manifesting as "user error") is to independently record one's actions on the system. So, to what standard does that record have to be kept? An independent witness plus a video recording of the screen, all time-stamped, lodged elsewhere, etc? If the IT system does make a mistake, what's it going to take to persuade the judiciary it's broken? One case? Two cases? A whole Post Office of hundreds of cases?

Just imagine if, after the Post Office scandal, the judiciary’s own systems were found to be in error, and the judiciary had been intransigent over the matter and had caused real harm to people in the process?

Data Protection Laws

One part of data protection law is that a business processing someone's personal data has broken the law if the processing is done incorrectly. Clearly in this case the woman has a case against her solicitors; they screwed up.

However, has the court itself broken data protection laws? The court has processed incorrect data. The court seemingly had no mechanism in place to determine that the data is indeed correct. The court provides the IT system and is in control of the court’s processes, without any option for an alternative. Furthermore, they’re refusing to correct the data record despite being told that it is in error. If a bank acted like this, they’d get sued to bits. Why should the judiciary be immune to criticism or consequences?

Torvalds intentionally complicates his use of indentation in Linux Kconfig

bazza Silver badge

Re: A stand against software leniency

Indeed.

It's interesting that the language du jour, Rust, is widely applauded and has some entirely new and powerful ways of pointing out that one's code is the product of a klutz who should not be allowed near a keyboard. That's kind of the ultimate anti-Web-tech reaction.

bazza Silver badge

Re: Semicolons and curly braces, forever.

Problem is you then have to write down in comments exactly what the logic should be. Lose an indent accidentally off a line that’s supposed to be inside an if statement and you’ve no way of knowing unless there’s some other record of what the logic should actually be.

Self-documenting code is impossible when something that you can't see can be syntactically significant.
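To illustrate the point, here's a minimal C sketch (invented for illustration) of why brace-delimited syntax survives an indentation slip that would silently change the logic in an indentation-significant language:

```c
#include <stdio.h>

int main(void)
{
    int alarm_raised = 1;

    /* The braces carry the logic. Mis-indent the second printf and the
     * program still behaves identically; a formatter or reviewer can
     * restore the indentation from the braces alone. */
    if (alarm_raised) {
        printf("raising alarm\n");
    printf("notifying operator\n"); /* ugly, but provably still inside the if */
    }

    /* In an indentation-significant language there are no braces: losing
     * the indent on that second line silently moves it outside the if,
     * and nothing left in the source records which behaviour was meant. */
    return 0;
}
```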

bazza Silver badge

I Hate Syntax-Critical Whitespace Indentation

As per title.

If you're the type of coder who likes it, go and write in Whitespace (as in the actual programming language) and then say it's still a good idea. You're already halfway there if you like Python or YAML.

It’s a really bad idea that something a computer won’t show or print and, more importantly, cannot be seen to be missing should be considered syntactically important.

Linux Foundation is leading fight against fauxpen source

bazza Silver badge

Re: "Depends on what the meaning of 'is' is..."

The media on which one can distribute source code have changed. The GPL simply refers to "machine readable", which is a bit vague. It used to reasonably include punched paper tape, or floppy disk. I think that if one attempted to comply with a source distribution request by supplying punched paper tape, you'd have to offer the recipient the machine you used to punch it so that they can read it! Even something like floppy disk would be at the "cheeky" end of compliance, even though it's still just about an available technology.

Clause 10 of the OSI's definition of an open source license runs: "No provision of the license may be predicated on any individual technology or style of interface." That's pretty explicit. I can see why one wouldn't necessarily want to tie a piece of software to being licensed for use on just Linux. But I think that, when it comes to source distribution, it would be reasonable for a license to dictate roughly by what means it is made available (i.e. a publicly accessible Internet server).

Of course, the OSI's view is not the very last word in what an OSS license can be. As the term is not reserved, anyone can come up with any definition of open source they like; getting others to take it seriously is the tricky part.

bazza Silver badge

Re: Open Source and Profits

Agreed.

And, it's somewhat unimaginative. RedHat got going largely by being a company that sold professional support to users of their Linux distribution. That's largely why RHEL became the go-to distro for people wanting to run serious workloads; for a fee, you could (if you ran into trouble) pick up the phone and ask, "what gives?". (Though my experiences of making such phone calls were somewhat mixed...)

The entitlement fees / RHEL-specific kernel patches / playing hardball with source licensing came later. That might be an indication as to how little money there actually is in being a support provider...

bazza Silver badge

>I object when they start open then try to take everyone's contributions as their property.

It all depends on the license.

For example, there's a lot of controversy about what the GPL2 actually says, arising from the tactic RedHat is employing. So far as I know, no one has sued them. Whether that's because of a fear of being out-lawyered or not I don't know. However, the way the GPL2 is written (pre-WWW) definitely does not oblige a binary distributor to put the source code up on a publicly accessible repo despite the expectations of many in the modern era.

Ultimately, it's up to contributors to study and appreciate the full consequences of the license a project is using. There's no point hoping that a project will forever "do the right thing" if the license does not actually define that. The degree of study will depend on the degree to which the developer cares. And one has to be very wary of projects that encourage copyright assignment to the project. If one actually does go along with that, the project owns the code (not the developer), though some like GNU Radio do sublicense it straight back without restrictions (which seems fair enough).

Why No Public Repo Mandate in OSI-Blessed Licenses?

One of the things that I find very odd is that the OSI definition of an open source license says that "the license must not depend on a specific distribution format, technology or presence of other works" (from the GitHub guide to open source licenses).

Most people's expectation is that OSS is available on a public repository of some sort (git, svn, web, or at the very least a tarball on an FTP site). I'm sure that most developers have that in mind when they contribute to a project. But the OSI specifically bars a license from mandating such a thing.

This seems to be a massive weakness in licenses, especially licenses like GPL. RedHat are currently enjoying exploiting that.

GCC 14 dropping IA64 support is final nail in the coffin for Itanium architecture

bazza Silver badge

FMA is not about precision, it's all about time / compute performance. Intel were using FMA in Itanium to differentiate it in the supercomputer marketplace, and that worked to a limited extent. But they blew it.

With an FMA instruction, a CPU (well, its vector / SIMD core(s)) is a lot, lot quicker than a processor where the FFT butterflies have to be calculated as a separate multiply and add. It's not just that two maths operations are merged into one instruction executed in a single clock cycle; it's also a whole lot more friendly to caches. That is, more work is being done per datum loaded DRAM->L3->L2->L1->pipeline than without an FMA, so the benefit is multiplied for FFT lengths that exceed, say, L1 cache size.
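To make that concrete, here's a minimal sketch of a radix-2 butterfly using C99's fma() from math.h (illustrative only, not taken from any particular FFT library; compile with something like -O2 -mfma so the compiler emits actual FMA instructions):

```c
#include <math.h>

/* One radix-2 decimation-in-time butterfly: (a, b) -> (a + w*b, a - w*b),
 * all values complex, w the twiddle factor. With fma() the complex
 * multiply is 2 multiplies + 2 fused multiply-adds instead of
 * 4 multiplies + 2 adds; on FMA hardware each fma() retires as a
 * single instruction. */
static void butterfly(double *ar, double *ai, double *br, double *bi,
                      double wr, double wi)
{
    double tr = fma(wr, *br, -(wi * *bi)); /* Re(w*b) = wr*br - wi*bi */
    double ti = fma(wr, *bi,   wi * *br);  /* Im(w*b) = wr*bi + wi*br */

    *br = *ar - tr;  *bi = *ai - ti;       /* b' = a - w*b */
    *ar += tr;       *ai += ti;            /* a' = a + w*b */
}
```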

For comparison, a 400MHz PowerPC 7410 (with Altivec) was quicker at 1K floating point complex in-place FFTs than a much newer 4GHz Pentium 4. For the smaller FFT sizes that could be accommodated in L1 cache, the 1GHz 8641D (which also had Altivec) was still quicker than the Nehalem-vintage Xeons. Itanium was likely equally performant, having an FMA, but it was too difficult a chip to integrate into the kind of embedded hardware that was used in things like aircraft, tanks, etc.

Itanium wasn't attractive enough to the supercomputer manufacturers either. Fujitsu / Riken famously turned to customised SPARC CPUs for the K machine, and AMD's Opteron line was quite popular also. The kinds of applications that supers get used for (computational fluid dynamics, protein folding, etc) don't necessarily benefit from an FMA instruction anyway, so Itanium's "advantage" over the rest of the x64 line-up that existed back then was much smaller, and the disadvantages (shite compilers) were stark.

The Cell processor was way quicker than any of them. It was only really when Xeons grew to >8 cores, FMA and memory bandwidths close to 100GByte/sec that they started outstripping the Cell processor, and they were still running a lot hotter. The entire high-end military embedded systems market was poised to adopt Cell wholesale, only for IBM to drop the chip. One of the other attractions of Cell was that they'd majored on single-clock-cycle completion for all SIMD operations at the expense of precision. The reasoning was that for most applications (like DSP, or graphics for games, etc) speed of execution trumped outright numerical correctness; better to get an approximate answer now, rather than an exact answer in another clock cycle.

bazza Silver badge

Itanium was quite good for DSP only because it included a fused multiply-add (FMA) instruction in its SIMD / vector unit, which is good for FFT implementations, and x64 didn't have one. And because the Itanium came from Intel (ergo, it must be "the best"), it was supposed to be the one to use. To emphasise the point, Intel didn't put an FMA into the x86/x64 line-up until, what, 2013?

Trouble was that Altivec on PowerPC included an FMA, which made chips like the PowerPC 7400, 7410 and 8641D surprisingly competitive, especially against Intel's x64 line-up. Itanium artificially extended the useful lifetime of the 8641D because Itanium really wasn't embeddable. It wasn't until Nehalem came along that x64 (through brute force alone) started beating the 8641D, and only for FFT sizes that overflowed cache (Nehalem's superior memory bandwidth won out over the 8641D's slower memory subsystem).

Nehalem was good enough for a lot of actual DSP work (on things like radars, EW systems), meaning that 1) one could finally move on from the 8641D, and 2) one didn't have to go to Itanium. With the writing on the wall, Intel finally added an FMA instruction to the x86/x64 line-up in about 2013, about 13 years too late in my opinion. There's a lot of kit that got built around the PowerPC 7410 and 8641D, it's still in service, and mid-life updates (MLUs) seem to involve re-manufacturing ancient parts (something that is surprisingly cost effective) rather than porting software to newer x64-based systems. This is happening even though some system suppliers have full API compatibility all the way through nearly 3 decades of product line-up. Altivec really was the right core extension at the right time, with execution speeds well attuned to signal bandwidths, and those bandwidths aren't really changing that much; it's not like there's more spectrum available today vs 25 years ago.

As for the Cell processor; well, that was quite the beast. It took Intel well over a decade to make anything that outperformed that.

What can be done to protect open source devs from next xz backdoor drama?

bazza Silver badge

>If you're this sophisticated (ie a nation state) then you can just get your programmers hired by the closed source software vendors too.

The difference is that the hiring company can ask its government to do its own background check. For all intents and purposes that likely means a US corp asking the US Gov, and for critical roles within the corp the US Gov probably is motivated. It's a lot harder for a nation state to clear that boundary than it is to socially engineer its way into a one-man OSS repo.

It’s a lot harder to confer a gov security clearance on all the key people in an internationally diverse OSS project.

One could argue that the way RHEL is headed, it's becoming more proprietary and more American. Maybe this is a reason why systemd (definitely a RedHat thing) is endeavouring to supplant lots of existing stuff; it's a code base largely under RedHat's control.

Proprietary software with this attack within it would indeed have been harder to independently investigate.

bazza Silver badge

Re: No it’s not in trouble!

Cough Heartbleed cough. "Able to" is not the same as "has been".

The only reason why this particular attack was noticed was because the attacker messed up, not because anyone was reading the code. And the place where the attack was lodged was extremely unlikely to be reviewed anyway. Who ever reads build scripts carefully, or even understands them?

Unfortunately, what's now been shown is that the measure of security of OSS systems is no better than an article of faith. You say it's more secure. Fine, but with the greatest possible respect, I have zero data on your intentions. It is now evident that none of us has any data on the intentions of the myriad devs (and their future successors) behind the source code packages that vast chunks of the world presently assume are good.

We don't have any data on the devs of proprietary software either, but someone does, to some extent.

bazza Silver badge

Re: Strengths and weaknesses

I think that there's no technological solution (other than what I'll term "far out" concepts such as an AI review-o-mat, advances in formal specification / automated testing, etc).

What would have helped here? It really does come down to either positively identifying trustworthy volunteers, or review processes that significantly hinder bad actors. Given that "security vetting" is near enough impossible for OSS to do all by itself, OSS is then reduced to enhancing the review process. There's already a shortage of willing volunteers, and it's even harder to find people to do review work (people prefer coding to review).

The danger is that some large corporate concern will step in and "take ownership" of the problem. If one does step in, they'll be wanting more ownership. RedHat has already demonstrated a willingness to do precisely that.

Yet Another Aspect

The other thing to remember is that "developer identity" is often not much more than a name and an email address. Such an identity is readily transferred, or stolen.

For a purely hypothetical example, the only reason we have to believe it really is Linus Torvalds authorising Linux releases is because we're confident that his IT credentials are not compromised. If someone did learn his password, they'd be able to be "Linus Torvalds" for the purpose of sneaking dodgy code into the kernel.

Of course, if Linus was hacked we'd soon learn all about it via the media, and the damage would be repaired. But would that happen for everyone? What if a well-placed soon-to-retire dev passed away, and a bad actor who'd got their login credentials was simply waiting for that to happen? We'd be fully dependent on the family knowing what the dev's importance was in the OSS world, and knowing who to contact.

Companies solve a lot of this by having much closer relationships with their employees - pay, pension, HR, management, office location, teams, parties, family members knowing where the salary comes from, control of their company IT, etc.

bazza Silver badge

Re: Strengths and weaknesses

It's the permanent problem for anything where there is no or minimal revenue stream. It's a real demonstration of "there's no such thing as a free lunch".

Given that a major part of the problem is personal trust, a big component of "resourced properly" is whatever it takes to establish that everyone involved is trustworthy. As we've seen, keen, enthusiastic and productive (things that we usually associate with "resources") are not in themselves enough. Someone else has to say, "this person is OK", and someone has to be the root of all that trust. It's exactly like certificates. There has to be a root Certificate Authority, and if you don't trust it then the certificates issued in its name are worthless. So it goes with developers.
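As a toy sketch of that chain-of-trust point (purely illustrative; nothing like real PKI code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: every certificate is vouched for by its issuer, and a
 * self-signed root vouches for itself. Trust in the whole chain
 * reduces to trust in that one anchor - exactly the problem with
 * "who vouches for the devs?". */
struct cert {
    const char *subject;
    const struct cert *issuer; /* points to itself when self-signed */
};

static bool chain_trusted(const struct cert *c, const struct cert *trusted_root)
{
    while (c != NULL && c->issuer != c) /* walk up to the self-signed root */
        c = c->issuer;
    return c == trusted_root; /* distrust the root and everything below it is worthless */
}
```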

So, who is going to be that root of trust? A person? An organisation? Establishing that a person can be trusted is not really something that falls within the software development world; it's more a Human Resources / Security Department thing. Does the EFF have an HR and Security Department? Probably not. Does Linus Torvalds? No (with the greatest respect to his talents and capabilities).

The advantage that companies like Microsoft and Apple have is that they do have HR and probably something resembling a security department (especially if they're having anything to do with government(s)). And because each and every one of their customers is paying money, that is resourced. That doesn't absolutely mean that their developers are fully trustable, or that they adequately vet software they borrow from the OSS world. But it's better than nothing (and if government security standards are involved, it's probably quite thorough).

Another Thing Missing From the Debate

This episode saw the attempted introduction of an explicitly coded backdoor into pretty much all Linuxes, that would have given the perpetrator access to a lot of Linux boxes all over the world.

Global access being the presumed end goal, it's important to recognise that this can be achieved in other ways. It doesn't need an explicitly coded back door. The same level of access can be attained via simple coding errors, or slightly flawed code in the right piece of source code, or indeed a CPU flaw. These happen all the time.

The only difference between the two is that the originator of the deliberate backdoor gets pilloried, or lynched or something, whilst for the developer who simply made a mistake, well, everyone nods with understanding whilst being grateful it wasn't them. For example, nobody (including me) thinks bad things of Robin Seggelmann or Stephen N Henson, the pair who (according to the Wikipedia article) between them made the mistake that led to Heartbleed and failed to notice the bug. However, it's entirely possible that someone else did find the Heartbleed bug and was carefully using it for years before the bug was (re)discovered and publicised. One person's innocent mistake is easily repurposed as someone else's global backdoor, with the same potential impact.
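For illustration, the Heartbleed pattern reduces to something like this hypothetical handler (names invented; the real OpenSSL code differs, but the essence is one missing bounds check):

```c
#include <string.h>

/* Hypothetical heartbeat-style echo handler. The attacker controls
 * claimed_len; the actual payload is only actual_len bytes long. */
void echo_reply(unsigned char *reply, const unsigned char *payload,
                size_t claimed_len, size_t actual_len)
{
    /* The innocent mistake / deliberate backdoor - one absent line:
     *     if (claimed_len > actual_len) return;                     */
    memcpy(reply, payload, claimed_len); /* over-reads adjacent heap memory */
    (void)actual_len; /* unused, which is precisely the bug */
}
```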

So, how does one tell the difference between an innocent mistake and a deliberate mistake? One cannot. It starts taking government levels of investigatory powers to be confident that the person who made the mistake hasn't got a private deal lined up in the background. Absent that, one can design review processes that make it difficult for a lone actor to succeed, and more elaborate ones to ensure that two cannot collude to get a "mistake" through to production, and so on (depending on one's level of paranoia). But these take more and more resources, and they're no use unless there is some independent audit of the activity under the process.

Thing is, with OSS and long-distance-physically-never-met teams of devs cooperating, it's potentially quite possible for quite a few devs to collude. After all, to the rest of the team the only thing that distinguishes one dev from another is probably their email address; those are not hard to get.

bazza Silver badge

Hype? Well, this one was found before it got disastrously far.

But how do we know no other attempts have been made undetected and successfully so?

My view is that if one is being totally serious about system security then at this point in time Linux is in deep trouble. Many would reject that view, but then again many would not be able to confirm that their business’s entire dependency tree genuinely originates from known fully trusted sources. Most of it relies on “well everyone else is using it, so it must be ok?”.

As ever, it depends on your requirements, but right now no one really knows what threshold OSS reaches, and there's no way of measuring it.

bazza Silver badge

Re: Open Source Quality Institutes

Not going to happen. If government subsidises open source, it is then competing against industry (or they don’t have that industry to begin with). Most governments have rules prohibiting that kind of thing.

Cloud vendor lock-in is shocking, but there's a get out of jail card

bazza Silver badge

Re: This is where standards come in.

Some standards have become very successful. Second sourcing is just a way of ensuring one comes about.

Just look at the PC hardware architecture. IBM accidentally created an open hardware standard that anyone could copy (once BIOS clones had been black box written), and even ensured that the CPU became something of a standard too.

POSIX has been a very successful standard too. As has Ethernet, TCP/IP, email; the list is endless.

The most successful standards are the ones that industry players agree to create amongst themselves. Even POSIX, though driven by DoD, was largely a combination of the various flavours of *nix that were around at the time.

The XKCD cartoon is amusing and applies to the phase when individual companies think they can get the whole market. Fast forward a bit and there’s only one effective standard left. Just look at networking. There used to be DECNet, Token Ring, Token Bus, Ethernet, TCP/IP, OSI, etc. Look what’s left…

The best thing governments can do is to refuse to buy any of it until it has become an open standard, making second sourcing possible. If a government waits until the vendors are deeply invested and getting very keen on winning government business, at that point it can refuse point blank until second sourcing is possible. That would force the companies to share software and hardware designs and standardise overnight, or they'd all lose. If the government's refusal were public, that might attract other large companies to take the same position.

That only works if companies can’t afford to lose the business. But with AWS for Amazon and Azure for MS, they’re both becoming dependent on these products.

bazza Silver badge

Re: Terraform

If you go with Terraform, that’s just another vendor to become locked into, isn’t it, at a higher layer? Or have I missed something?

bazza Silver badge

Re: Why stop at cloud?

Operating System: already done, it's called POSIX and was mandated by the US Department of Defense decades ago. And it still applies for non-corporate IT (ie radars, sensors, the lot).

Office Productivity: already done. MS's XML doc formats are now public standards. Some of the things you can embed in them aren't, such as Visual Basic blobs. Understanding the standard and reimplementing it is another matter, but LibreOffice isn't having to start from scratch. It can get the XML schemas from the Library of Congress.

Email: long done. We’ve had multiple implementations of servers and clients for decades.

MS Teams: granted, this is an area where things are disappointing. There is SIP. There is also RCS. There are standards for these things, but the successful products have avoided them.

Engine cover flies from Southwest Airlines Boeing 737 during takeoff

bazza Silver badge

Re: Yeeeek

There has indeed been a spate of A320 cowlings flying off. It was down to it being difficult to see that the cowling was indeed properly fastened shut. Enough incidents happened that Airbus had to change the design.

A possibly naive observation is that one might expect Airbus - when faced with an obvious opportunity for improvement - to happily adopt it, whereas Boeing (of old) would not, and would lobby Congress to rein in the FAA and get them off their back.

bazza Silver badge

This may not have anything to do with Boeing whatsoever. It's perfectly possible that a ramp maintenance worker forgot to make fast the cowling.

There was a spate of similar accidents with Airbus A320s, for exactly that reason. They kept happening because the worker had to get right underneath the engine to check for sure that the relevant fasteners had indeed been made properly. Busy people faced with crawling around on the wet ground tend to find short cuts... I think that, eventually, Airbus had to redesign the closing mechanism so that it was far more obvious if the cowling were not properly fastened shut.

bazza Silver badge

Re: Bolts for Boeing

>so they stuck to precision components based on the length of 3 barley corns

In name only. Today, an inch is defined with respect to the metre in the same way a millimetre is - as a fixed ratio. A millimetre is obviously 0.001 of a metre, and an inch is 0.0254 of a metre. The latter was recognised and adopted by all countries using imperial units in 1964; prior to then, different countries had different definitions of an inch w.r.t. the metre, though all had adopted the metre as their fundamental reference length with the Metric Treaty.

Airbus's decision certainly made sense; when the US licensed the English Electric Canberra (to become the B-57 bomber), despite both countries working in "inches" they worked to different definitions of an inch and had different standard thicknesses of aluminium (and different names for the stuff). They basically had to re-draw the entire design package to Americanise the aircraft and make it manufacturable in the USA.

Had Airbus followed European / British design practices (like the Canberra), there was a fair chance that there'd be nothing at all about the aircraft that would be familiar to Americans, not just the use of metric tools.

The Canberra was pretty impressive; the US bought it because, for the demo flights in the USA, the RAF simply flew straight across the Atlantic Ocean non-stop. No other jet at the time was capable of that. Great aircraft, much missed.

German state ditches Windows, Microsoft Office for Linux and LibreOffice

bazza Silver badge

Re: Baby steps

It depends on what you have historically used GPOs for. All the organisations that I've been in have used them for helping end users, as much as restricting end users. Things like ensuring that certain file shares are automatically mapped, allowing the admins to re-target those quietly behind the scenes without having to tell anybody. Stuff like that. Ok, there's perhaps other ways of doing that exact task with Linux (e.g. some DNS server alterations), but it's that class of thing I'm referring to.

If all of that "help" is embedded as GPOs, and one is moving to an alternative arrangement that is not based on AD, then somehow that's all got to be ported. If the clients are going to be Linux, then it's difficult; either the admins get creative, or very busy supporting individual users, or something, but by and large their GPOs are dead. The transition is most definitely not transparent / seamless.

bazza Silver badge

Re: Disappointing

Reasons:

1) Everyone else uses MS Office

2) No one has done a decent replacement for Active Directory

3) As machines tend to be shipped with Windows, that's supported. If you put Linux on them yourself, you're on your own. There are some machines that manufacturers support directly for Linux, but they're few and far between.

4) Users by and large are ingrained with Windows or Mac. Linux and its software come as an alien shock, an important aspect when one considers that >90% of the organisation and its future staff intake likely isn't IT-savvy enough to make do on Windows, Mac or Linux alike. Users are going to have to get used to menu bars again.

5) Desktop. The main Linux vendors are interested in selling support for server editions because that's where the bulk of the market is. Supporting desktop is not so profitable. So, the guarantees on consistency are less. For example, consider Ubuntu, Snap and KeePassXC. Nowadays, KeePassXC comes as a snap. That means when you go hunting for a keepass file, instead of having a sensible path (likely starting off in ~/Documents), you get dumped into /run/<gobbledygook random nonsense that changes every time>. And similar. It's a small point, but for a lot of users that's a real headache.

6) Accessibility. It is illegal to discriminate against disabled persons in most of Europe (and quite right too), and for the major OSes there is a plethora of tools, etc. that help. Linux had some, but a lot of those have been tossed into the dumpster by the rush to Wayland. If a German state council insists "you must use this" and no reasonable accessibility aid is available for an employee (or prospective employee), they're basically breaking the law and can probably get sued by that person. Covering that off with an "OK, you can use Windows" doesn't do the job; if that means they're using MS Office and everyone else is on LibreOffice, they're still being discriminated against (they're not able to fully participate in the workplace. One spreadsheet with untranslatable formulas, or a presentation they can't edit, and suddenly they feel like everyone else is looking to ignore them). ATK has done something to repair the damage, but it's still incumbent on individual software developers to incorporate it rather than being something the OS can bring to all apps. Not all of Linux has done so, though it is cheering to see that Firefox and LibreOffice have.

7) Old data is not redundant data. An organisation that, suddenly, cannot efficiently work with its entire history of documents and files is starting everything again from scratch. That's a lot of work. It depends on what's extant, but even simple things like spreadsheet formulae can cause immense problems. It is potentially very difficult if you have complex financial spreadsheets with a bunch of formulae; OK, that's a really bad thing to have in the first place, but if it does exist it's going to keep on being important, and you don't want the numbers in the formula results to be different simply because the sheet has been converted from Excel to LibreOffice Calc.

bazza Silver badge

Re: I use a Mac these days

>Plus of course Apple Silicon...

...which, in the past few days, has been revealed to have an unpatchable side-channel attack on anything cryptographic (like, https) in the system. The PoC is native software at present, but I dare say that someone will get it running in JavaScript in a web browser before too long. If you thought SPECTRE and MELTDOWN were bad, you ain't seen anything yet...

bazza Silver badge

Re: Baby steps

>The active directory replacement is user transparent, nobody will notice it

Well, the admins will.

OK, so one can join a Samba DC to an Active Directory Domain. Good luck getting it to sync GPOs with Windows servers. And then good luck mapping the GPOs to Linux (note: sssd does pay attention to two policy settings; there are alternatives for Linux from what used to be called PowerBroker). And good luck downgrading one's forest / domain to a more elderly version. And good luck administering it without a Windows desktop, so that you can run the standard management tools (RSAT) that Microsoft provides for this purpose, and that Samba intends one to keep using even if one is using Samba exclusively for the servers.

The reason why Samba exists is because Active Directory (and the control it gives admins) is actually very good - not just for security but for keeping stuff running - and no one in the Linux world has come up with a viable alternative. A lot of people forget that AD is a very good way of getting things set up for the less technically able users in an organisation, and replacing that with a "you're on your own!" Linux desktop risks mayhem.

Software engineer helped put Sam Bankman-Fried behind bars, say prosecutors

bazza Silver badge

Re: denied deliberately committing crimes

An acquaintance working in financial compliance within a London financial institution said that their worst nightmare was Excel.

The normal pattern of events was that traders would dream up an idea, they'd submit it to the right panel in the company, compliance would mull over the idea and pronounce it fit / illegal, and then the softies would code it up and then unleash it on the market.

Or, they could just hack something together for themselves in Excel and not tell anybody what they were actually doing.

bazza Silver badge

Re: denied deliberately committing crimes

Indeed so. When all that is left is remorse, it’s best laid on heavy and as early as possible.

There is an important lesson here for all engineers and software developers. As you go through your career you may end up coming across bosses with criminal intent. The dangerous thing is that the boss doesn’t necessarily know that themselves, and doing as they ask can get you into a ton of shit. You have to know something of the law and how it pertains to your work, regardless.

And, if the spidey senses start telling you something is wrong, don’t let it lie. Do something. That something in the first instance should be a discussion with the boss and a company lawyer.

If there is a real problem developing, this can go two ways, and it's important to realise that the boss is almost always looking at a major financial impact.

The best outcome is that the boss accepts your advice and calls in a lawyer and listens to what they say.

The alternative is that the boss doesn't do that. At this point, there are things you have to do regardless. You must secure the email trail between the boss and yourself on the matter. That must be hard copy. You must go to a lawyer ASAP and lodge that hard copy with them, explaining what it relates to. You need the lawyer to notarise their receipt of the hard copy. This will cost some money, but it's worth it.

If your boss is particularly volatile, you may need to do this before raising the issue with them.

The reason why is that your employment is nearly done at that company; you're going to resign or get fired, and you need hard evidence that you were not complicit in what may be about to happen. That evidence is your internal communications with the boss on the matter.

You need to be able to give prosecutors instant and early access to the evidence at any point in time for the rest of your life (this being the only way of ensuring that their first talk to you is also their last).

A notarised contemporaneous hard copy is excellent hard evidence of your actions and the company response, and the lawyer’s involvement is proof to the authorities that you are extremely unlikely to have fabricated the evidence later on when the heat is on.

Guess how hard it is to get that evidence off a company server years later when 1) the boss already hates you, and 2) the company itself and the boss are now in the shit, and 3) it is by then potentially quite a while ago? Yes, it is extremely difficult for you to access exonerating evidence when it’s held by the shithead you stopped working for. Who is now looking for a scapegoat.

Malicious bosses can also be very nice bosses who build excellent teams with a ton of loyalty to themselves. It’s a good way of ensuring there are scapegoats available. It’s also a way of suppressing staff’s spidey senses. It’s more difficult to be objective about a dodgy legal situation when you like your boss a lot. So be on one’s guard. And beware of companies that make it very difficult to print emails.

So, when those spidey senses kick in, act first to protect oneself, not the company. If there turns out to be no problem and the company learns of what you’ve done, they have no grounds to dismiss you (lodging company information with a lawyer is by definition still properly safeguarding it).

The other benefit of going to a lawyer is that you can take advice on whistleblowing. There are likely some government organisations that need to know. Blowing a whistle via a lawyer is a better way.

Never ever go to the press or post it on social media unless you’re feeling very brave!

This may sound melodramatic but it really does happen. Look at Boeing. There’s software engineers who are not in jail because they played it properly, securing lines of evidence.

Malicious SSH backdoor sneaks into xz, Linux world's data compression library

bazza Silver badge

Re: entitled to distribute their source code ... in any shape or form they so wish

"Free Software" sounds like a daft name for a project.

bazza Silver badge

Re: More Details

From "that one in the corner",

"That should not NEED to be "extra-special" in any way, shape or form. You can certainly choose to move your build to another toolset - e.g. you want to use clang instead of VisualC - but any proper build system should take that in its stride and let you run both toolsets."

It's comparatively easy to get a repeatable build on the same box, unaltered. That's not really the point. And it's definitely not the point if a developer did their dev and test build on x64 and you're rebuilding for ARM (every single byte of the binary will be different regardless).

To recreate the exact binary that the developer built themselves means understanding literally everything about their dev environment: OS, libraries, compiler, exact versions of everything. Moreover, this configuration data would have a short lifetime. Before too long, something somewhere in the distro is going to have been updated, having an impact on the relevant parts of the set up. It's really hard to reproduce the exact same net binary that someone else got from the same source at a slightly different time.
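A trivial C illustration of how fragile byte-for-byte reproduction is:

```c
#include <stdio.h>

/* Compile this twice, a minute apart, and the two binaries differ:
 * the preprocessor bakes the build timestamp into the executable.
 * Embedded paths, toolchain versions and link ordering do the same
 * thing less visibly, which is why reproducible-builds efforts have
 * to pin and scrub the entire build environment. */
int main(void)
{
    printf("built on %s at %s\n", __DATE__, __TIME__);
    return 0;
}
```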

I say "net" binary, because what matters (so far as knowing for sure what is being run) is the binary that has been built and the libraries that are dynamically linked at run time. This is exactly the problem that has been encountered in this case with liblzma.

Of course, everyone knows this. That's why people create test suites too. Repeatable behaviour is about all one can hope for.

I know some projects have long lifetimes. But anyone insisting on being able to reproduce a binary byte-for-byte the same 25 years later is also accepting that they're missing out on 25 years of security fixes and improvements in tools, dependencies, etc. That suits some systems (who have likely also done a lifetime hardware buy), but not others.

It's also next to impossible to achieve. For example, 25 years ago the dev platform of choice for a lot of military system software was Sparc Solaris (cross compiling, for example, for VxWorks / PowerPC). You want to rebuild that software byte-for-byte exactly the same today, you've been scouring eBay for hardware for 15 or so years to retain the capability and you've been on your own so far as support from Oracle is concerned for as long. And you probably should not connect any of it to an Internet connected network.

Suppliers of system hardware these days endeavour to make mid-life updates as painless as possible as the more viable alternative (effectively forced into doing so through the DoD mandating POSIX / VME, and subsequent open standards), though it is not unprecedented for ancient silicon to be remanufactured (on the basis that yesteryear's cutting edge $billion silicon process is now pretty cheap to get made).

bazza Silver badge

Re: What about the culprit

I'm not entirely sure that the word "culprit" can really apply anyway. If the source code alteration was by a legitimate owner of the source code, and they weren't making any particular promises and weren't particularly hiding anything, the result is a long way away from being "criminal".

Admittedly, doing sneaky things with an overly complex build system to produce a dangerous result for anyone happening to make use of the library in a process with a lot of sway over system security makes them pretty culprity, and probably not a friend. But at the end of the day it's caveat emptor; there be dragons in them thar repos, a fact that doesn't seem to result in there being many dragon spotters. And obviously if someone has illicitly gained access to the source code, that's straight up computer-misuse illegality.

Having said that, the going rate is that more security flaws exist because of incompetent, careless or unwittingly flawed development than because of deliberate sneaky modifications (or at least, so I hope). Why, whilst this is all going on there's another article on El Reg about a root privilege escalation flaw in Linux versions 5.14 and 6.6.14. Going to the effort of sneaking attack code into a repo is probably harder and slower work than waiting for a zero-day flaw to come along and jumping on it...

bazza Silver badge

Re: More Details

The FSF? Get lost. I think you'll find that the developer(s) and therefore the copyright holder(s) of a package are entitled to distribute their source code (which they own) for their package in any shape or form they so wish, thank you very much. They can also choose any license they wish to apply to their source code, and indeed they can choose to keep it closed source too. Fine and mighty though the FSF may be, it has nothing to do with them.

Regardless, that thing about "provably results in the same binary" is nonsense also. On the basis that you're referring to the scenario where a binary built from code licensed with GPL2/3 (or similar) has been distributed by someone other than the copyright holder who has also received a request for the source under the provisions of that license: unless extra-special care is taken with the exact build system and dependency setup, you do not end up with the same binary anyway, even though what you do end up with may well be functionally identical. Some languages like C# even stamp each build with a fresh module version ID by default, so the same source built twice results in a different binary.

Given that the only obvious route to prove one has built the same binary (a bit-wise comparison) is effectively not available, one is left with only the assumption that the included build system did its job as anticipated by recipients. Furthermore, if someone hasn't received a binary in the first place, then there's no possible means of proof anyway; there's nothing to compare their own build against. The very point of this article is that that assumption was wrong.

bazza Silver badge

Re: More Details

Doesn't this indicate that there's probably a crisis in security at the moment? It's almost inconceivable that this is the first ever attempt at dependency poisoning. How many others have been perpetrated unnoticed?

The way in which the Linux world is divided into myriad different projects doesn't help. Some projects are claiming to be the best thing for system security since the invention of sliced bread (cough systemd cough). But they may also pass the buck on the security of their dependencies whilst mandating minimum version numbers of those dependencies. Did they vet those versions carefully as part of their claim to bring security to systems?

The Linux and OSS environment is ripe for more patient attackers to get a foothold on all systems.

Build Systems Are Not Helping, and Developers Have Been Hypocritical

The build systems these days seem to be a major part of the problem. The whole autotools / M4 macros build system is hideously awful, and that seems to have played a big part in aiding obfuscation in this case. There is enthusiasm for cmake, yet that too seems littered with a lot of complexity.

Clearly something is very, very wrong when tools like Visual Studio Code consider it necessary to warn that merely opening a subdirectory and doing "normal" build things can potentially compromise the security of your system. It really shouldn't be like that.

One always needs some sort of "program" to convert a collection of source code into an executable, and in principle that program is always a potential threat. However, the development world has totally and utterly ignored the lessons learned by other purveyors of execution environments, despite having often been critical of them. JavaScript engine developers have had to work very hard to prevent escapes to arbitrary code execution. Adobe Reader was famously and repeatedly breached until it got some proper sandbox tech. Flash Player was a catastrophe of an execution environment to the end. And so on. Yet the way that OSS build systems work these days basically invites, nay, demands arbitrary code execution as part of the software build process.

Unless build systems retreat towards being nothing other than a list of source code files to compile and link in an exactly specified and independently obtainable IDE / build environment, attacks on developers / the development process are going to succeed more and more. These attacks are clearly aided by the division of responsibility between multiple project teams.

Secure systems start, and can end, with secure development, and no one seems to be attending to that at the moment. Rather, the opposite.

How About This For An Idea?

One very obvious thing about how OSS source is distributed and built is that projects conflate "development build" with "distribution build".

When developing code, it's generally convenient to break it up into separate files, to use various other tools to generate / process source code (things like protoc, or the C preprocessor). Building that code involves a lengthy script relying on a variety of tools to process all those files. Anyway, after much pizza and late nights, the developer(s) generously upload their entire code base to some repo for the enjoyment / benefit of others.

And what that looks like is simply their collection of source files and build scripts, some of which no doubt call out to other repos of other stuff or pull in submodules. So what you get as a distributee is a large collection of files, plus scripts that you must either review or blindly trust, and that you have got to run to reproduce the executable on your system.

<u>Single File</u>

However, in principle, there is absolutely no fundamental reason why a distributee needs to get the same collection of files and scripts as the developer was using during development. If all they're going to do is build and run it, none of that structure / scriptage is of any use to the distributee. It's very commonly a pain in the rear end to deal with.

Instead, distribution could be of a single file. For example, any C project can be processed down to a single source code file devoid even of preprocessor statements. Building a project from that certainly doesn't need a script; you'd just run gcc (or whatever) against it. You'd also need to install any library dependencies, but that's not hard (it's just a list).
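SQLite's sqlite3.c amalgamation is the best-known real example of exactly this. As a toy sketch (an entirely hypothetical project, flattened the way gcc -E -P flattens a translation unit), the distributee would get something like:

    /* amalgam.c - hypothetical single-file distribution of a project
     * the developer keeps as util.h, util.c and main.c. Headers are
     * already expanded, no preprocessor directives remain, and there
     * is no build script: just run "gcc -O2 amalgam.c -o prog". */

    extern int printf(const char *fmt, ...);   /* all that's needed from stdio.h */

    int add(int a, int b);                     /* was util.h */

    int main(void)                             /* was main.c */
    {
        printf("%d\n", add(2, 3));
        return 0;
    }

    int add(int a, int b)                      /* was util.c */
    {
        return a + b;
    }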

In short, the distributee could fetch code and build it knowing that they only have to trust the developer when they run the code (assuming the lack of an exploit in gcc...). And if you are the kind of distributee who is going to review code before trusting it, you don't have to reverse engineer a myriad of complex build scripts and work out what they're actually going to do on your particular system.

If you want to do your own development on the code, fine, go get the whole file tree as we currently do.

<u>How?</u>

Achieving this could be quite simple. A project often also releases binaries of their code, perhaps even for different systems. It'd not be so hard to release the intermediate, fully pre-processed and generated source as a single file too. It'd be a piece of cake for your average CI/CD system to output a single source file for many different systems, certainly so if those systems were broadly similar (e.g. Linux distros).

<u>Benefits</u>

Developers could use whatever build systems they wanted, and all their distributees would need is gcc (or language / platform relevant equivalent) and the single source file right for their system.

It also strikes me that getting rid of that build complexity would make it more likely that distributees would review what's changed between versions, if there's just one single file to look at and no build system to comprehend. Most changes in software are modest and incremental, without major structural changes, and a tool like meld or Beyond Compare would make it easy to spot what has actually been changed. It'd probably also help code review within a development project.

I suspect that the substitutions made in this attack would have stood out like a sore thumb, with this distribution approach. Indeed, if a version change was supposed to be minor but the structure / content of the merged source code file had radically changed, one might get suspicious...

Can a Xilinx FPGA recreate a 1990s Quake-capable 3D card? Yup! Meet the FuryGpu

bazza Silver badge

Yay!

I still occasionally use it, though of late I've reverted from Rockbox to the stock firmware.

bazza Silver badge

It's only a matter of time before someone cooks up a mod specifically for it.

Another mod that could be fun - you steer the game by steering the mower. Could lead to some interesting patterns in the lawn. Over-enthusiasm would be indicated by a decimated veg patch, a scattering of rose petals and other flowers, a hole in the hedge and a Flymo embedded in next door's car.

bazza Silver badge

Of all the devices I've seen running Doom (including my iRiver MP3 player with a very low res black / white LCD display), that's topped the lot.

I wonder if they had to trim it down at all?

bazza Silver badge

You don't need a special compiler on Windows, and MS provide a "template". Just download the Windows Driver Kit, open up Visual Studio (the free community version will do), select the right type of project to start with, and off you go.
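For a sense of scale, the heart of a minimal legacy-style (WDM) kernel driver is tiny. A sketch of the shape those WDK templates flesh out for you (the more modern KMDF templates add plumbing on top of this):

    #include <ntddk.h>

    /* Called when the driver is unloaded; a real driver frees its resources here. */
    VOID DriverUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
    }

    /* Every kernel driver starts here - the kernel-mode analogue of main(). */
    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        DriverObject->DriverUnload = DriverUnload;
        return STATUS_SUCCESS;
    }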

Things have changed over the years, probably mostly in the area of MS making Visual Studio Community Editions actually useful. For example, if your company is contributing to an OSS project, any number of the staff can use the Community Edition F.O.C. for that purpose and can (if I've understood it correctly) freely distribute a binary for the project including via the Internet. I don't think there's any need now for an OSS driver project to go scrabbling around recreating MS's Windows Driver Kit and making do with gcc.exe.

Windows now has user mode and kernel mode drivers. User mode is pretty self explanatory - the guts of the driver are a user mode process (so, easier to develop and debug - a less crashy experience, I should imagine). Unfortunately for this DIY GPU project and for file system enthusiasts, neither graphics nor file system drivers can be user mode drivers.

The reason why there aren't so many open source device drivers out there for Windows is that most device manufacturers also write their Windows driver (meaning that no one else has to). However, perhaps with a bit more awareness of how Visual Studio is now licensed, that could change. OpenZFS for Windows could be an interesting thing!

For those very few who do have a need to plug in storage with unusual file system formats, plugging it into a Linux box has long been the simplest approach (other than using the native OS for the fs). Recreating that ability in Windows would be neat, but probably interesting to only a very few people indeed! Linux does indeed support an interestingly eclectic set of file systems, but unfortunately it's not in a very good place with regard to some important fs's like ZFS and BTRFS; one is excellent but license incompatible, the other doesn't work very well.

bazza Silver badge

From the article:

"It speaks volumes when porting a game is easier than writing drivers on Windows."

Er, writing device drivers for any operating system is hard, low-level stuff that requires a complicated development setup. Porting a userland application is definitely always an easier task.

Windows is actually fairly good - there's a lot of dev kit and tooling available from Microsoft (for free), and because Windows has had a pretty stable device driver interface you're rarely forced into updating it because of the operating system's own progression. Having the right dev machine helps too.

Linux is not massively different, except that there's no guarantee that the device driver interface to the rest of the kernel will stay the same. This is what causes no end of trouble for hardware which the kernel project has not adopted for themselves.

There are some OSes where driver development has less of an impact on the running OS than it does on Windows, Linux, VxWorks or macOS. For those which run device drivers as userland processes, you're just debugging another userland process. That doesn't necessarily stop a DMA being misdirected and causing havoc, though OSes like INTEGRITY take charge of that kind of thing too.

bazza Silver badge

He's certainly got to be worth a look at the very least.

The thing about independents like this is that they've come to the field with no preconceptions about how things should be done. They've probably done things, er, differently. That can be very valuable in an organisation that (depending on its own culture) has got 20+ years of fixed ways of doing things.

Good news: HMRC offers a Linux version of Basic PAYE Tools. Bad news: It broke

bazza Silver badge

Re: "for businesses with fewer than 10 employees."

>Unfortunately, Python isn't one of my weapons of choice; but I do know python3

>needs round brackets around the arguments to print, which always struck me as

>highly un-Pythonic .....

I'm no Pythonist either, but it always struck me that not having round brackets was highly Cobolic. Shudder. I don't actually know Cobol, but the mere thought that something might be a bit like it (right or wrong) is enough to make me come out in a cold sweat.

And yet, a younger French acquaintance of mine purposefully set out to be a Cobol programmer. Yes, he's a bit odd like that. He's always busy...

Boeing top brass stand down amid safety turbulence

bazza Silver badge

Re: Standard corporate response

The stock price for Boeing is, fortunately, not just a matter of impressing the market by shuffling the same stack of cards to form a management structure.

Firstly, the FAA has threatened to withdraw Boeing's Production Certificate within 90 days if they're not somehow amazed by a transformation within Boeing. If that happens, Boeing cannot deliver aircraft. I don't care how much shuffling then ensues, even the most optimistic of investors must realise that a company that is barred by its government from selling product is not a good share to invest in.

Secondly, with the FAA having raised the possibility of withdrawing the Production Certificate for the company, the role of overseas regulators and how they interact with the FAA becomes important.

For example, suppose the FAA does withdraw the certificate and then reinstates it a couple of days later (maybe for political reasons - FAA is beholden to Congress after all - or they did it to get the attention of the management). From an overseas regulator's point of view, the temporary withdrawal of the Production Certificate looks odd. Either Boeing are fit to have a Production Certificate, or they're not. If they've been judged unfit, an overseas regulator may want some pretty solid answers as to why, miraculously, two days later they're re-judged fit.

The problem for Boeing in this scenario is that if the FAA itself is perceived to be "playing games" with Boeing's Production Certificate status, then that is not exactly encouraging an overseas regulator to take the FAA itself seriously or believe one word of what they say. It then doesn't matter what the FAA has told Boeing. If the CAA / EASA / CAAC or any of the other regulators doesn't like what's going on inside the FAA, Boeing could find itself shut out of overseas markets. Boeing is depending on a lot of overseas regulators believing in the FAA; these being the same regulators that, having realised that the FAA / Boeing were speaking bollocks about the safety of the MAX after the Ethiopian Airlines crash, stopped believing in it.

Thirdly, the FAA effectively announced that the situation is dire enough that they're considering removing Boeing's Production Certificate. This amounts to them saying "Boeing is a dead company soon to be buried. Change my mind". The question then is, "What is 'Good enough'?". Almost by definition, it's impossible to quantify what is good enough.

It's got to be something especially major, just shuffling the management deck a little won't do it. Worse, it's got to be good enough to convince the US flying public, the overseas flying public, overseas regulators and the FAA. The FAA will have to be the hardest of all to convince. Otherwise, it becomes a case of FAA saying "good, carry on manufacturing", and someone else (EASA?) saying, "er, not good enough for us" and Boeing has no overseas market it is able to sell to.

On top of all that, there's this dreadful business of John Barnett dying. Regardless of the true facts of the case, the rumours surrounding his death are a real factor that Boeing would have to overcome. If the US public thinks the company committed a foul deed, well, that's the kind of thing that can turn into a real force in the marketplace.

My view is that there isn't really anything the company can possibly do that's "good enough" in the timescale set by the FAA that's going to tick the box well enough to convince everyone. It could be really difficult for the FAA to not suspend the company's production line(s), for a good long time.

In the face of those three problems, I'm slightly amazed that their stock isn't already "junk" status. They've effectively gone over the precipice, the FAA has stamped on one hand, they're clinging on with their other, and no one has yet seen a reason to not stamp on that last hand too. It is the most dreadful position any company can possibly be in; I just don't see how they can realistically come back from that inside the 90 days the FAA has given them.

The end of classic Outlook for Windows is coming. Are you ready?

bazza Silver badge

Re: PST files

This is an aspect of corporate email that most do not appreciate. Email can be a substantive record admissible in court. However, to be admissible the record needs securing. A way of doing that is as you have said: export a PST.

The file can be archived onto media, and the person doing so / looking after the media can easily attest in court that the PST file is a complete unaltered contemporaneous record, unaltered since. The media can even be put in an evidence bag and sealed.

You can't do that with email stuck in a server. In fact, the emails are probably not admissible at all, as it's likely difficult for anyone to swear that the server content "now" is a complete record of how the content was "then".

An unviable alternative is to print everything.

Trying out Microsoft's pre-release OS/2 2.0

bazza Silver badge

Re: Pints' on me Brian

>Am I the only one who remembers OS/2 as "Oh Shit 2"?

I used it a lot, clung on to it as my primary desktop for far too long.

It was quite good for embedded work too. I used to install it headless (with a bit of manual trickery) on x86 VME cards. Bear in mind that at the time Linux wasn't a thing, and "standard" full fat 32 bit multiprocessing multithreaded OSes for VME cards were kinda pricey. OS/2 was a pretty good option.

Sandra Rivera’s next mission: Do for FPGAs what she did for Intel's Xeon

bazza Silver badge

Re: Dead End

Pretty sure they're not going to have $55billion's worth of advantage over CPUs.

As for latency, there's nothing about an FPGA as such that gives it an advantage. They do as well as they do largely because interfaces such as ADCs are there on chip, rather than at the end of a PCIe bus. If one put the ADC on a CPU, hot-wired into its memory system, that too would have a lower latency. CPUs these days also have a ton of parallelism and a higher clock rate.

As ever, selection is a design choice in response to requirements. In 30+ years I've yet to encounter a project that definitively needed an FPGA and definitively could not be done on a CPU. I've seen an awful lot of projects where the designers chose to use FPGAs fail, often badly.

To give an idea, a modern CPU with hundreds of cores and something like AVX-512 available can execute 8960 32-bit floating point operations in the time it takes an FPGA running at a slower clock rate to clock just once. Given that things like an FFT cannot be completed until the last input data has arrived, there's a good chance a CPU with an integrated ADC would beat the FPGA with an integrated ADC.
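To unpack that figure (my arithmetic, one plausible decomposition): an AVX-512 FMA instruction retires 16 single-precision lanes at 2 flops each, and with two FMA ports that's 64 flops per core per cycle; 20 such cores with a 7x clock advantage (say 3.5 GHz vs a 500 MHz FPGA fabric) gives 64 x 20 x 7 = 8960. A minimal sketch of the per-instruction part, using the real _mm512_fmadd_ps intrinsic (compile with -mavx512f):

    #include <immintrin.h>

    /* One AVX-512 FMA instruction: 16 single-precision multiply-adds,
     * i.e. 32 floating point operations, per port, per cycle. */
    void saxpy16(float *y, const float *x, float a)
    {
        __m512 va = _mm512_set1_ps(a);      /* broadcast the scalar a       */
        __m512 vx = _mm512_loadu_ps(x);     /* load 16 floats from x        */
        __m512 vy = _mm512_loadu_ps(y);     /* load 16 floats from y        */
        vy = _mm512_fmadd_ps(va, vx, vy);   /* y = a*x + y across all lanes */
        _mm512_storeu_ps(y, vy);            /* store the 16 results         */
    }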

bazza Silver badge

Dead End

An addressable market of $55billion? Pull the other one, it's got bells on. Xilinx were pulling in revenues of just over $3billion until they were bought by AMD, and I doubt Altera under Intel's stewardship has reversed their trailing market position. I'd be stunned if between them they were pulling in more than $7billion revenue.

The reason why there's an inventory correction going on is, I think, that a certain amount of AI Kool-Aid was drunk concerning FPGA's role in tech's latest bubble.

One really hard question both Xilinx and Altera have to face is, just how big is the market really? Taping out a new part these days is a very expensive business. Getting a large complex part into production on the best silicon process costs several $billion these days. I don't think the FPGA market is too far from the point where the cost of production set up exceeds the total market size. Xilinx, being part of AMD, is perhaps a bit immune in that AMD has some weight to exploit when it comes to getting time on TSMC's fabs. A newly independent Altera could really struggle. It feels to me like the whole technology is edging towards being unsustainable in the marketplace.

We shouldn't be surprised if that happens. It's happened plenty of times before. There's many a useful / niche technology that's been unable to fund upgrades and has been swamped by alternative technologies that enjoy mass market appeal. Anyone remember Fibre Channel? Serial RapidIO? Both replaced by Ethernet.

FPGAs are troublesome, difficult, hard-to-program-for, worst-of-all-worlds devices, the kind of thing one uses only if one absolutely has to. Thing is, there simply aren't that many such roles left where they're actually necessary. CPUs are very capable, and if for some reason the performance of many CPU cores, each with its own SIMD / vector units, isn't enough, it's pretty simple to plug in a few teraflops of GPU. Even for the highly vaunted "they're good for radio" type work, FPGAs are often used simply to pipe raw signal data to / from a CPU where the hard work is done. I've seen projects go from blends of FPGA / CPU to just CPU, because the workload for which an FPGA was well suited is now a fraction of a CPU core's worth of compute. And with radio standards like 5G being engineered specifically to be readily implemented on commodity CPU hardware, the future looks bleaker, not brighter.

At the lower end of the market, the problem is that it's actually pretty cheap to get low-spec ASICs made (if you're after millions). So even if FPGAs are used in lower-tech devices, they will struggle: if the product they're in succeeds in the mass market, it's worth ditching the FPGA, getting an ASIC made instead and making more money. FPGAs are thus useful only to product lines that are not runaway successes, which doesn't sound like the kind of product line that's going to return $billions.

72 flights later and a rotor blade short, Mars chopper loses its fight with physics

bazza Silver badge

Re: "nothing short of jaw-dropping"

Many in NASA didn’t want the helicopter. It took the unignorable pressure from a Senator with the purse strings in his hands to get it included in the trip. It’s a tremendous success for him and for the engineers who did it, but it was not a glorious episode for some echelons in NASA who repeatedly tried to stop it happening, at least in the earlier days of planning this mission.

Starting over: Rebooting the OS stack for fun and profit

bazza Silver badge

Re: Replacing one set of falsehoods with a new set of falsehoods

Indeed. I was going to mention expanded memory from the old days of DOS, which I guess was a form of bank switching for PCs.

The one thing that might do something in this regard is HP's memristor. There were a lot of promises being made, but it did seem to combine spaciousness with speed of access and no wear-out. Who knows if that is ever going to be real.

Files are Useful

I think another aspect overlooked by the article is the question of file formats. For example, a Word document is not simply a copy of the document objects as stored in RAM. Instead, MS goes to the effort of converting them to XML files and combining those in a Zip file. They do that so that the objects can be recreated meaningfully on, say, a different computer like a Mac, or in a different execution environment type altogether (a Web version of Word).
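That XML-in-a-Zip structure is easy to see for yourself. A minimal sketch using libzip (the document name is hypothetical; assumes libzip 1.0 or later for ZIP_RDONLY; link with -lzip):

    #include <stdio.h>
    #include <zip.h>

    int main(void)
    {
        int err = 0;
        zip_t *za = zip_open("report.docx", ZIP_RDONLY, &err);  /* a .docx is just a zip */
        if (!za) {
            fprintf(stderr, "open failed (libzip error %d)\n", err);
            return 1;
        }

        /* List the XML parts; expect word/document.xml, word/styles.xml, ... */
        zip_int64_t n = zip_get_num_entries(za, 0);
        for (zip_int64_t i = 0; i < n; i++)
            printf("%s\n", zip_get_name(za, (zip_uint64_t)i, 0));

        zip_close(za);
        return 0;
    }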

If we did do things the way the article implies - just leave the objects in RAM - then suddenly things like file transfer and backup become complicated, and interoperability becomes zero. The Machine and network couldn't do a file transfer without the aid of the software doing a "file save as" first.

And if the software has been uninstalled, the objects are untransferable.

If one still saves the objects serialised to XML/Zip, one has essentially gone through the "file save and store" process, and the result may as well go to some other storage class for depositing there. There is then no point retaining the objects in RAM, because one has no idea whether the file or the in-RAM objects are newer.

bazza Silver badge

Replacing one set of falsehoods with a new set of falsehoods

The article seems to be based on the assumption that modern architectures are headed from a model of two storage classes to one.

Except that it then brushes over the fact that in this new world we'd still have two different storage classes, despite briefly mentioning it. If you've got one storage class that's size constrained and infinitely re-writeable, and another that's bigger but has wear life issues, volatility makes no difference; one is forced to treat the two classes differently and use them for different purposes. The fact that both storage classes can be non-volatile doesn't really come into it.

And also except that one is never, ever going to get large amounts of storage addressed directly by a CPU. RAM is fast partly because it is directly addressed via an address bus. Such a bus is difficult to build, and the address decoding logic only gets harder as you make the address space wider still. If you wanted a single address space spanning all storage, there'd be an awful lot of decoding logic (and heat, slowness, etc). That's why large-scale storage is block addressed.

And whilst one storage class is addressed directly, and another is block addressed, they have to be handled by software in fundamentally different ways.

One might have the hardware abstract the different modes of addressing. This kind of thing already happens: for example, if you have multiple CPUs you have multiple address buses. Code running on one core of one CPU that wants data stored in RAM attached to another CPU makes a request to fetch from a memory address, but there's quite a lot of chat across an interconnect between the CPUs to satisfy it. So, why not have the hardware also convert byte-addressed fetch requests into block-addressed storage access requests? Because that would be extremely slow, and very poor use of limited L1 cache resources.

Forgetting the history of Unix is coding us into a corner

bazza Silver badge

Re: What is unix anyway?

It's in the article: Unix is a standard for what API calls are available in an operating system, what kind of shell is available, etc. Unix is what POSIX is now called. It's a notional operating system that closely resembles a software product that was called Unix.

POSIX was driven into existence by US government procurement (the DoD especially) to make sure that software, ways of doing things, scripts, etc could be ported from one OS to another with minimal re-work. They also demanded open-standards hardware, for exactly the same reason. This is still in play today, and there's an awful lot of VME / OpenVPX-based hardware in the military world that is also used in other domains. The motivation was to get away from bespoke vendor lock-in for software / hardware, and it has worked exceptionally well in that regard. It's also the reason some OSes grew POSIX compat layers; the DoD wouldn't procure anything that wasn't capable of POSIX (though they relaxed that a lot for corporate IT - Windows / Office won).

If one casts a wider net than the article does, one can see that OS/2 or Windows being considered "a Unix" is not that odd. There are operating systems like VxWorks and INTEGRITY that also offer POSIX environments, and yet have their own APIs too. The OSes that are commonly perceived to be *nix are simply those that do only the POSIX API. Trouble is, even that's a bit uncertain. For example, at least some versions of Solaris had their own API calls for some things beyond those of POSIX (I seem to recall alternative APIs for threads and semaphores; it's a long time ago). Is Solaris a *nix? Most would say yes, but it wasn't just POSIX, in a similar way to OS/2 being not just POSIX. Linux is extensively diverging from just POSIX - SystemD is forcing all sorts of new ways of doing things into being. Do things like name resolution the SystemD way (basically a D-Bus service call instead of a glibc function call) and you end up with non-POSIX compatible source code.
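For a concrete flavour of that divergence, here's the same thread spawn written against the portable POSIX API, with the old Solaris UI threads equivalent sketched in a comment (from memory, so treat the Solaris signatures as an assumption; build with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg)
    {
        puts("hello from the worker");
        return arg;
    }

    int main(void)
    {
        pthread_t t;

        /* The POSIX way: the same source compiles on Linux, the BSDs,
         * Solaris, macOS, QNX, VxWorks... */
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);

        /* The Solaris-specific UI threads way (from <thread.h>) was, roughly:
         *     thread_t st;
         *     thr_create(NULL, 0, worker, NULL, 0, &st);
         *     thr_join(st, NULL, NULL);
         * Source written against that API compiles nowhere else. */
        return 0;
    }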