* Posts by thames

1124 publicly visible posts • joined 4 Sep 2014

Just a reminder: We're still bad at securing industrial controllers

thames

Re: Isolate

Actually, all of the products listed are just managed Ethernet switches. The main reason that people buy stuff like this is that the mounting method and package profiles fit with other industrial hardware and the operating voltage is compatible with standard industrial voltages.

The specifications include a long alphabet soup list of standards and management protocols which most users probably don't understand. Most users probably just plug them in and if they work out of the box they don't bother configuring them.

I can't imagine a valid reason for connecting the management interface to the Internet for most applications these switches would be used in. If it is connected, then you probably already have much bigger problems than the software bugs in question.

Personally I am of the opinion that adding security features and protocols to most industrial hardware is a waste of time, and is even counter productive in most cases as it creates the illusion of security. Industrial control companies are never going to be security experts and industrial control system designers are never going to be IT security specialists. You are probably better off relying upon isolating networks and if you need connections outside the machine boundaries to add IT industry standard hardware which has been configured by someone who does that sort of thing for a living.

Adi Shamir visa snub: US govt slammed after the S in RSA blocked from his own RSA conf

thames

Re: So where would they move it to?

Some traditionally US conferences have been talking about moving to Vancouver specifically because of visa issues. You might want to add Singapore to your list of desirable locations as well.

I also wouldn't rule out London just on the basis of Brexit. That issue might be all consuming so far as people in the UK are concerned, but most of the rest of the world really couldn't care less about it unless they are running a business which is directly affected. Conference goers are more concerned about the availability of direct flights, hotel rooms, decent restaurants, low crime, and good conference venues.

Pretty much any potential location will have some drawbacks, and no destination will be completely free of visa problems for at least some attendees. It's all relative though, and at present holding international conferences in the US is becoming difficult enough that more than a few organizers are seriously looking for alternatives.

Huawei, your way, whichever way. We're cool with being locked out, defiant biz insists

thames

We know for a fact that the US is doing everything they claim that Huawei might be doing. The NSA monitors communications on a mass scale. The US routinely hacks into the communications infrastructure of Europe to spy on senior European politicians. The US NSA has direct access to major data links which connect data centres in the US, giving them access to unencrypted data stored there, and they do take advantage of it. US networking hardware gets equipped with backdoors in a targeted fashion. Under US law, American companies must hand over to the US security services, on demand, any data they have access to, and they can be imprisoned if they tell anyone about it. This has all been in the news over the past few years and so isn't in dispute.

Should the rest of the world therefore ban all US networking equipment and US companies from anything related to communications or critical IT systems? I mean if you are going to be consistent, then that is pretty much where your argument leads.

Meanwhile the US has declared Canada to be a "national security threat" because Canadian steel and aluminium might do something bad to the US. They also say that Germany is a "national security threat" because German cars might suddenly start goose stepping down American streets when Angela Merkel presses a button on her desk.

And now the US president has recently announced that he wants US industry to lead the world in 5G technology. I think it's pretty clear how the US determines what constitutes a "national security threat". It's called mercantilism, and it dominates US policy making these days.

Most of the rest of the world seems to be saying "thanks but no thanks", as they don't see any reason for making themselves poorer in order to bail out American companies who have fallen behind in the technology race.

If there is a genuine argument to be made for security in telecommunications hardware, then it is an argument which says that each country should only be installing kit made within their own borders by companies controlled by their own citizens and monitored closely by their own security services acting under the control of their own parliaments. Perhaps that doesn't sound like the best of ideas, but it's exactly where this anti-Huawei argument leads for those willing to be honest and consistent about it.

thames

Reproducible Builds

@El Reg said: "In the past GCHQ/NCSC had obliquely floated the idea that they weren't certain if what was being tested was what was being deployed."

This is a problem in the software world in general, where people conduct code audits but have no way of knowing if the executable binary built by someone else is made from the same source code as they audited.

Debian developers have been working on the issue of "reproducible builds". That is, to get the exact same binary time after time, so you can check if a pre-built binary given to you was made from the same source code as one you audited.

I suspect that Huawei will need to do something like that. Once they do that, then the auditors can post hashes which customers who care about these issues can check against a binary which they receive through their supplier support channel.
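
As a rough sketch of what the customer-side check could look like (the file names below are invented for illustration, and the exact tooling would be whatever the auditors and Huawei agree on), it might be as simple as:

    # Hypothetical example: the auditors publish a file of SHA-256 hashes for
    # the builds they audited, and the customer checks the binary received
    # through the support channel against it.
    sha256sum -c audited-hashes.sha256
    # where audited-hashes.sha256 contains lines like:
    #   <64-hex-digit hash>  firmware-1.2.3.bin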

ReactOS 0.4.11 makes great strides towards running Windows apps without the Windows

thames

What is the purpose of this?

Viewed as a hobby, hats off to them. As a practical project though, I think it's questionable. Windows is a monster of an OS to replicate, and they seem doomed to always be a decade or more behind.

I have to wonder if they wouldn't be better off focusing on being a better WINE. They could be based on top of a specific Linux or BSD distro and come pre-installed and pre-integrated.

They might also look into coming out with a server version, also based on a sort of super-WINE. There are probably quite a few people who have legacy Windows server software, aren't tied into vendor support, and can run it in a VM.

The world has changed greatly since the ReactOS project started. A lot of software has moved to the web for example, where the browser and not the OS is the platform. For that which isn't, more people interact with Android on a daily basis than Windows. As a consumer OS, there's not much future in emulating Windows, outside of games.

A lot of businesses are locked into Windows for the reasons of legacy software, but most large or medium size businesses aren't going to run a large estate of ReactOS desktops without vendor support.

There might be a place for running specific types of legacy software on Windows for those who have otherwise binned Windows altogether and don't care about vendor support, but do you really need a full OS to do that?

Customer: We fancy changing a 25-year-old installation. C'mon, it's just one extra valve... Only wafer thin...

thames

Let me guess, the controller was a Siemens product, right?

@El Reg said: "Apart from anything else, all drawings and notes had been put in their possession, as per the original contract, and guess what? They'd lost them," said Geoff. (...) "First, the control sequence was not the same as what we'd been told, so we had to rewrite the program on-site to take this into account."

Been there, done that. You have to assume this is the state of affairs in any industrial retrofit project you go into. It seems to be almost standard practice in industry. You need to analyse the system and look for this problem well before the scheduled date of the retrofit so you aren't trying to deal with it on the fly.

@El Reg said: "We eventually realised the guy had discovered a way of bulk updating large blocks of data. He'd used this enthusiastically, blissfully unaware that it also corrupted variables in the 'gaps' that he wasn't using – but we were trying to."

Let me guess, it was a Siemens S5, right? There are instructions which directly overwrite absolute memory addresses. You can not only overwrite data, you can actually overwrite code as well. How do I know this? From having to debug a program written by one of the biggest equipment companies in their industry when everyone was convinced it was a hardware problem while it smelled like a software bug to me.

The problem with S5s was that as well as ladder they gave you access to a lower level instruction set that had powerful instructions comparable to assembly language, but which ran in an interpreter (except for some models which had a custom logic coprocessor running M5 instruction code, but that's another story). This looked sooo seductive to bored full time PLC programmers who would find "clever" ways of doing things instead of using less interesting but more readable methods. The more formal education the programmers had, the more they were inclined to do things like this.

The Siemens S7 series which replaced the S5s took away some of the more dangerous instructions, but added even more pointlessly complex low level ways of doing stuff. The result was of course loads of unreadable S7 code out there, which even the author struggles to understand. This more or less defeats the whole purpose of having a PLC in the first place, as the software is intended to be understandable by non-programmer electricians. I think that some people see this unnecessary complexity as a form of job security.

It is comparatively easy to solve a complex problem with a complex solution. It seems to take at least 5 to 10 years working in the field on a broad range of industrial problems before someone gains enough experience to come up with simple solutions to complex problems. Some people never seem to reach that point.

The best PLC programmers that I have met tend to be older electricians, while the worst seem to be younger electrical engineers. The former tend to think about how something is supposed to be maintained after they've walked out the door, while the latter tend to have trouble imagining that something they worked on could ever go wrong.

Happy graduation day, Containerd! Canonical has something for you

thames

Re: Its's just .....

I'm not sure what article you are responding to, but this one is about Canonical announcing their support for the industry standard, which is Containerd.

As for Mir and Unity, neither of these are in their current LTS release as standard features.

Europe-style 5G standards testing? Consistent definitions? Who the fsck wants that, asks US mobe industry

thames

Re: Stalling?

I think it has to do with wanting to stall 5G adoption until they figure out how to get American companies into the heart of the 5G standards.

I believe that I read a while ago that royalties on standards patents accounted for roughly 40% of the cost of a DVD player, while the companies that actually manufactured them made razor-thin margins. Most of those standards were owned by American or Japanese companies.

If IoT and 5G really live up to their billing, then the big money from 5G is going to be made by the companies who will collect royalties on the standards for everything which incorporates them, which means nearly everything made for any purpose (look at the latest overpriced crap from Nike for an example of IoT shoes). The American government will see it as their duty to ensure that it is American companies collecting that economic rent on nearly everything made and sold. They have said repeatedly and clearly that they want the US to be the global "leaders" in all things.

Keep in mind that this is the same American government who claim that the Canadian and European steel and aluminium industries are a threat to national security and imposed massive import tariffs on them, and are preparing to do the same with automobiles.

If there were genuinely some serious threat posed by Huawei equipment, then I would imagine that we would be seeing some actual evidence for it by now. There are enough independent security researchers out there that could have found something if there were even a hint of something to find that was at all credible.

If the American government have some clear evidence, then let's see it. Just saying "trust me" isn't going to convince many people outside of the US about "national security threats" whether we're talking about bars of aluminium, cars, or 5G equipment. The American record so far doesn't exactly inspire trust.

thames
Boffin

Re: Tangentially related

Yes, the Americans have declared that Chinese 5G equipment is a "national security threat". They are also claiming that Canadian steel and aluminium are a "national security threat", and are preparing to declare that German cars are also a "national security threat".

Draw your own conclusions as to how the Americans decide what constitutes a "national security threat".

Boffins debunk study claiming certain languages (cough, C, PHP, JS...) lead to more buggy code than others

thames

Re: Too simplistic

Much of what determines which programming language is selected for an application has to do with economics, which in turn has to do with how the application is being deployed and managed, as well as what libraries are available to implement it.

I've just had a glance at the report, and while I have not read the whole thing, it's pretty clear the authors have pointed out some gaping holes in the original study. One obvious problem is that actual programming languages don't fit into the neat pigeon holes that the original study assumed. For example, functional languages may have procedural features and vice versa, so users will commonly combine the two methods in the same project, making any conclusions based on a neat separation of the methodologies void. There are more examples, including the fact that large chunks of the data the original study was based on were nowhere to be found.

The studies themselves were based on comparing the number of bug fix Github commits to the number of total commits. In other words, what proportion of commits were bug fixes. A "bug" was determined by scanning for the words "bug", "error", etc. in the commit message.
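
A rough sketch of that kind of classification (the keyword list here is illustrative, not the exact one the studies used):

    # Count a commit as a "bug fix" if its message contains any bug-related
    # keyword, then report what proportion of all commits were flagged.
    BUG_WORDS = ("bug", "error", "fix", "fault", "defect")

    def is_bug_fix(message):
        msg = message.lower()
        return any(word in msg for word in BUG_WORDS)

    def bug_ratio(commit_messages):
        if not commit_messages:
            return 0.0
        flagged = sum(1 for m in commit_messages if is_bug_fix(m))
        return flagged / len(commit_messages)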

One pretty obvious problem that neither set of authors appears to have addressed is the relative maturity of projects compared to languages. Projects that have been around for a long time and are relatively stable are likely to be written in languages that have been popular for a long time, and it is possible to hypothesise that their commits are more likely to be focused on bug and security fixes rather than new features. On the other hand, projects that are more recent have a greater probability of being written in languages that have become popular only more recently, and to be in a phase in which they are receiving more feature commits rather than bug or security commits. And if we examine how the languages cluster in the study, they seem to fit that pattern. Of course correlation doesn't equate to causation (as the second study points out), but it is yet another possible explanation which should be considered.

And as the study itself points out, there are many "bug" commits which are just fixing cosmetic, style, or comment issues rather than fixing actual functional bugs. The methodology is not able to distinguish these despite some "communities" being more obsessed about these issues than others.

What is noticeable is that most of the well established and popular programming languages appear to cluster pretty closely together while the "boutique" languages are in another cluster. It is not obvious to me how this could be explained simply by language feature rather than being a reflection of who is using them and how.

There are two interesting outliers however: C and Perl. C supposedly has an abnormally high number of bugs, while Perl has an abnormally low number of bugs (see figure 5). If we want to make any language superiority claims based on the study, then evidently if we want bug-free software we should be writing it all in Perl. Or perhaps it's all just bollocks after all.

Bish, Bash... gosh! Good ol' Bourne Again Shell takes a bow as it reaches version five-point-zero

thames

Re: "Trusty command interpreter"?

Ubuntu works the same as Debian, as Ubuntu is a close derivative of Debian. There are two shells, the interactive shell and the non-interactive shell. If you simply open a terminal to type commands in you get the interactive shell, which is Bash. If you run a script with #!/bin/sh, then you get Dash. If you want your script to run with Bash, then you need to specify #!/bin/bash.

Bash has features which make it easier to use interactively as well as more advanced syntax with which to write complex scripts. On the other hand, since Dash lacks those features the executable is smaller and so it starts up faster, which is useful if you are running lots of small simple scripts (like in the pre-Systemd init system when this combo originated).

The main thing you have to look out for is when you specify #!/bin/sh in a script but then try to use advanced Bash features and get very unhelpful Dash error messages. This is not that uncommon if you google for scripting examples since so many people just assume that everyone uses Bash. If you are using a Debian derivative and get baffling syntax error messages in a third party script, it's worth a try to change the #! to specify Bash to see if that helps.
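
A minimal example of that failure mode (the script is just a sketch, and the exact wording of the error varies):

    #!/bin/sh
    # '[[ ... ]]' is a Bash extension, not POSIX sh, so under Dash this line
    # dies with a terse message along the lines of "[[: not found" rather than
    # anything pointing at the real cause. Changing the first line to
    # #!/bin/bash (or rewriting the test as '[ "$1" = start ]') fixes it.
    if [[ "$1" == start ]]; then
        echo "starting"
    fi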

Generally, I prefer working in Bash because the more user friendly syntax for advanced features means that I am less likely to make errors which have to be debugged. For really basic scripts however it doesn't make much difference.

Dev's telnet tinkering lands him on out-of-hour conference call with CEO, CTO, MD

thames

@El Reg said: "Because the firmware on a lot of the devices wasn't updated for years, their telnet client had a number of quirks, such as it wouldn't ask for a password or would only ask for a password," he said.

This is exactly the sort of situation that "expect" was created to deal with. You send this string, receive that response, send this reply, wait for time outs, etc. "Expect" comes back with whatever exit code you define for each situation which you can analyse from the script you called expect from. The "expect" scripts can be stand alone, or you can embed them directly in bash scripts. The main script logic is done in bash (or something similar) while "expect" handles the interactive parts.

In this case I would have defined an "expect" script which looked for the anticipated response, and if the remote system responded differently I would have logged an error to a file and carried on to the next station. I could then have analysed the error log later to see what happened, and then added another "expect" case to the original script to deal with that situation.
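
Something along these lines, perhaps (the host, prompts, credentials and commands below are all invented for illustration):

    #!/usr/bin/expect -f
    # Hypothetical sketch: run one command on a remote box over telnet, coping
    # with firmware that may or may not ask for a password, and exit with a
    # distinct code on timeout so the calling bash script can log it and move on.
    set host [lindex $argv 0]
    set timeout 15
    spawn telnet $host
    expect {
        "Password:" { send "secret\r"; exp_continue }
        "login:"    { send "admin\r";  exp_continue }
        "> "        { send "show status\r" }
        timeout     { exit 2 }
        eof         { exit 3 }
    }
    expect "> "
    send "exit\r"
    expect eof
    exit 0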

I have used "expect" to deal with a somewhat different problem and found it works quite well. I believe that the most common use for it is automating log-ins on systems that can't use ssh keys or the equivalent for whatever reason.

I'm not saying that I wouldn't have made the same error as the original author, but if I had anticipated the problem then there's a tool which exists specifically to deal with this problem.

Top AI conference NIPS won't change its name amid growing protest over 'bad taste' acronym

thames

And here was I after reading the title thinking it would have been the Japanese who were complaining about it.

Apache OpenOffice, the Schrodinger's app: No one knows if it's dead or alive, no one really wants to look inside

thames

Counting is not so easy.

With respect to download counts, Linux users normally get their copy of LibreOffice from their distro, either as part of the default install or from their standard repos. I believe that most of the major distros ship LibreOffice, not Apache OO. Linux users have little or no reason to get their copy directly from the LibreOffice site. This means that a major part of LibreOffice's user base won't show up in their download counts.

Then there are derivatives and rebranding, such as NeoOffice, which can also make it difficult to get accurate user base figures for either LibreOffice or Apache OO.

Decoding the Chinese Super Micro super spy-chip super-scandal: What do we know – and who is telling the truth?

thames

Re: My take?

There seems to be a general rule of thumb that when US intelligence departments leak alarming stories via compliant press contacts, it's usually the case that the US is already doing this themselves and are sweating buckets over the thought that someone else might be doing it as well. We saw exactly this in the run-up to the Stuxnet reveal, and we saw exactly this in the backdoors being installed in Cisco networking equipment.

I remember the same sort of vague but alarming stories claiming that foreign powers were infiltrating SCADA systems and could use that to destroy utility equipment. They even built a lab type setup of a diesel generator with an attached SCADA system and demonstrated it. Meanwhile the utility industry scratched their heads in puzzlement, because despite the alarm and panic in government, industry couldn't pry any actual details out of them so they could take preventive action, and nobody was seeing it in the wild. And then the Stuxnet story came out and we found out the panic was about how the US (with the assistance of Israel) had infiltrated the SCADA systems controlling Iranian enrichment equipment and was using it to conduct sabotage, and the US were afraid they would be hacked back.

To go back to the mysterious motherboard chips, if this was real, I would expect someone to present actual hacked hardware along with demonstrations of what it did. After all, if the story were real then it's not like the Chinese wouldn't already know everything about it, so what's the point of hiding it?

And Amazon's and especially Apple's denials are pretty strong. If they were obfuscating the issue, then they would just release their usual vague waffle.

I suspect this story is complete bullshit. The use of a security company in Ontario Canada is also very interesting. At this very moment the US is putting lots of pressure on Canada to try to get them to ban Huawei equipment from important Canadian networks. It would not be surprising if this whole story were to be an exercise intended to pressure allies into stepping into line behind the US in freezing Chinese tech companies out of western markets in favour of equipment that has the backdoors of "friendly" countries in it.

Brexit campaigner AggregateIQ challenges UK's first GDPR notice

thames
Boffin

Anonymous Coward said: "Serious question but how are the ICO going to enforce the GDPR against a Canadian company?"

I imagine it would involve the ICO going to a UK court asking that the judgement be enforced, followed by the UK court filing appropriate papers with a Canadian court asking them to enforce the UK decision. AggregateIQ would then appeal to a Canadian court asking that it not be enforced, and then after some back and forth with lawyers, the Canadian court approves the UK request and the ICO gets their judgement approved.

UK law is considered to be close enough to Canadian law (closer than any other country) and the UK courts fair enough that the Canadian courts are not likely to question their judgements too much provided the proper paperwork has been filled out.

The ICO may have to wait in line however. Cambridge Analytica, AggregateIQ, and Facebook are already under investigation for the same or related matters by the ICO's Canadian equivalent, the OPCC (Privacy Commissioner) over violations of PIPEDA, which is Canada's equivalent of GDPR.

The OPCC web site mentioned six months ago that they are in contact with the UK ICO on their related investigation. It appears that the UK and Canada have been cooperating with each other on this matter for some time now.

GitLab gets it, grabs $100m to become $1bn firm

thames

Or Amazon. One of the major cloud providers likely will, in order to provide a seamless path from development to deployment and so increase sales to their cloud.

They don't need to make GitLab exclusive to their cloud in order to do that. They just need to be able to direct development to ensure that it supports all features of their cloud and that any operating assumptions built into GitLab mirror the ones in their cloud architecture.

Excuse me, but your website's source code appears to be showing

thames

And the problem is poor Wordpress configs

If you read the actual report it becomes apparent that the problem seems to be mainly from Wordpress or Wordpress-based systems. The author does a lot of analysis, but in the end it comes down to poor Wordpress installs. There seem to be a few other similar systems as well, so it's not solely Wordpress, but the big one is Wordpress. That's not to say that Wordpress is inherently bad, but it is very, very popular, and very widely used by people who don't necessarily know what they are doing.

I suspect that most of the problem comes from hosting providers offering "one click install" options for many of the most common hosted systems from a management panel, while not making those standard options secure by default. They seem to be deploying via Git, and when Wordpress and similar systems detect they have been deployed from Git they disable automatic updates (presumably because they believe the administrator will want to handle that himself under those circumstances). If the hosting provider doesn't keep up with security updates, then that adds to the problems still more.

I suspect if the hosting providers were to fix their standard offerings most of this problem would go away as far as new installs go. The major issue would then be whether they could fix what their existing customers have already got deployed.

Redis has a license to kill: Open-source database maker takes some code proprietary

thames

Re: Wait and see

Redis Labs are going to what is called an "Open Core" model. This is where the main software project is open source (in this case Redis itself), but add-on modules are proprietary (adding the Commons Clause to the Apache license has an equivalent effect). If you don't use the add-on modules then it doesn't really affect you.

A number of other companies do this; Oracle MySQL is perhaps the best known. There aren't however many well known examples, because there simply aren't that many successful examples of Open Core as a business model. It really only seems to work in cases where the main software system has a very narrow and specialised market where there is direct contact between the software creator and the customer, and there aren't many third parties interested in creating add-ons. Perhaps the most successful "open core" example is Google Android, where the base OS is open source but Google Play Services is proprietary. There aren't a lot of other successful large examples however.

I'm not sure this is really such a big change to what Redis was already doing however. The reason that most companies use an AGPL license rather than a standard GPL license is so they can charge customers a fee for selling them a non-AGPL version (unlike standard GPL, AGPL is less convenient to use on proprietary web services).

In the case of Redis, the base database was and remains BSD. Some of the add-on modules doing things like full text search however are changing from AGPL to Apache + Commons Clause. The latter apparently achieves the same intent (from Redis Labs's perspective) as the former when used in "cloud" operations as opposed to enterprise data centres which were Redis Labs' traditional market for add-on products.

Redis Labs' main problem is that their core customer base of business enterprises is to a large extent moving from their own data centres to cloud operations. Since the cloud vendors are increasingly providing the core software infrastructure as well as the hardware to run it on, that removes a lot of their traditional customer base contact, and the ability to charge support fees along with it.

This follows general long term industry trends as lower layers of the stack, from hardware to operating system to databases, increasingly become commodities. Profitability increasingly comes from more specialised software higher up the stack which has not yet become a commodity, or from services which are inherently more difficult to commoditise due to economies of scale.

Older vendors selling legacy software or software/hardware combinations with a great deal of customer lock-in such as Oracle databases or Microsoft Windows or IBM mainframes are another profitable business model. However, all of these examples face eroding revenues as new development goes elsewhere and existing customers gradually drop away through natural attrition.

SUSE and Microsoft give enterprise Linux an Azure tune-up

thames

Re: At what cost?

From what I can tell it's just something to let the kernel know that it is running in a VM and to use the VM's direct interfaces for storage and networking rather than using emulated interfaces. This is something that Linux versions optimised for VMs have been doing for years.

Generally, when you are running a generic kernel on a VM you lose some I/O capacity if you are talking to it as if it were emulated hardware. Most VM makers offer a way around that so that the I/O systems can talk directly to the VM bypassing the emulation features.

About six months ago Microsoft started offering a version of MS Azure with hardware accelerators for I/O (google "TCP offload engine" for examples). Such things have been available for years in things like NIC interfaces if you run directly on your own server hardware instead of using "cloud" versions.

"Cloud" versions of course require additional support from the VM so that different cloud instances can share the hardware without stepping on each other's toes. The new Suse version has just added modules to use the interfaces in Azure for this.

I'm not sure what is really new in this announcement, since according to Microsoft the previous version of Suse had this, as well as Red Hat, CENTOS, and Ubuntu. It might be just that there was a delay in support for this feature in the new version of Suse that came out recently but now it's there for people looking to upgrade their version of Suse.

Drink this potion, Linux kernel, and tomorrow you'll wake up with a WireGuard VPN driver

thames

Re: Why?

@Anonymous Coward said: "I had a ran a Linux VM that was optimised to be a VPN server (...) These days, you'd be lucky to get away with a 4GB USB stick."

Let's see how that compares to the actual size of a popular Linux ISO.

Ubuntu desktop: size is 1,953,349,632 bytes. That includes a full GUI (Unity), office suite (LibreOffice), web browser (Firefox), email client (Thunderbird) and all sorts of media players and other odds and ends. If you just want the desktop with web browser, pick minimal install and the other stuff will get left out.

Ubuntu server: 845,152,256 bytes. That is a bit under half the size of the desktop ISO.

For Debian I just have the server net install, but the installed size won't be much different than the above.

FreeBSD (not Linux, but we'll list it anyway) is 2,930,397,184 bytes.

OpenSuse is 3,917,479,936 bytes, although you don't have to install everything it includes.

Those are the ones I happen to have sitting around. The Ubuntu desktop will boot directly from the ISO so you can try it out without installing it.

If you want to know the install size, then as an example a Debian 9 64 bit server has a Virtual Box VDi size of 1.7 GB. However, that includes a C compiler and a bunch of other extraneous stuff that I use for testing software, so you can probably cut that down somewhat.

Spectre/Meltdown fixes in HPC: Want the bad news or the bad news? It's slower, say boffins

thames

Meltdown seems to be Intel specific, while Spectre is a more general problem relating to speculative execution.

For ARM, it will depend on the specific model. Some ARM chips are affected and some are not. There is a list somewhere of what models are affected.

For example, the Raspberry Pi Foundation have said that all models of Raspberry Pi are immune to both Meltdown and Spectre due to the model of CPU they use.

Generally, some of the top end ARM models are affected by Spectre, while the rest are not. What this has generally meant in practice is that the most expensive premium model mobile phones have a potential problem while the medium to low priced Android phones are largely immune. The bulk of embedded applications using ARM are also probably immune.

Hooray: Google App Engine finally ready for Python 3 (and PHP 7.2)

thames

Re: about bl**dy time

If I recall correctly, AppEngine was stuck on using an old version of Python because they had taken a snapshot of Python and heavily hacked the source code to introduce language run-time level checks and limits on what a user program could do in order to add sandbox style isolation without using VMs.

It sounded clever, but it had several major drawbacks. The obvious one was that they were stuck on one increasingly old and obsolete version of Python while the rest of the world moved on. Another was that they could only support a subset of the Python standard library and no user C modules.

The Java version they supported also had comparable limits, but I don't know the details of that.

"GVisor" replaces the need for a custom version of Python so they can now support up to date versions without having to hack the language run-time.

Most major Python projects such as Django, SciPy, and others have either already dropped or are in the process of phasing out support for Python 2.x, and many newer libraries have never bothered supporting 2.x in the first place.

While you may not want to try running something like Django on App Engine (I don't know if it is even possible), App Engine's lack of support for modern versions of Python was leaving it increasingly isolated in terms of third party library support. Since Google was running their own version of Python, the end of life date for mainstream Python wouldn't have affected them. However, lack of third party libraries, lack of new educational material, and just generally being out of the development mainstream would. This I think was the motivation for replacing their original solution with gVisor.

And the overall motivation for the update I think is that Google are now putting more emphasis on App Engine due to the current trendiness of "serverless", the latter of which was the basic idea behind App Engine to begin with.

On Android, US antitrust can go where nervous EU fears to tread

thames
FAIL

Two issues

The article is full of holes as it confuses two different issues.

One problem is that Google tells phone vendors that if they want to sell genuine Android phones they can't also sell any that use a forked Android. This is quite effective in preventing companies such as Samsung from coming out with their own Android fork which is mostly compatible with actual Android and forces them to make something completely different and incompatible such as Tizen. That is a much bigger gap to jump to create an attractive product.

The other issue is the services market, which includes mail, location, ads, etc. Google has theirs which only works with Android. Apple has theirs which only works with their phones. And there are several Chinese companies who are able to offer the same for their local market, but only with a forked Android. The fact that Google is only a minor player in China and China is the world's biggest mobile phone market probably goes a long way to explain why non-Google Android succeeds there.

The example the author is looking for is Russia. Google lost an anti-competition case in Russia and can no longer demand exclusivity of its applications in Russia. This includes search, where Google has to provide a window which lets the user select what search engine to choose. The case originated in a complaint from Yandex.

The story would have been much better if the author had discussed the Russian case and how that precedent might be applied to the EU.

No big deal... Kremlin hackers 'jumped air-gapped networks' to pwn US power utilities

thames

The usual pattern for this sort of thing is that it starts when the US do this to someone else. The US counter-intelligence department then find out what their colleagues on the floor above have been up to and crap their pants over the thought that someone might do the same to them. They then stage a series of leaks into the press that someone else has been doing it to them in an effort to whip up enough publicity to spur the industry into taking some preventive measures.

Prior to the news of what the Americans did to Iran with Stuxnet, there was a long series of "confidential intelligence briefings" to selected newspapers and politicians about how US utilities may be vulnerable to being hacked. A demonstration using a specially set up diesel generator (simulating a power plant) was conducted which was supposed to show how SCADA systems could be infiltrated.

The utility industry just shrugged it off, as they weren't seeing any of this in practice. And then Stuxnet hit the news and we saw that it had done exactly the sort of SCADA infiltration that the Americans had claimed was the threat to US utilities.

And then there was the big campaign using the same PR techniques over how Chinese IT gear might have back doors in it. Nobody could find these back doors, but we were assured they might be there and it was a huge national security risk. And then it turned out that the American NSA was putting back doors in Cisco kit.

I could go on with more examples, but the pattern follows a well-worn groove by now. The US hacks someone else, they crap themselves over the thought that someone might do the same to them, they start a propaganda campaign via the channel of suitably compliant major news media to whom they give an "exclusive" in return for not asking the wrong sort of questions, and industry is left to wonder "WTF?" because the story is full of holes due to so many details being held back because of course the US doesn't want the target they had actually hacked to find out what had been done.

To address the story in particular, very likely the "air gapped" systems aren't actually air gapped. The utility has an "air gap" policy, but an exception was made for remote vendor support. The vendor isn't air gapped because they're too small to have a dedicated IT security team who could plan such a thing. And true "air gapping" probably isn't practical to begin with because the vendors are software developers who need to get software updates from Microsoft and their PCs need to connect to the Internet on a regular basis to validate software licenses, etc., etc.

And if software updates from the vendors to the utilities aren't conducted on a timely basis, ordinary bugs can crash the electric network just as surely as malicious action could.

Genuine security is probably possible, but it would require a complete overhaul of the industry and the relationships with vendors and the software development environments they use, and that simply isn't going to happen any time soon.

Oldest swinger in town, Slackware, notches up a quarter of a century

thames

Well Ahead of Red Hat

El Reg said: "2017 saw the distribution drop to 31 in page hit ranking, according to Linux watcher DistroWatch.com, from position 7 in 2002. "

Well, they're 28 on the list right now, and way ahead of Red Hat who are at 45, sandwiched in between Kubuntu and Gentoo. Number 1 on the list is Manjaro, who are so far ahead of the rest of the pack that nobody is even close to them.

Years ago Distrowatch had to write a complaint about "abuse" (their word) of the counters by fanbois who were gaming the system to try to push their favourite distro higher on the list. Or as Distrowatch themselves put it, "a continuous abuse of the counters by a handful of undisciplined individuals who had confused DistroWatch with a poll station".

Distrowatch rankings have nothing to do with how "popular" any specific distro is. They're just a count of how often someone clicks on the page that describes that distro. The average Linux user never has any reason to ever visit Distrowatch, which means that the ranking is simply an indicator of what caught the eye of the sort of person who collects Linux distros the same way that some people collect stamps.

GitHub to Pythonistas: Let us save you from vulnerable code

thames

Re: pickle

@stephanh said: "What do you consider a known vulnerability?"

A known vulnerability is something that is supposed to be secure against attack but isn't. Pickle wouldn't count as a vulnerability, because you are essentially just serializing and unserializing executable object code and data. This is something you do between different parts of your own application, not with data from outside. The docs, as you said, make this clear. If your application un-pickles data from untrusted sources, the mistake is yours since you were explicitly told not to do that.

For untrusted data you would use something like JSON. If there were a bug in the JSON decoder which allowed someone to execute arbitrary code, then that would be a vulnerability.
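
A small illustration of the difference (the class and payload below are purely hypothetical, and obviously not something to run against real untrusted input):

    import json
    import os
    import pickle

    # Un-pickling can execute code chosen by whoever crafted the data:
    class Evil:
        def __reduce__(self):
            return (os.system, ("echo pwned",))

    payload = pickle.dumps(Evil())
    pickle.loads(payload)       # runs "echo pwned" as a shell command

    # json.loads() only ever builds plain data objects; bad input raises an
    # exception rather than executing anything:
    data = json.loads('{"x": 1}')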

Most programming language libraries have something to let you execute OS shell commands. That is potentially dangerous if you were to write your application such that anyone could execute arbitrary shell commands via the web interface. However, that wouldn't be a programming language vulnerability, that would be a vulnerability in your program, since you should not provide a feature that does this.

Something is a vulnerability when it can do something dangerous that wasn't in the documentation.

Python creator Guido van Rossum sys.exit()s as language overlord

thames

Re: Reinventing a more limited wheel

@rgmiller1974 said: "I'm curious about the example thames posted. Is ... really any slower than ..."

The interpreter doesn't cache the results of f(x), and I doubt it would be feasible to determine if it could do so in all cases. Static analysis couldn't determine that since function "f" could be written in another language (e.g. 'C') for which you might not even have the source code and dynamic analysis would run into similar problems.

The new syntax achieves the same result under the control of the programmer as well as being useful in other applications. Plus, you can see what is going on without having to analyse the behaviour of f.

thames

@AC said: "Any language that depends on differing amounts of whitespace to alter the program is stupid. "

For those who have moved on since the days of GWBASIC, everybody (other than you apparently) indents their code in a way which is intended to convey meaning about it.

Differing amounts of white space alter the meaning of programs in all programming languages - in the eyes of the programmer for whose benefit those visual cues are present. The fact that in most programming languages indentation level doesn't alter the meaning of the program in the "eyes" of the compiler is a major problem.

The Python compiler reads code the same way that a human would and derives meaning from the indentation level similar to how a human would. That eliminates whole classes of errors which would derive from humans reading it one way and the compiler reading it another. And once the compiler uses the same cues that the programmer does the block tokens become redundant and can be eliminated.

thames

Re: Here's a PEP

Just use #{ and #} where you would like to use brackets and you can put in as many as you want.

thames

Re: Reinventing a more limited wheel

I would be fascinated to hear how you would do the following in one line of idiomatic C using commas.

results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]

The major objective appears to be avoiding duplicating work unnecessarily when doing multiple things in a single expression. The previous way of doing the above would have been:

results = [(x, f(x), x/f(x)) for x in input_data if f(x) > 0]

I can think of multiple instances in which I could have used this feature in cases similar to the above.

thames

Re: Futuristic progression of Programming Languages?

A program written in Python can be a fraction of the number of lines as a program which does the same thing in C.

Time is money, or whatever other means you want to measure the value of time in. You can get a finished program in fewer man-hours. That matters in a lot of fields where being first to market is what counts, or where you are delivering a bespoke solution to a single customer at the lowest cost, or where you have a scientific problem that needs solving without investing a lot of time in writing the software part of the project.

Python isn't the best solution to all possible problems, but it is a very good solution to a lot of problems which are fairly prominent at this time. It also interfaces to C very nicely, which allows it to use lots of popular C libraries that already exist outside of Python itself. These are why it is popular right now.

There is no one size fits all solution to all programming problems. It is in fact considered to be good practice to write bits of your program in C and the rest in Python if that is what makes for a better solution for your problem. There is no necessity to re-write everything in Python in the manner that certain other languages require everything to be re-written in "their" language. The result is that Python has become the language of choice for a lot of fields of endeavour where you can reuse existing industry standard C and Fortran libraries from Python.

Van Rossum's "retirement" isn't a huge shock and won't make much difference. For quite some time other members of the community have been taking the lead in developing new features and Van Rossum's main role has been to say "no" to adding stuff that was trendy but didn't provide a lot of value. Everything should continue along fine with the BDFL further in the background. Overall, it is probably a good idea to get the post-BDFL era started now while the BDFL is still around.

I think I'm a clone now: Chinese AMD Epyc-like server chips appear in China. What gives?

thames

Re: Contradictory

They can replace the built-in encryption accelerators and random number generators with their own. The way the US has been putting back doors into systems is to get people who work for them (either openly or clandestinely) on industry standards boards and get subtle weaknesses introduced into the standards. They also bribed American companies to implement these backdoored standards and then certified them as "secure" in order to get them adopted in the market.

These weaknesses make encryption easier to crack. You couldn't prove the standards had a back door unless you knew what the back door was, and independent cryptographers who thought things looked more than a bit fishy were dismissed as tin foil hat wearers. Then it all came out in a set of leaks a few years ago.

How that relates to CPUs is, how do you know that the encryption acceleration or random number generator built into an Intel CPU doesn't have a similar US government backdoor built into it? You don't, which is why Linux kernel devs don't trust the built-in random number generator for use in encryption. They only use it as one of a number of different sources of randomness specifically because of the threat of US back doors.

So, the Chinese can replace the encryption accelerators and random number generators in AMD CPUs with their own. These may possibly have Chinese back doors instead of American back doors, but at least they know the US government won't be reading all their encrypted messages. That is an assurance that the rest of the world doesn't have.

Oh, and if the Americans have back doors in Intel CPUs, then the Russians and a number of other countries probably have managed to get themselves a copy of the same keys as well, one way or another.

Boeing embraces Embraer to take off in regional jet market

thames

Yes, Boeing had been talking with both Embraer and Bombardier about some sort of acquisition or merger. Talks with Bombardier fell apart, and Boeing already had a partnership with Embraer to sell and service their military transport jets (same market segment as the Hercules). A straight out purchase of Embraer was not in the cards though because of political opposition to it in Brazil.

There is widespread speculation that Boeing's plan was to get the US government to give them a monopoly on the US market for this size of jet to sweeten the deal enough to bring Embraer back to the table.

The Canadian government then played matchmaker to get Bombardier and Airbus back to the negotiating table and a deal was made (and is now in effect). The UK government got involved as well as large parts of the CSeries are to be made in the UK. PM May held high level meetings in Ottawa about strategy and then did some high level lobbying in Washington to try to get Boeing's tariff plan killed.

There is also an upcoming major arms deal in Canada where Airbus are now in an improved position for their Typhoon due to Bombardier's local industry links. Boeing meanwhile, who once were seen as having the deal in the bag, have been told their bid will have big negative ratings all over it due to being seen as not being very friendly to Canadian interests (a criterion for this was actually added to the formal bid process because of these events).

Overall, while Boeing may now have got their partnership with Embraer, they screwed up badly.

GitLab's move off Azure to Google cloud totally unrelated to Microsoft's GitHub acquisition. Yep

thames

Next Month's El Reg Story

And next month The Register will report that GitLab is being bought by Google. Someone is going to buy them, and the top candidates would be Google and Amazon.

From here on, Red Hat's new GPLv2 software projects will have GPLv3 cure for license violators

thames

Re: I have a better remedy...

Changing the license to BSD would do absolutely nothing to resolve the questions being addressed here because the BSD license does not even attempt to address the issue of what happens if you were found to be exceeding the terms of the license.

The question of "cure" is with respect to what does someone have to do to get themselves into the clear if they were caught violating copyright law with respect to a published work. GPLv2 as it stands does not address this. BSD also does not address this.

GPLv3 however lays it out clearly that if you bring yourself into compliance with the license then "forgiveness" (in the legal sense) is automatically granted. Under copyright law, formal "forgiveness" is required in order for a copyright infringement complaint to be considered closed. The measure being adopted by Red Hat tacks that aspect of GPLv3 onto the side of GPLv2 without changing the license itself.

When you "violate copyright" you are violating copyright law. The license is your defence as a user against being sued by the copyright holders. A license that is more explicit in this respect is to the user's advantage. A license which does not address this issue leaves it up to the courts and the lawyers to argue it out.

thames

There is no change to the actual license. If the original license was GPLv2, then that remains the license. What happens is they add another file to the project which says that in the event of a license violation, the "cure" procedure for copyright violation will be as specified in the new file. Since GPLv2 doesn't address what happens at that point there is no conflict with that license. Since Red Hat have stated what they would do in that instance, so far as a court is concerned they are as effectively bound by it as they would be if it were a clause in the license itself.

GPLv3 addressed a lot of issues in GPLv2 like this, and is in my opinion overall a better license and what I use in my own open source projects unless I need to conform to the license of an existing project. The GPLv3 drafting process also took in a lot of input from lawyers around the world to correct issues relating to legal systems which are different from that in the US, as well as many other matters.

The main objection that people had against GPLv3 was the provision that manufacturers of locked-down platforms had to provide unlock keys. The main objector back then appeared to be Tivo and other makers of things like home TV video recorders. These days it is cell phone and tablet makers who object to it.

I won't be surprised if eventually they end up with what is effectively a "GPLv2.9" - or a GPLv3 without the anti-lock-down provisions.

Microsoft will ‘lose developers for a generation’ if it stuffs up GitHub, says future CEO

thames

Re: puts a dampener on rival GitLab’s claim?

This is what I'm doing. Within the next couple of weeks I will be setting up an account at GitLab, but will still keep the GitHub repo. The project will simply be hosted in two places. If that works out well, then I may look for a third location as well. I will want to automate this with proper scripting first however so I don't have to do it manually.

My plan isn't to simply switch hosting providers. I did that once before when I moved from SourceForge to GitHub. What I intend to do is to have multiple mirrors where the project is hosted so that the loss of any one of them is not a major setback. There is no point in trying to do that after you have been presented with the choice of either accepting new terms of service or being locked out of your one and only account.
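
For what it's worth, git itself makes the mirroring part fairly painless; a minimal sketch (the URLs are placeholders) is just to give one remote two push URLs:

    # Hypothetical example: keep the existing GitHub repo as "origin" and add
    # GitLab as a second push destination, so every push updates both copies.
    git remote set-url --add --push origin https://github.com/example/project.git
    git remote set-url --add --push origin https://gitlab.com/example/project.git
    git push origin master      # now pushes to both hosts

Note that the original GitHub URL has to be re-added explicitly as a push URL, since adding any push URL replaces the default one.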

So I will be moving to GitLab, but I will still be at GitHub for now as well. This is what I would expect other people who are concerned about this to do as well.

The only question really is which one becomes the primary repo and which one becomes the secondary mirror. A lot of GitHub's value is in the "community" aspect of having the largest number of developers already active there. If the community becomes more dispersed then a lot of that value will fade away.

Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

thames

Re: Shite

@J. R. Hartley said: "Wonder which new and exciting way they're gonna fuck it up."

They'll integrate it with LinkedIn, Skype, Azure and MS developer tools.

Their press release said: "... bring Microsoft’s developer tools and services to new audiences."

Expect to see Github features appearing which hook your code repos directly into MS Azure for deployment. Your Github rankings will be reflected in your LinkedIn profile. If you don't have a LinkedIn profile, one will be automatically created for you based on your Github data. Skype will be integrated into team meetings for projects. MS Visual Studio will have deep integration into GitHub beyond just being a Git client.

So, you'll still be able to use Github via the web interface and via the command line Git client, but every possible Microsoft service that can be integrated into Github will be to the degree that a software developer could work through the life of an entire project without ever leaving the Microsoft walled garden.

Microsoft just paid a staggering amount for Github (three times as much as press analysts were speculating) and they will be looking for ways to make that back. Introducing a variety of forms of vendor lock-in in order to sell other goods and services is the obvious choice here.

I'm looking into what is involved in setting up a Gitlab account. I won't pull my open source projects from Github, but I won't use it as the sole public repo any more. Just like a lot of Youtubers have come to the realisation that they need to diversify their options rather than being at the mercy of Youtube's latest policies, I'm going to make sure that I can pull the plug on Github at any time if necessary just like I did with Sourceforge.

P.S. Don't be surprised if Amazon come out with some sort of response to this.

Welcome to Ubuntu 18.04: Make yourself at GNOME. Cup of data-slurping dispute, anyone?

thames

@AC said: "As a random example, lets say you're a manufactuer that has a line of custom Linux laptops. Want really good support added to them for nearly no cost? Well then, send in ten or twenty thousand entries for your stuff, randomising things to look legit and using fake source IP info."

Or just send an email to Canonical telling them that you are a manufacturer who is planning on coming out with a line of custom Linux laptops and that you would like them to work with Ubuntu out of the box on launch. Then ask them if their developers would like some free laptops. They're happy to work with anyone who wants to support Linux.

However, just have a look at the type of information being collected. According to the story it just amounts to the following:

  • Ubuntu Version.
  • BIOS version.
  • CPU.
  • GPU.
  • Amount of RAM.
  • Partitions (I assume that is number and size of disk partitions).
  • Screen resolution and frequency, and number of screens.
  • Whether you auto log in.
  • Whether you use live kernel patching.
  • Type of desktop (e.g. Gnome, Mate, etc.).
  • Whether you use X11 or Wayland.
  • Timezone.
  • Type of install media.
  • Whether you automatically downloaded updates after installation.
  • Language.
  • Whether you used the minimal install.
  • Whether you used any proprietary add-ons.

There are basically two types of information there. One is basic parameters such as RAM, CPU, GPU, hard drive size, etc. That tells you what you should be targeting in terms of hardware resources, and whether your desktop (e.g. Gnome) is getting too fat for the average user (as opposed to the average complainer, by which point you are far too late to be addressing the issue).

The other is what install options people changed compared to the default install. If most people don't pick live kernel patching, then you know not to make that option the default. If a lot of people are selecting Urdu as the language, then you might want to make sure that language has better default support. Etc.

Ubuntu will publish this information publicly. Personally I am looking forward to the RAM and CPU type data, as that will give me information on what CPU features to target in certain software I have been working on. I have been relying on Steam data, but that may not be very representative of the science and engineering field which my software relates to.
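
For a sense of how mundane most of those fields are, nearly all of it can be read straight out of /proc on any Linux box. A rough illustration in Python, covering just the CPU and RAM items (this is not Canonical's actual tool, only the same categories of data):

    # hwinfo.py - sketch of reading the same sort of anonymous hardware facts
    # (CPU model and RAM size) locally. Illustration only, not Canonical's tool.
    def read_lines(path):
        with open(path) as f:
            return f.read().splitlines()

    def cpu_model():
        # /proc/cpuinfo has a "model name" line per core on x86.
        for line in read_lines("/proc/cpuinfo"):
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
        return "unknown"

    def ram_mib():
        # MemTotal in /proc/meminfo is reported in kB.
        for line in read_lines("/proc/meminfo"):
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024
        return 0

    print("CPU:", cpu_model())
    print("RAM:", ram_mib(), "MiB")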

thames

@doublelayer - They'll use the data to decide what ought to be the defaults for the next release. They will be making decisions based on actual data rather than someone's wild guesses. A major problem has been that developers often assume that the sort of hardware they have on their desks is typical of what everyone else has.

In the past they've had to make decisions on things such as "should the default install disk be CD sized so that it will work with PCs which have CD drives but not DVD drives, or should it be DVD sized so that the user is less dependent on having network access at the time of installation to install stuff that wouldn't fit on the CD?".

They've also had to worry about things like graphics support, what CPU optimisations to compile in as default (some packages have optional libraries for older CPUs), etc.

Apple know exactly what hardware they ship. Microsoft can simply assume that the non-Apple PC market is the same as the Windows market. Linux distros can't make these assumptions so they either just pull numbers out of the air, use opt-in surveys which are usually wildly unrepresentative of the user base, or do something like this.

Before this they had a detailed opt-in hardware data survey which so few people bothered with that it was pretty much useless. The new one collects far less information, but does so from a sample which will likely be representative of the overall user base.

I got 257 problems, and they're all open source: Report shines light on Wild West of software

thames

The article seems to be mainly buzzword bingo.

* Unpatched Apache Struts

* Heartbleed

* GDPR

* IoT security

None of these have anything to do with license terms. They are about keeping your systems patched and up to date.

However, the real issue in that case is whether you are talking about vendor support of software you have bought, or whether you are talking about supporting software you have developed in-house (or via a contractor).

In the case of vendor support, the license is irrelevant to this issue. The real issue would be the quality of service provided by that vendor. Whether that vendor is Red Hat or Microsoft, the issue is the same.

In the case of self-support of something you developed yourself (or paid a contractor to develop for you), then you need to handle this aspect yourself.

In the general case of security patches for open source libraries and components, though, if all of it comes from the standard repos of a Linux distro then the distro manages all of this for you. They have security teams and their distro comes with an updating system that manages security patches. They can't make you apply those patches though; that is up to you being willing to do so and having procedures in place which stop the issue from being ignored.
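
As a rough illustration of the sort of procedure I mean, a check like the following could sit in a cron job on a Debian or Ubuntu box and nag about security patches which haven't been applied yet (a sketch using the stock apt command; other distros have their own equivalents, and the parsing here is deliberately rough and ready):

    # pending_security.py - sketch of a nagging check for unapplied security
    # updates on Debian/Ubuntu. Illustration only; output parsing is crude.
    import subprocess

    def pending_security_updates():
        out = subprocess.run(["apt", "list", "--upgradable"],
                             capture_output=True, text=True).stdout
        # Pending updates show up as lines roughly like
        # "openssl/bionic-security 1.1.0g-2ubuntu4.1 amd64 [upgradable from: ...]"
        return [line.split("/")[0] for line in out.splitlines()
                if "-security" in line]

    pkgs = pending_security_updates()
    if pkgs:
        print("Security updates waiting:", ", ".join(pkgs))
    else:
        print("No pending security updates.")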

This though is really just another variation on the vendor support question, with the license being irrelevant except that you now have a variety of competing vendors all supporting very similar systems to choose from.

S/MIME artists: EFAIL email app flaws menace PGP-encrypted chats

thames

Check the List

The authors have a list of email clients they tested where they state which ones had a problem, and which ones didn't.

My email client of choice - Claws Mail - was listed as not vulnerable to either attack.

Claws looks very old style, but it is fast, reliable, and has all the features I want. I have used Claws for years and highly recommend it.

Ubuntu sends crypto-mining apps out of its store and into a tomb

thames

Re: Got to give this punk some credit.

AZump said: "never saw a Linux distribution swap before Ubuntu, let alone suck up memory like Windows"

I'm more than a little sceptical about that claim. I'm writing this on an Ubuntu 16.04 desktop that has been doing software development and web browsing all day long. Amount of swap being used: zero. That is typical for a system that I use on a daily basis, and I see the amount of swap in use regularly, as it is displayed incidentally by certain tests I run as part of software development.

About the only thing that pushes it into using swap is heavy memory usage from applications that are allocating a large proportion of available memory (e.g. very large arrays in the software I am working on). And that is exactly what happens in every other modern operating system since that is why swap exists in the first place.
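
If anyone wants to check this on their own machine, swap usage can be read straight out of /proc/meminfo; a quick Python sketch:

    # swapcheck.py - quick sketch of reading current swap usage from /proc/meminfo.
    def meminfo_kib(key):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(key + ":"):
                    return int(line.split()[1])   # values are reported in kB
        return 0

    total = meminfo_kib("SwapTotal")
    free = meminfo_kib("SwapFree")
    print("Swap in use: %d MiB of %d MiB" % ((total - free) // 1024, total // 1024))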

If you want to make comments like that, I would suggest doing it on a non-IT related web site where you are less likely to run into people who actually use this stuff on a daily basis.

Prez Donald Trump to save manufacturing jobs … in China, at ZTE

thames
Boffin

ZTE is also a major customer of certain US chip manufacturers, particularly in ZTE's networking gear. For example Acacia gets 30 percent of their revenue from ZTE. Acacia's share price went into free-fall when the news came out. The same is true for a bunch of other American suppliers.

ZTE can source many components from other places, but will have difficulties doing so with some.

However, this situation is a long-term problem for US companies who supply anybody outside the US. While their name might not be on the box, a lot of the value in what is nominally Chinese kit actually comes from the US, South Korea, and Japan. The Chinese assemble it and put a "made in China" label on it, but much of the content is made elsewhere.

The Chinese government's current economic plan is to design and build more of this high-tech chippery in China. If the US is seen as too risky a supplier, that will only accelerate this trend in China and the rest of the world, to the detriment of US business and the US economy.

I should note that many European defence firms go to great lengths to avoid American suppliers because of the risk inherent in buying from the US. Look for "ITAR-free" suppliers as an example of this.

UK.gov expects auto auto software updates won't involve users

thames
Stop

It doesn't take much imagination to see how this could go horribly wrong.

Now waiting for a nation state to infiltrate the over the air update system and deliver a patch which bricks every vehicle in the country simultaneously, causing transport, the economy, and society in general to collapse with no practical means of recovery.

Meanwhile the government will defend their plans on the grounds that they just make policy and law, but it's someone else's role to be held responsible for the consequences of it when the government's plans invariably go wrong.

Pentagon in uproar: 'China's lasers' make US pilots shake in Djibouti

thames

Laser cannon and sonic death rays.

As the story notes, loads of these incidents happen in the US all the time. They are caused by scrotes with laser pointers. I don't see why Djibouti would be any different, and I suspect that laser pointer imports see a lot less regulation there in terms of power and frequency.

This smacks of the American story about the Cuban sonic death ray supposedly being directed at their embassy personnel in Havana. That would be the Cuban sonic death ray that no other country finds credible. Canada has investigated it and come to the conclusion that the sonic weapons theory isn't plausible. The Americans none the less persist in blaming Cuba.

If you're a Fedora fanboi, this latest release might break your heart a little

thames

I have an AMD APU in the PC I am typing this on (CPU and GPU in one chip package). Before that I had always used NVidia graphics cards.

For my next PC I would definitely choose an AMD APU again. I have had zero problems with it in several years of use. It's fast, glitch-free, and reliable, as are the open-source drivers used with it by default (I'm using Ubuntu).

In contrast I always had some problems with NVidia graphics cards used with multiple Linux distros, especially when using the proprietary drivers.

Considering the AMD APU comes with CPU and GPU in the same chip package for considerably less money than I would have paid for a comparable CPU plus separate graphics card, it is pretty difficult to justify anything else for typical desktop applications.

I don't play video games so I can't speak to that field of use. I use mine for software development, full screen video playback, and web browsing. I have no complaints whatsoever about AMD APUs in my applications.

Leave it to Beaver: Unity is long gone and you're on your GNOME

thames

Re: New Linux poweruser here ...

I started trying to write a short summary of all the different issues, but realised there's really no way to cover even a fraction of them.

The short answer is that the basic idea is good, as it is more or less a clone of the init systems used by Solaris and Apple.

However, the implementation was sadly lacking, mainly because of what the Systemd developers were like. The Systemd developers didn't know how much they didn't know about all the obscure edge cases which exist in server applications, and wouldn't listen to the people who did know. When they made mistakes, they blamed other projects for having "bad" software, because well, Systemd is perfect so obviously the problem couldn't be there.

They also insisted that everyone else had to rewrite their own software to work "properly" with Systemd (mainly to do with network socket initiation on start up). The fact that this then made server applications incompatible with BSD and others without a lot of if-defs didn't go over well with the maintainers who were affected or with BSD people (the Systemd developers had no interest in working with the latter on these issues).
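
To give a concrete idea of what that rewrite involves: a socket-activated daemon has to notice that systemd already created its listening socket and handed it over as file descriptor 3 (advertised through the LISTEN_FDS and LISTEN_PID environment variables), and fall back to creating its own socket when run under anything else. A minimal Python sketch of that conditional logic (illustrative only, not taken from any real project):

    # activation_demo.py - sketch of the dual code path socket activation forces
    # on a daemon. Hypothetical example, not from any particular project.
    import os
    import socket

    SD_LISTEN_FDS_START = 3   # first fd systemd passes to an activated service

    def get_listen_socket(port=8080):
        # systemd path: the socket already exists, we just wrap fd 3.
        if (os.environ.get("LISTEN_PID") == str(os.getpid())
                and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
            return socket.fromfd(SD_LISTEN_FDS_START,
                                 socket.AF_INET, socket.SOCK_STREAM)
        # Traditional path (sysvinit, BSD rc scripts, etc.): make our own socket.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", port))
        sock.listen(5)
        return sock

    srv = get_listen_socket()
    conn, addr = srv.accept()
    conn.sendall(b"hello\n")
    conn.close()

Multiply that sort of dual code path by every daemon in a distro and it is easy to see why maintainers who also ship on the BSDs weren't thrilled.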

Debian had to ditch their project for a BSD based Debian distro version because they didn't have the resources to support two init systems (and all the resulting Debian specific init scripts) and the Systemd developers as mentioned above had no interest in working with the BSD people on this.

And since we are talking about Ubuntu in this story, I should also mention that the Systemd developers screamed much abuse at Ubuntu for not volunteering to be the guinea pig for the first commercial distro release of Systemd (no other commercial distro was using it at the time either). Ubuntu was bad, bad, bad, they insisted. The fact that Red Hat wasn't shipping it either at the time seemed to go right over the heads of the Systemd developers, the leaders of whom just happened to be Red Hat employees.

As for why Systemd got adopted by most distros, the answer is simple: it was backed by Red Hat, and they have enough muscle in the Linux market to push through things they want. The same is true for Gnome 3, by the way.

If you are using a desktop distro that uses Systemd, or you are using bog standard server applications (e.g. LAMP, mail, SSH, standard database, etc.) then all of this probably doesn't make much difference. Your distro will have figured out the problems and fixed them. The distro that I'm using on my desktop to write this adopted Systemd a few years ago, and I didn't really notice any difference other than boot up taking longer (Systemd has the slowest boot times of any init system that I've measured).

If you are administering a complex server system, especially if you are using proprietary software that isn't packaged properly, then you have to deal with all the Systemd issues yourself instead of just hacking on an init script or installing a third party logging system. A lot of the complaints about it come from people who have to deal with this aspect of it.

thames

Re: Upgrade, but not right now?

@Notas Badoff: This is not a new policy, they did this with the 14.04 to 16.04 upgrade as well. Existing LTS users don't get upgraded until the first point release comes out (18.04.1). The point releases bundle up accumulated security and bug fixes so that new installations don't have to download them all again.

Normally by that time bug and security fixes related to a new release seeing first widespread use should be down to a trickle. This in turn means that LTS users will see fewer update notifications. If you are an LTS user, you probably care more about not having as many updates than you do about having to wait a few months before getting the next LTS. Non-LTS users on the other hand probably do want the latest stuff ASAP.

When the release does go out to existing LTS users, it won't go out to all of them at once. Instead it will be trickled out to smaller numbers of users at a time over the course of a week or so. Thus even after the LTS upgrade cycle begins, some of those users will be waiting for a while.

If you are an LTS user but really can't wait, then you can force an upgrade now if you know what you are doing (there is a package you need to install which automates the Debian command line process to make it easier).