* Posts by thames

1125 publicly visible posts • joined 4 Sep 2014

Leave it to Beaver: Unity is long gone and you're on your GNOME

thames

Re: Upgrade, but not right now?

@Notas Badoff: This is not a new policy, they did this with the 14.04 to 16.04 upgrade as well. Existing LTS users don't get upgraded until the first point release comes out (18.04.1). The point releases bundle up accumulated security and bug fixes so that new installations don't have to download them all again.

Normally by that time bug and security fixes related to a new release seeing first widespread use should be down to a trickle. This in turn means that LTS users will see fewer update notifications. If you are an LTS user, you probably care more about not having as many updates than you do about having to wait a few months before getting the next LTS. Non-LTS users on the other hand probably do want the latest stuff ASAP.

When the release does go out to existing LTS users, it won't go out to all of them at once. Instead, it will be trickled out to smaller numbers of users at a time over the course of a week or so. Thus even after the LTS upgrade cycle begins, some of those users will be waiting for a while.

If you are an LTS user but really can't wait, then you can force an upgrade now if you know what you are doing (there is a package you need to install which automates the Debian command line process to make it easier).
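For anyone who does want to jump early, the process looks something like this. This is a hedged sketch, not official instructions: it works on a scratch copy of the config file so it is safe to run anywhere, and the commands that would actually perform the upgrade are shown in comments (the real file is /etc/update-manager/release-upgrades).

```shell
#!/bin/sh
# Sketch of forcing the 16.04 -> 18.04 upgrade before 18.04.1 arrives.
# Uses a scratch copy of the config; the real file is
# /etc/update-manager/release-upgrades.
cfg=/tmp/release-upgrades.demo
printf '[DEFAULT]\nPrompt=normal\n' > "$cfg"

# 1. Install the helper package that provides do-release-upgrade:
#      sudo apt install update-manager-core
# 2. Make sure only LTS-to-LTS upgrades are offered:
sed -i 's/^Prompt=.*/Prompt=lts/' "$cfg"
grep '^Prompt=' "$cfg"    # -> Prompt=lts
# 3. Force the upgrade now; -d is needed because 18.04 is not offered
#    to existing LTS installs until the 18.04.1 point release:
#      sudo do-release-upgrade -d
```

The `Prompt=lts` line is what keeps an LTS machine from being offered the non-LTS interim releases.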

thames

Re: Ooops they violated GDPR

Canonical are a UK company. I suspect they have heard of GDPR and know what data is personally identifiable and what isn't amongst the data they actually intend to store.

thames

Re: On the face of it

@I ain't Spartacus - "It's funny as a non-Penguiny person. I've not read as much about Linux of late, so was amused to see a review talking about people being sad to see the back of Unity."

The sort of person who is motivated enough to write a comment on an IT oriented web forum is generally not the typical user. There are loads of Unity users out there who are just using their PCs to get work done. Fans of the less commonly used desktops or distros seem to feel they need to slag off the major ones rather than promote what is actually good about their own. KDE versus Gnome flame wars for example go back to near the beginning of modern Linux desktop distros.

I ain't Spartacus said: "So when do I expect the article mourning the loss of systemd?"

Based on how these things tend to go, I expect we'll see that in about 10 years.

thames

Re: On the face of it

@K - Even the version numbers on your middle two examples are indistinguishable.

The reason that Ubuntu bailed out on Gnome 3 in the early days is that it had a very unstable UI that was not ready for prime time and the Gnome developers were no longer supporting Gnome 2. Quite a few people in those days thought that the Gnome project had committed collective suicide and would soon be an ex-parrot.

From that came Unity. It addressed the major usability problems with Gnome 2 (dock moved to the left and reduced use of vertical window space to suit modern display proportions, bigger dock icons, integration of the dock with workspaces, etc.) while keeping the keyboard shortcuts and underlying assumptions as similar to Gnome 2 as possible.

After that the user facing stuff remained more or less the same, with changes mostly just polishing what they had. That did, however, include a good deal of major work on the underlying bits and pieces to keep up with major changes in common PC hardware and driver support. The biggest example is the work they did on desktop compositing when the third parties Ubuntu had been depending on dropped support for older hardware.

And all that suited most Ubuntu users quite nicely. The Unity desktop worked and was based on sound ideas so why change it? Ubuntu started out as just a much more polished and more up to date version of Debian Gnome 2 and was very popular as that.

Several other currently popular desktops got their start in a similar way. Now, however, that the Gnome 3 developers have cut back on the crack smoking, stopped changing how their desktop works every other release, and quite frankly copied some of the better parts of Unity, the reasons for continuing with Unity have to a large extent gone away, and Ubuntu can go back to its roots as a better version of Debian, with commercial support available.

Some of the major criticisms I have of Gnome 3 at this time are that the keyboard shortcut support is not as good as Unity's (this is my biggest complaint), the dock is not as well integrated with workspaces or application indicators, and the workspace concepts are non-traditional (such as a variable number of workspaces and only linear navigation between them). I made very little use of Unity's HUD, so its loss doesn't bother me much.

Most of the complaints about "Ubuntu" on forums such as this one seem to come from people who are using third party derivatives with non-Unity desktops (I'll avoid mentioning any in particular to avoid flame wars). These non-Unity desktops are put out by community members rather than Canonical, and simply don't have the resources to put the same degree of polish into them that full time distro maintainers do. I've tried some of them and salute the volunteers who work on them for their effort, but I'm more interested in using my PC than in experimenting with desktops. As a result I will be using Gnome 3 after the upgrade notification comes in.

Existing users of Ubuntu will get the upgrade notification in July when Ubuntu 18.04.1 comes out rather than on release day. This is the same policy as was used with 16.04.

thames

They had one non-LTS version, 17.10, which used Wayland. Other than that, every official mainstream version of Ubuntu right from the beginning used X.

Russians poised to fire intercontinental ballistic missile... into space with Sentinel-3 sat on board

thames

What goes up, must come down (in pieces).

And meanwhile in Canada today's news headline is that the %@!#$%# Europeans are dropping another one of their left over missiles on us again, left over toxic fuel and all. "It is a concern for us," said Savikataaq, the Nunavut Territory's minister of the environment. "No country wants to be a dumping ground for another country's spent rockets."

US sanctions on Turkey for Russia purchases could ground Brit F-35s

thames

Re: Garbage in, garbage out

The main value of Turkey to NATO these days is its position in the Middle East. American bases in Turkey are ideally situated to strike east into Iran or south into Iraq or Syria and Lebanon, and generally complement the US bases in Bahrain and Qatar.

The US bases in Turkey saw extensive use in the first and second Iraq wars, and in the war against ISIS in Iraq. Their key role in providing bases for aerial refuelling means that even aircraft based elsewhere depend upon them.

So long as the Middle East has oil, Turkey will be important to NATO.

thames

Re: "nd what's the problem with an ally (*) buying a potential adversary's kit?"

The S-400 system is not a specific missile and radar combination. It is an air defence system with a family of missiles and radars. What the Russians export is not necessarily the most advanced versions of what they used themselves.

As for why the Turks are buying them, they put out an RFP for an air defence system. Part of the requirement for any major Turkish defence contract these days is a degree of technology transfer to Turkish defence firms. The Turks are trying to build up their own defence industry. This by the way is why they are making parts of the F-35 as well as doing the engine overhauls. Turkey makes a major section of the fuselage, landing gear components, parts of the engine, electronics, sensors, and a whole range of other items. They are sole source suppliers of a number of pieces, so every F-35 built today is partly Turkish.

As for missiles, the Americans submitted a bid for the Patriot missile system, while the Russians submitted a bid for the S-400. However, the American bid did not include technology transfer, while the Russian bid did. Hence, the Russians won the contract. Toys were quickly ejected from the Americans' pram - they wanted the contract, but not on terms the Turks were willing to grant it on. The only thing that will satisfy the Americans on this one is for the Turks to buy Patriot missiles on terms the Americans dictate.

As for stealth fighters in general, the Turks are designing their own, with British and other foreign help. The UK has its own sovereign stealth aircraft technology which is as good as anything the US has, which is why the UK was invited to be the only Tier One foreign supplier for the F-35 (which caused the UK's own stealth fighter project to be cancelled). BAE is supplying extensive unspecified technology, and Rolls-Royce are supplying the engine technology licenses. The UK involvement has support from the highest political levels in the UK government. The Turkish fighter is scheduled to replace their F-16s and will supposedly first fly in 2023. It will do the air-to-air fighting while their F-35 fleet will act as bombers/air support.

Kaspersky Lab loses the privilege of giving Twitter ad money

thames

Re: @Martin

On its own it might be remotely plausible as a "security" action. In the wider context though, it fits in as American trade protectionism. Canadian steel and aluminum companies have also been labelled "national security risks" by the Americans. Bombardier is "bad" until they promise to assemble planes in the US, and then the trade complaint gets magically thrown out at the next hurdle.

I think the head of Huawei said something along the lines that being blocked from the US market actually feels quite relaxing, now that they know they don't have to worry about keeping the Americans happy any more.

Aw, all grown up: Mozilla moves WebAssembly into sparsely furnished Studio apartment

thames

Re: Hypervisor?

@Charles 9 said: "Because Google's strongest platform, Android, runs on ARM, as does Apple's iOS the #2 mobile platform."

Google's response to that was PNaCl, which was supposed to be a portable form of NaCl based on LLVM intermediate code. That wasn't any more successful because LLVM intermediate code isn't really suited to that.

By that point everyone had decided that ASM.js was a much better solution from a technical and practical perspective so Google threw in their (P)NaCl cards.

thames

Re: Hypervisor?

WebAssembly isn't a binary executable. It's a language which the browser runs through its normal JIT compiler before executing. It is fundamentally no different from how all major web browsers currently run Javascript, except that the browser doesn't have to do as much parsing before being able to use it.

To put it in simple terms, it's a development of Mozilla's ASM.js. ASM.js is a subset of Javascript which browsers can more easily analyse in order to execute it efficiently. It does this largely by jettisoning the dynamic features of Javascript in favour of using only those features which can be subjected to static analysis (everything can be resolved at compile time, nothing about how to run it has to be figured out at run time). As a result, C and C++ programs can be compiled to this subset of Javascript, which is then sent to the web browser to go through its normal parse, interpret, optimise, JIT compile phases.

WebAssembly is simply a more low level representation of the same sort of compiler output code that is ASM.js. It's not native executable code, but it does cut out a number of steps in the parse and JIT compile process. That means the web browser has to do less work before running the resulting code. Every web browser already had something analogous to WebAssembly in its Javascript compiler subsystem. However each browser had a different one with a different representation which wasn't compatible with how every other browser did it. WebAssembly provides a standard intermediate representation which is implemented in a compatible manner by every vendor.

Now instead of sending a browser normal Javascript source code (which may itself be the output of a compiler), they send WebAssembly and can cut out some of the intermediate steps. How the web browser handles it from there is up to each vendor. It could be interpreted, it could be JIT compiled immediately, or whatever. This should be chip architecture independent by the way.

Sending native x86 binaries over the web to execute in a sandbox on the other hand is what Google Chrome did with NaCl. That went over with developers like a lead balloon, and Google pulled the life support on it last year in favour of joining Mozilla in using WebAssembly.

Application publishing gets the WebAssembly treatment

thames

El Reg said: "The technology is a W3C standard, emerged from Apple and promises a secure sandbox running inside a browser."

That will come as a surprise to the people who actually developed WebAssembly. Here's one of the original announcements: https://brendaneich.com/2015/06/from-asm-js-to-webassembly/

Who: A W3C Community Group, the WebAssembly CG, open to all. As you can see from the github logs, WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks. I’m sorry the work was done via a private github account at first, but that was a temporary measure to help the several big companies reach consensus and buy into the long-term cooperative game that must be played to pull this off.

So far as I know, WebAssembly actually came out of primarily Mozilla's success with ASM.js, plus some of Google's work with the less successful PNaCl.

Linux Beep bug joke backfires as branded fix falls short

thames

Re: Of course it's not an important security issue

@Anonymous Coward said: "It's not on Windows."

Oh look, someone trolling anonymously. What a surprise.

Well guess what, it's not normally installed on Linux either, as you would know if you had actually read the story. It's a third party program that an administrator can install if he or she wants to, but very, very few actually do.

thames

Almost nobody even has beep installed.

According to Debian, only 1.88% of users have beep installed. Only 0.31% use it regularly. Apparently "beep" doesn't even work on most hardware. I suspect that the few people who do have it installed used it in a bash script somewhere years ago and forgot about it. I checked my PC (Ubuntu), and it is not installed.

The best solution is probably to check whether you have it installed, and if you are one of the few people who do, to simply uninstall it. If you are worried about some obscure script failing because it got an error when it tried to call beep, then perhaps symlink it to some fairly innocuous "do nothing" command, or possibly even to a script which will write to a log somewhere to tell you when it was called.
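A sketch of that "do nothing but log" idea, using made-up paths so it is safe to run anywhere (the real binary would live at /usr/bin/beep, and removal would go through apt):

```shell
#!/bin/sh
# Sketch: a harmless stand-in for beep that logs each call, so any
# forgotten script that still calls it keeps working. Paths here are
# illustrative, not the real install locations.

# First check whether the package is even installed, and remove it:
#   dpkg -l beep
#   sudo apt-get remove beep

# Then drop a logging replacement where the old scripts expect it:
cat > /tmp/beep <<'EOF'
#!/bin/sh
echo "beep called at $(date)" >> /tmp/beep-calls.log
EOF
chmod +x /tmp/beep

/tmp/beep    # simulate an old script calling beep
```

Afterwards the log tells you which forgotten script, if any, was still calling it.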

If I need to have my speakers on my desktop make any noise I use "espeak", which is a text to speech utility. There are other noise making utilities as well which unlike beep actually work on modern hardware.

Here's the list of Chinese kit facing extra US import tariffs: Hard disk drives, optic fiber, PCB making equipment, etc

thames

The tariffs will apply to goods originating in China, otherwise there would be plenty of places in the world where they could be transshipped through and relabelled.

Plenty of US imports already arrive through Canadian ports, as many US ports are generally more expensive. Of course American port operators are crying this is unfair and want tariffs applied on port services.

Trump has opened a new eastern front in his trade wars before finishing the one he started with Canada and Mexico. Washington is now desperately trying to make trade peace with those two, since it turns out that China isn't going to surrender. Boeing's sales in China may turn out to be the Stalingrad in all this.

thames
FAIL

Even American military arms suppliers can't compete in the US market

Here are my favourite items from the list. Apparently, Chinese torpedo makers are selling their wares in the US market at unfairly low prices. American howitzer makers and makers of aircraft carrier catapults and arrestor gear are facing similar problems. If only the Pentagon didn't insist on buying the lowest priced armaments sold at Walmart instead of buying from American suppliers.

  • Artillery weapons (for example, guns, howitzers, and mortars)
  • Rocket launchers; flame-throwers; grenade launchers; torpedo tubes and similar projectors
  • Rifles, military
  • Shotguns, military
  • Military weapons, nesoi
  • Bombs, grenades, torpedoes, mines, missiles and similar munitions of war and pts thereof; other ammunition projectiles & pts. thereof
  • Aircraft launching gear and parts thereof; deck-arrestors or similar gear and parts thereof
  • Air combat ground flying simulators and parts thereof

More seriously, I have scanned over the list and a lot of items look like they are there to pad out the length of the list. I suspect that a great many of the more mundane items are not made in the US at all and there is no US industry to protect.

Where the US may run into problems is with a lot of obscure components that go into products that are made in the US, and which will raise the cost of producing those items enough that the company simply closes up shop in the US and moves production to Mexico to get around the tariffs.

Political ad campaign biz AggregateIQ exposes tools, DB logins online

thames
Black Helicopters

Let's all "hack" each others elections.

So everybody is "hacking" everybody else's election. A British company "hacked" the US election. A Canadian company "hacked" the UK referendum. Not in the story was news reported a couple of years ago that a US organisation "hacked" the Canadian election before all this. It sounds like the Russians are a bit late to the game.

Just when you thought it was safe to go ahead with microservices... along comes serverless

thames

Re: Is it just me

@regbadgerer - I haven't tried it yet, but from what I understand it's all about making the billing more granular for highly variable loads. Instead of getting billed for provisioning a micro-service which may sit about not getting used much, you get billed on actual usage. You do have to structure your application to work effectively with that however.

Whether that makes financial sense is going to depend a lot on your individual use case. It isn't for everyone, but it may reduce costs for some people who have highly variable loads. If you run high loads on a fairly consistent basis, then it's probably not a good fit for you.

The main problem with it has I think been the tendency to create new terminology in an attempt to differentiate it from micro-services. The basic concept though is to make your application better fit something that can be billed on a more granular basis. It probably has a place, but as another option rather than the one and only way of doing things.

YouTube banned many gun vids, so some moved to smut site

thames

@Dan 55 said: "So what's stopping them moving to Vimeo or Dailymotion?"

Several of the channels that I follow were in the process of moving copies of their content to Vidme, and then Vidme shut down. That was a short time ago, so I'm not sure if they are gearing up to look for another destination. Most of the people running the channels that I watch seem to know one another, and the move to Vidme was started by one individual with the others following.

I don't follow many shooting channels, as my own interests are more along the lines of history. People doing channels related to history and (non-pop) culture have been hit just as hard by recent Youtube policies as people running shooting channels, if not harder.

Some of the problem I suspect is that Youtube was trying to decide which videos to ban or demonetise by using AI. I've heard things from various podcasts interviewing people who have worked on AI systems for that purpose for Youtube. However, the AI doesn't seem to work in any sort of reasonable, logical, or consistent manner and its randomness is driving the content creators mad.

thames

Karl Kasarda I think is some sort of computer security consultant in his day job and very big on the "digital rights" movement in general. Of the two he's the one who is always looking for alternative video distribution networks as he doesn't like the idea of Youtube being able to shut down anyone they take a dislike to. He is also a bit more social media savvy, and the Pornhub thing sounds like more of a publicity stunt that he dreamed up rather than a serious effort at diversification. Since it got his name into the news, it sounds like it has been a pretty successful publicity stunt which also happens to line up with his views on censorship in general.

McCollum is a lot more laid back and lets Kasarda take the lead on things like this. On the other hand, he has his own Forgotten Weapons web site (rare, historic, and antique firearms) which is his main effort. His site has been around for a long time, has its own forums, and he sources his own ads for it, so he isn't completely dependent upon Youtube to sustain his "brand". He does use Youtube as a video host and they do bring a lot of new viewers to him, but he wouldn't have to start from scratch if Youtube kicked him off their platform.

thames

The big point which has so many Youtube content creators of all types up in arms is Youtube's opaqueness and seemingly random application of their "rules". Creators who want to invest time and money into a high quality production will find themselves "demonetised" for no apparent reason. They will complain to Youtube, who will then reverse the demonetisation, but by the time it goes through Youtube's bureaucracy most of the potential views will have gone by, turning the video into a loss maker for the producer. Nobody at Youtube can give them a reason why they were demonetised or point to a policy which they may have "violated", or even seem to care about any of it. And I'm talking about content creators who have hundreds of thousands of subscribers and received awards from Youtube, not some guy with a few dozen views.

The end result is that content creators have become risk averse in terms of how much they are willing to invest in production costs, and the market is tilted in favour of creators who put little effort into quality. People who simply babble into a microphone about video games have much less at risk than people who have to purchase material or pay for travel to do a historical documentary.

Almost all of the Youtube channels that I follow now depend upon Patreon to make ends meet, as Youtube ad revenues are simply too high risk. None of them make a living from Youtube, but all have to try to at least cover their expenses somehow as they aren't wealthy enough to fund their productions out of their own pockets.

With respect to the latest changes, the firearms related channels can't get any sort of answer as to what "manufacture" of ammunition means, or whether there are any clear guidelines for reviewers. Does this cover normal reloading using commercial components, or are they talking about improvised ammunition?

Anyone doing serious target shooting will hand load their own ammunition, as the commercial grade stuff simply isn't good enough for competition use. Anyone firing antique or otherwise old or rare firearms will also usually have to reload their own ammunition, as obsolete calibres are simply not available or the stuff that is available may be unsafe to use in older firearms. So is what they are doing "manufacture" of ammunition according to Youtube? Nobody knows, and there is apparently no way to get any sort of answer out of Youtube.

The majority of the content creators on the channels that I watch regularly have all said that they are actively looking for alternatives to Youtube and only stay there because that is where they can get new viewers they can attract to things such as their Patreon channel. Content creators are looking to decamp en masse from Youtube as soon as a viable alternative arises. The market is ripe for a competitor; the main barrier to entry being the ability to line up advertisers.

Developers dread Visual Basic 6, IBM Db2, SharePoint - survey

thames

"the majority appear to be straight white men"

That sounds pretty representative of software developers. Normally, a poll which is intended to discover what products software developers are using ought to be polling a representative sample of software developers, not a representative sample of, for example, trendy social media advertising consultants or PR flacks.

If there ought to be any concern about how representative the sample is, the concern ought to be with respect to how well the sort of person who answers Stackoverflow surveys is representative of the sort of experienced and knowledgeable software developers whose opinion on matters of what software is good or bad is worth listening to.

Woe Canada: Rather than rise from the ashes, IBM-built C$1bn Phoenix payroll system is going down in flames

thames

The History Goes Back Further Than That

El Reg said: "Launched in 2016, Phoenix was an IBM implementation of the Oracle PeopleSoft platform"

It was actually started well before then, in 2009, under the previous government. The contract was awarded to IBM in 2011. It was part of an overhaul of IT systems which were consolidated as a "cost saving" measure. It went live just as the present government came to power after the election.

None of the other projects which were initiated as part of this cost saving project were successful either. It became obvious soon after it went live that the project was in trouble. The opposition are of course blaming the government for not having pulled the plug on it immediately after being elected.

The Auditor General's investigation didn't really address whether IBM did a good job or not. Rather it focused on whether the government's response to things going wrong was adequate.

From what I can see, the major problem was that the project was rammed through, ready or not, prior to the election in order to claim the cost savings for electoral campaign purposes, and there was no "plan B" if "plan A" didn't come off perfectly.

It turned out though that payroll for such a broad range of employees was much more complicated than had been envisioned back when the project was started nearly a decade ago.

The main failing of the present government has been in persisting in trying to salvage something from the mess they inherited instead of pulling the plug on it earlier. Their response to that however is that there was no fall back position available. The Auditor General however noted that the government of Queensland had pulled the plug much sooner when they faced a similar problem.

The project which was supposed to save $70 million per year has turned into a persistent financial black hole which will continue to cost money for years to come.

Microsoft to make Ubuntu a first-class guest under Hyper-V

thames

Re: Microsoft's idea of system administration...

Anonymous Coward said: "Do sane people run ubuntu as a server though?"

Funny how there's all these anonymous posts on this thread making various claims about other companies' products.

Ubuntu is used very extensively in cloud applications. Microsoft isn't putting so much emphasis on supporting Ubuntu because they've run out of other things to do.

Canonical have always placed a lot of emphasis on server and cloud applications. That is why they have a number of deployment and management tools focused on that area. The desktop version of Ubuntu is just a loss-leader intended to get developers using Ubuntu, with the intention that those developers will also use Ubuntu as their choice of server. People who have connections inside Ubuntu have said a number of times that Canonical's server business has been profitable for some time now. They've recently dropped the phone OS project, and Unity has been scaled back as they currently focus on profitability, supposedly to clean up the balance sheet in order to go for a stock market listing.

Ubuntu is based on Debian, but has the advantage of offering commercial support contracts for those who want them. Debian themselves of course do not, and finding commercial support for it is not as straightforward.

Huawei guns for Apple with Mac-alike Matebook X

thames
Linux

Ubuntu Mate for Matebook X

El Reg said: "Huawei guns for Apple with Mac-alike Matebook X" as well as: "The rest of the cruft is Microsoft's: the usual garbage of discarded kids' toys emptied over your desk."

Obviously the proper OS for the Matebook X is Ubuntu Mate.

https://ubuntu-mate.org/

When clever code kills, who pays and who does the time? A Brit expert explains to El Reg

thames
Boffin

Re: *A* Brit Expert

The whole premise of the theory is bonkers. A machine is not going to be held "liable" for anything. The police are not going to arrest your car and put it in jail.

Who is held accountable for how the software performs will be determined the same way as who is held accountable for how the hardware performs. There are loads of safety critical software systems in operation today, and there have been for decades. There is plenty of established legal precedent for deciding liability. Putting the letters "AI" into the description isn't going to change that.

The company who designed and built the system and sold it to the public are 100% responsible for whatever is in their self driving car (or whatever). They may in turn sue their suppliers to try to recover some of that money, but that's their problem. Individual employees may be held criminally liable, but only if they acted in an obviously negligent manner or tried to cover up problems. The VW diesel scandal is a good analogy in this case, even if it wasn't a safety issue.

There are genuine legal problems to be solved with respect to self driving cars, but these revolve more around defining globally accepted safety standards as well as consumer protection (e.g. who pays for software updates past the warranty period).

The people who have an interest in pushing off liability from themselves are dodgy California start-ups who push out crap that only half-works and are here today and gone tomorrow and don't have the capital or cash flow to back up what they sell. They might try to buy insurance coverage, but the insurers may get a serious case of cold feet when they see their development practices. Uber's in house designed self driving ambitions are going to run into a serious road block from this perspective.

Here's how we made a no-fuss RSS vulture app using trendy Electron

thames
Linux

Liferea

Liferea seems to be pretty much the standard RSS feed reader on Linux. It's fast, configurable, and easy to use. It can also run external scripts to massage malformed feeds or even scrape web pages.
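As an illustration of that external-script hook: a conversion filter is just a program that reads the raw feed on stdin and writes the cleaned feed to stdout. This is a hypothetical sketch (the two particular fixes are invented examples of the kind of massaging such a filter can do), wrapped in a function with a small demo:

```shell
#!/bin/sh
# Illustrative feed "conversion filter": the feed reader pipes the raw
# feed XML to the filter's stdin and reads the cleaned feed from
# stdout. The fixes here (stray UTF-8 BOM, bare ampersand) are
# made-up examples.
fixfeed() {
    sed -e '1s/^\xef\xbb\xbf//' -e 's/ & / \&amp; /g'
}

# Demo: a bare ampersand in a title gets escaped.
printf '<title>Tom & Jerry</title>\n' | fixfeed
# -> <title>Tom &amp; Jerry</title>
```

A real filter for a specific broken feed would obviously need its own sed or scripting, but the stdin-to-stdout contract is the whole interface.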

I'm not going to criticise Vulture-feeds - hats off to you for actually building something that suits your needs rather than just regurgitating press releases. It certainly gives you insight into Electron that you wouldn't get any other way.

However, this bit really stuck out: "vulture-feeds weighed in at 368.9 MB". Liferea, which does far more, is "594.9 kB on disk" according to Ubuntu Software Centre. That's right, less than 600 kB. Electron is mind bogglingly huge.

Nearly all the sites that I read regularly I monitor via RSS. If a web site doesn't offer an RSS feed, then it may as well not exist so far as I am concerned. I read the articles in the web browser, but RSS is where I find out that the article exists.

I think that much of the source of the problem with "fake news" is that too many people seem to get their news spoon fed to them from Facebook or Twitter instead of getting it directly from reputable news sources followed via RSS. RSS is decentralised, which also keeps any one company from getting a choke hold on the supply of information. That of course is why the companies who do want a stranglehold on the web don't like it.

Australia joins the 'decrypt it or we'll legislate' club

thames
FAIL

decrypt it or we'll legislate

If the companies are going to have to do it one way or another, why not demand the government produce clear legislation with a detailed description of the means they propose, and then publicly poke holes in the logic of the legislation?

The "we want to force you to do it voluntarily" argument only exists because the people pushing the agenda want to have their cake and eat it too. They knowingly want weak security, but they want someone else to act as a whipping boy when ordinary people suffer as a result of it.

Next up - the government will legislate that all automobiles must be powered by perpetual motion engines, with heavy fines on any auto company who fails to produce one by next year. Well, why not?

UK.gov: Psst. Belgium. Buy these Typhoon fighter jets from us, will you?

thames

@wolfetone: "So why are the UK buying F-35's then?"

The UK currently flies two main fighter types: the Typhoon and the Tornado. The Typhoon is optimised for air defence, but also does bombing. The Tornado is optimised for dropping bombs but also does air defence. Generally air defence planes can do ground attack very well in this era of guided weapons and smart bombs, provided they are equipped with the appropriate sensors and electronics. The earliest versions of the Typhoon left out the ground attack kit as an economy measure (since the buyers already had Tornadoes which could do that job anyway), hence the common myth that the Typhoon couldn't drop bombs. Later versions included the kit for both roles by default, as does current production. However, the Tornado is still often used for those jobs because, well, they've got them so they may as well use them and get the wear out of them.

The Typhoon is still a relatively new plane and will form the backbone of the UK's air force for many years to come, but the Tornado is at least a generation older, is getting long in the tooth, and has to be replaced due to increasing age and obsolescence.

The UK is buying the F-35 as a replacement for the Tornado. The 'B' version is being bought so it can also operate from carriers and so be dual purpose. So far the UK has committed to buying 48 in total. That order could conceivably be extended to 138, but that decision awaits future approval by parliament. It is possible that the larger order may include buying some cheaper 'A' versions instead as strictly land based Tornado replacements, but that is up to the government of the day to decide in future. The F-35 is built by the equivalent of a consortium, and the UK has the second largest share in it (I think around 15%). The UK builds various parts which go into it, Italy and Turkey will have final assembly lines, etc. The US of course as the biggest customer has the lion's share of the workshare.

The Typhoon/Eurofighter is also built by a consortium. The parts which go into the plane are built in various countries, but there are four final assembly lines, one each in the UK, Germany, Italy, and Spain. Each of the four takes turns in leading the sales effort to countries outside of the consortium and getting the largest share of the resulting benefits from it. It appears likely that the UK is the lead country for sales to Belgium. Going by attendance at recent meetings, early indications are that the UK is also the lead country for sales to Canada (who are also shopping for new planes).

Ubuntu wants to slurp PCs' vital statistics – even location – with new desktop installs

thames

Re: Network connectivity or not

I think it's about whether there was network connectivity at install time or whether the network connection came later. At the moment when you do an install they ask you if you want to download updates during the install or do it later.

I suspect they want to simplify the install procedure still further (it's already the simplest to install of any modern OS of any that I've tried) and are looking for what defaults to set and what to push off into an optional "advanced" menu.

At present they have an optional hardware configuration collection program which you can go through after installation to send information about your PC to them. I've used it a number of times, but I think it has too many questions and it inherently biases their data towards the sort of user who cares about what is in their PC. I think collecting less and more basic information, but from a wider selection of users, will give better results.

They've said they will publish this information in aggregated form on their web site. I'm in particular looking forward to seeing what proportion of people are using what sorts of CPUs and GPUs. I've been writing software recently which uses SIMD instructions, but it's very difficult to get a good idea of what SIMD level to target since the publicly available data sets are for games users, who are atypical so far as my software is concerned.

thames

Would that be the same Mint that pings Ubuntus servers with your current IP address daily to ask for security updates?

thames

Re: Location, Location, Location.

The location asked at install time is just country and time zone. It uses that to display the local time and to know which local time zone rules to apply, plus the installer uses it to guess what to suggest for language, keyboard, currency, etc. you probably want.

Fedora first started collecting this type of information at least 10 years ago, and RHEL, CentOS, OpenSuse, and Gentoo copied it from them. I think it got retired a few years ago though because of lack of maintainers for the code and the server.

Debian created a system which tracked which packages you installed and which ones you used how often and a lot of Debian derivatives use it also.

Any non-server distro that wants to know your current IP address already has it anyway, since your PC constantly pings their server for security updates. That is true these days for any PC operating system.

You're decorating it wrong: Apple HomePod gives wood ring of death

thames
Joke

Lace Doily

Apple customers obviously need to buy themselves some lace doilies to set their collection of Apple things on. I'm sure their grandmothers could give them some other helpful decorating tips to complement the rest of their post-modern furnishings as well.

Joke icon required, because Apple customers are not exactly noted for having a sense of humour when their latest eye-wateringly expensive fashion purchase goes wrong.

Getty load of this: Google to kill off 'View image' button in search

thames

Re: Bad bargaining

I would rather see photos under Creative Commons (and other similar) licences at the top of results, with anything else located further down in a separate section. A lot of what amounts to spam from these companies appears in search results when you are looking for a clear photo (e.g. no watermark plastered across it) of something for non-commercial purposes (just to look at, for example). I want to see stock photos in my image search results about as much as I want to see "shopping comparison sites" in my text search results (i.e., not at all).

Oh, and as a note to journalists and blog writers, stop putting pointless stock photos at the top of your stories. It's a waste of bandwidth and it's a waste of my time and effort as it means the first thing I have to do is scroll down past an utterly pointless and irrelevant stock photo before I can start reading. If the photo is directly relevant to the story, by all means include it. A pointless picture of a model holding something irrelevant though provides no value to the reader.

If you want to really see the height of hypocrisy though, just have a look at almost all of the news stories condemning crypto currency miners for their alleged vast energy consumption. Almost all of those very same news stories will include very large format pointless stock photos which have no direct relevance to the story, but which consume vast amounts of energy in sending, transmitting, receiving, and displaying them. Pot meet kettle.

Nork hackers exploit Flash bug to pwn South Koreans. And Adobe will deal with it next week

thames

Does it even work on Linux?

El Reg said: "The Photoshop maker said that – so far – only Windows machines have been attacked, although Windows, Macintosh, Linux, and Chrome OS systems are potentially vulnerable."

I'm using Ubuntu 16.04. I just had a look in the user reviews in the Ubuntu Software Centre (software installation manager) and most of them are saying it doesn't work. I looked at quite a few reviews, but found only two who said it worked (the most recent from a year and a half ago), but they didn't have anything positive to say about it. I think the ones who did have it were using Ubuntu 14.04, so I have serious doubts that many Linux users these days have Flash installed.

I haven't had Flash installed in many years, and it is very rare that I see any web sites that make any use of it at all. For some years now the main laggards still using it tended to be ads, and quite frankly I didn't miss them at all.

If you've got it installed, you can almost certainly just delete it (if you can) without missing anything of value. For the very, very, few people who have a legitimate application for it, you're going to have to find another solution before too long anyway when Adobe finally pulls the plug on it and all the browser vendors blacklist it from being installed at all.

Ubuntu reverting to Xorg in Bionic Beaver

thames

Re: Waylaid

I use the open source driver in Ubuntu with my AMD APU with Radeon graphics, and it is much preferable to the proprietary drivers in terms of stability. I've never had much luck with the proprietary drivers from either NVidia or AMD, as I have a very low tolerance for crashes.

On the other hand, I don't play games, so I can't say much about performance. The desktop is fast with no lags or visible graphics defects, and that is good enough for me. I would rather have the greater reliability of the open source drivers over the theoretical speed improvements of the proprietary ones. AMD's current proprietary drivers are based on the open source driver anyway, with just some features added.

As for Wayland, it's like nuclear fusion: it's been a year away from being "ready" for a great many years now. I'll believe it when I see it.

Trans-Pacific Partnership returns, without Trump but more 'comprehensive'

thames

El Reg said: "As was the case throughout negotiations of the first deal, there's no text for the proposed treaty. Just what Australia, Brunei Darussalam, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore and Vietnam signed up for therefore remains obscure."

And not all that was agreed to is in the treaty text. Some of it is in additional agreements which sit outside the treaty but which override it. This is how the final issues were addressed recently.

The hold up in the treaty has been Canada's insistence on having some of the worst bits of it watered down or excluded. With the US gone, that left Canada as the second largest economy in the treaty, and so with additional negotiating power to get those changes made.

The US and certain other parties originally tried to keep Canada out of the negotiations, planning on presenting them with a fait accompli later and telling them to sign it (TPP was intended to replace NAFTA). Canada elbowed their way into the negotiations mainly to try to undermine them from the inside.

Australia seemed to be the main proponent for signing the treaty as is. Much hate was directed from their government towards Canada over delays caused by Canada insisting on changes.

There's a good chance that the worst aspects of the TPP have been de-fanged in the past year and it's now been watered down to a normal trade treaty.

We're cutting F-35 costs, honest, insists jet-builder Lockheed Martin

thames

Re: Sensor fusion?

The F-35 is far from the only fighter to combine sensor data to present the data in an integrated fashion to the pilot. Even the cheapest western fighter on the market, the Saab Gripen, does that in its new version.

"Sensor Fusion" is just part of LM's branding and marketing, along with "5th Generation" (which the F-35 isn't, if you go by the original Pentagon definition of what a 5th generation fighter would be).

PowerShell comes to MacOS and Linux. Oh and Windows too

thames

Re: binary pipelines

@RJG - He trotted out the exact same example a couple of years ago to attempt to show that Powershell was "superior", and I replied with a post showing him that the correct way to do it was with "stat". His reply to that was to waffle.

If he's still using that same crap example now, it isn't because he doesn't know there's a proper way to accomplish the same job. It's much more likely that he spent many hours coming up with the most convoluted way of doing something in bash and isn't willing to part with it. Either that, or it's the example on the talking points script (as opposed to a shell script) from his marketing department and he's been told to use it.

To repeat, he's already been told before that his example is bullshit and was shown how to do it correctly, and he read and replied to that, so he did read the post with the proper method. I would take pretty much everything else he's said on this subject with that in mind.

Oh, and I benchmarked his example and one using "stat", and the "proper" way is dramatically faster than his method as well.
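For anyone wondering what the "proper way" looks like: I don't have the original example to hand, but the usual shape of the argument is scraping columns out of ls output versus just asking stat(1) for the field you want. A quick demo (GNU coreutils syntax; BSD stat uses -f instead of -c, and the file here is made up for illustration):

```shell
#!/bin/sh
# Create a throwaway file so the demo is self-contained.
tmpdir=$(mktemp -d)
printf 'hello' > "$tmpdir/a.txt"

# Fragile: scraping a column out of ls output (breaks on odd
# filenames and locale-dependent formatting).
size_ls=$(ls -l "$tmpdir/a.txt" | awk '{print $5}')

# Robust and fast: ask stat for exactly the field you want.
size_stat=$(stat -c %s "$tmpdir/a.txt")

echo "ls says: $size_ls, stat says: $size_stat"
rm -rf "$tmpdir"
```

No object pipeline needed; the structured data was always one command away.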

I imagine that the reason that Microsoft is porting Powershell to Linux has nothing to do with any hopes that it will see widespread stand alone use. I suspect it has far more to do with supporting products that MS is porting to Linux such as MS SQL Server which may have some PS dependencies.

There was an "object shell" for Linux some years ago (well before Powershell existed) whose name I can't recall, but it died from lack of interest. It simply didn't solve any problems that people actually had.

It gets worse: Microsoft’s Spectre-fixer wrecks some AMD PCs

thames

Aren't Athlons from the late 1990s to very early 2000s? I'm surprised that given the RAM, graphics, and hard drive requirements changes since then that a PC from that era would run Windows 10 at all.

Amazon: Intel Meltdown patch will slow down your AWS EC2 server

thames

Re: maybe it's time to re-consider server-side inefficiency

I don't think that re-writing your web application in C is going to help much. It's system calls which are slowed down, which in this context means mainly I/O intensive tasks. The big hits will likely be in the database and web server itself, both of which will already be written in C or C++. They are also often running on different servers from the application processes anyway. If it wasn't financially worthwhile writing your web application in C before all this happened, it won't be now.

You would probably be better off looking for ways to decrease the number of times you have to hit the database and to reduce the size and number of separate files in your web pages. In this type of optimisation having a language which allows for faster development would be an advantage.

With Python by the way, the really CPU intensive libraries tend to be written in C to begin with. If not, there are often 'C' versions available. One thing that Python does really well is interface with C libraries. If you do want to look for CPU optimisations in this area, then looking at libraries is a good place to start and lets you have the best of both worlds.
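A minimal sketch of what that C interfacing looks like, using ctypes from the standard library to call strlen() out of the platform C library on a Unix-like system (real projects would load their own compiled library the same way):

```python
import ctypes
import ctypes.util

# Load the platform C library; find_library resolves the proper
# name (e.g. libc.so.6 on Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes converts arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The call crosses into compiled C code with no wrapper needed.
print(libc.strlen(b"hello, world"))
```

For anything performance critical you would normally reach for an existing C-backed library first, but when one doesn't exist this is how cheaply you can bolt one on.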

As for switching back to CGI applications, given how those work I wouldn't want to guess how they are affected by these CPU bugs without some actual testing. They have a lot of per-call system related start up overhead, which may get a lot worse with the new Intel fixes and completely overwhelm any actual processing they do internally. That overhead is of course why we moved away from them to begin with.

Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

thames
Trollface

Re: I finally switch from AMD to Intel, and this is what happens.

I'm typing this from a PC with an AMD CPU and running Ubuntu. Aren't I feeling smug right now.

Given how many people are affected, I can't see Intel replacing the hardware for free. This is worse than the infamous Intel floating point math bug.

I'm now waiting for people to re-run loads of benchmarks after the patches come out to see how much performance was lost.

Why bother cracking PCs? Spot o' malware on PLCs... Done. Industrial control network pwned

thames

Re: Bandwidth?

The PLC will be located in an electrical enclosure, in most cases made of steel. RF isolation - in both directions - is a major function of the cabinet (physical protection, electrical safety, and limiting the spread of fire being the other main purposes).

Meanwhile there will be shed loads of RF hash put out by electric motors, solenoid valves, motor drives, and miscellaneous electronic gadgets. Picking anything meaningful up from a drone flying overhead is not likely.

The most plausible scenario will be the one that Stuxnet used. Just use bog standard Windows viruses and take over the PCs which inevitably get connected to PLCs either full time, or from time to time. It's rather curious how little mention Stuxnet got in the article, considering that it's the canonical example for control system hacking.

As for the S7-1200, that is the series that covers the low end of the PLC range for Siemens. Typical applications for these don't get connected to any sort of network at all, let alone an air gapped one. I suspect it was chosen for the experiment simply on the basis of cost, as the larger more complex models of PLC can get rather expensive.

As for the supposed application, if you really want to know the topology of the network, just use standard off the shelf Windows viruses to infiltrate the business network and download the electrical drawings and PLC program backups from the shared drive used by the engineering and maintenance staff. Or just phone up someone, tell them you're a company quoting on upgrading one of their machines, and ask them to email the materials to you. It's not like this custom built stuff is generally considered to be commercially valuable.

AI smarts: IBM pushes out 'faster than X86' POWER9 servers

thames

Re: Price-performance

I have an open-source software project that some users may be interested in running on POWER, but I've no practical way of testing it. It's a set of C libraries which can benefit from some serious CPU oomph, but I can't claim that it works on POWER.

* It runs on Linux, BSD, and WIndows.

* It compiles using GCC, Clang, and MS VC.

* It runs on x86, 32 bit and 64 bit.

* It uses SIMD on x86-64 for a significant performance boost.

* I have an automated test pipeline set up for all of the above, with tens of thousands of tests.

* I'm working on setting up low cost test systems for ARM and expect to be able to claim that as a tested and supported platform.

But what are my options for POWER, other than splashing out on some eye wateringly expensive hardware despite having no paying customers lined up for it? Not good it would appear.

There's a bit of a chicken and egg problem here. I've written some open source software which can really make use of POWER's supposed advantages, and no doubt other people are in the same boat. However I can't call it a supported platform. Users can either take on the testing and any required porting themselves, or they can just use x86 - where my x86 specific SIMD optimisations may overcome any nominal performance advantages of POWER anyway.

I don't know what IBM can do to address this, but it's one of the disadvantages they are labouring under. SPARC is in the same boat. RISC V is going to have to address this question as well, if they want to break out into the general market rather than just embedded niches.

Russia threatens to set up its 'own internet' with China, India and pals – let's take a closer look

thames

Re: not gonna happen

The BRICS countries created their own alternative to SWIFT a few years ago. That has made them immune to US influence over international payments systems when dealing amongst themselves or with other participating countries.

I suspect if the US were to want to shut down the "ru" domain, they would go after the higher level DNS servers via financial "sanctions". Running a DNS server at the scale we are talking about takes money. Target the money and it doesn't matter what the engineering staff happen to think, they can try running a DNS system in the dark with no electricity and no Internet connectivity if they wish.

Really, if anything at all is surprising, it's that this wasn't done years ago.

Open-source defenders turn on each other in 'bizarre' trademark fight sparked by GPL fall out

thames
WTF?

Someone ought to look in the mirror.

So organisation 'A' is unhappy about organisation 'B' for being too litigious when the intellectual property rights of the people they represent are infringed.

And A's response when they think their own intellectual property rights are even slightly infringed by B? Sue! No hypocrisy there whatsoever I'm sure.

New UK aircraft carrier to be commissioned on Pearl Harbor anniversary

thames

Royal Navy ships are normally named after previous RN ships. This lets them carry over the battle honours from the previous ships of that name. "Battle honours" is a list of battles which ships of that name have taken part in.

Certain names are reserved for certain classes of ships. QE and PoW were previously used for battleships, and so are now used for aircraft carriers. Destroyers and frigates have a separate list of names which also get reused (towns, rivers, etc.).

The previous QE was the lead ship of a new class of WWI battleships that was probably as big a step forward from previous ones in terms of size of gun, firepower, armour, and speed as the Dreadnought had been from pre-Dreadnoughts. I suspect that the current use of the name is at least in part supposed to symbolise a similar degree of advance in the capabilities which the current ship brings to the RN.

The previous PoW was similarly the lead ship of a new class of battleships which came into service at the beginning of WWII, and was also a great technical advance for the RN at the time. Although it wasn't as obvious an advance in capability due to arms limitation treaties, it brought modern ships to a navy whose capital ships were largely made up of survivors of WWI. In that sense the new PoW could be said to symbolise the regeneration of the RN as a global force.

There are no doubt multiple reasons for why those names were picked, but they weren't simply picked out of a hat and they do have precedent as the names of important historical ships.

thames

Re: About those aircraft

The helicopters have already been practising taking off and landing on the ship. The 13th F-35B has just been delivered. The F-35Bs are currently in the US while the crews are being trained by the OEM. I believe they're coming to the UK next year.

The UK "saved money" by decommissioning their last carriers years before getting replacements. That means that new crews have to be trained from scratch on what are pretty large, complex, and dangerous bits of technical kit instead of just transferring a functioning crew over from an existing carrier that is being replaced.

So the first stage is to train the new ship's crew on how to operate the ship. Simultaneously with that, the F-35B pilots, ground crew, and maintenance crews will train with their kit. Once both sets of crews are ready, they'll bring them together. It won't happen overnight, so I think it will be a couple of years yet before they're all ready with 2 dozen F-35Bs plus helicopters. The US may lend some of their planes with crews to allow the ship's crew to gain experience while waiting for their own squadron to be fully operational.

Once that is done however, with 2 carriers the UK will have continuous carrier coverage, as one ship will be ready to go while the other is in refit.

Remember CompuServe forums? They're still around! Also they're about to die

thames
Meh

A Relic of a Bygone Era

Companies used to pay to host their support and user forums on Compuserve. That for example is how Siemens used to do their on-line support and user forums for their industrial automation division. Siemens maintained them on Compuserve long after the Internet became mainstream and walled garden networks such as Compuserve were otherwise a relic of the past (Siemens rarely saw a bad idea they didn't like). I can remember people having to buy Compuserve accounts just to get support from Siemens and for no other reason.

Aside from cases like that however, people gladly cast aside Compuserve, MSN (the original incarnation), and several others whose names I can't remember when Internet service became generally available. With the Internet you could talk to anyone, anywhere, instead of just within your own provider's walled garden, and not have to pay ridiculous extra fees to access Internet email.

Large corporations contracted their external email services through these companies. If you were "in network", you could send email to other companies readily enough. If your email needed to go out to the Internet, then you had to pay extortionate per-byte charges to use their Internet gateway. The real money was in these email services, the forums were just an extra bit on the side to encourage individual users to sign up and so create a critical mass of users. Internet gateway charges were kept high to try to keep their own user base inside the walled garden.

After a while having a Compuserve email address became the symbol of being a dinosaur. Large corporations also had their email service provided by Compuserve and other similar companies. You could tell which ones those were by the bizarre email addresses. I think that there was even a Dilbert cartoon about it.

Once enough users were on the Internet, the user base for Compuserve and their ilk started eroding due to a combination of cost and access to content outside the walled gardens. Without the email services which hauled in the cash, the whole business model fell apart. Even Microsoft were forced into a humiliating climb down and admit that MSN was never going to replace the Internet and so shut down MSN (later re-using the brand name for their Internet "portal", back in the days when those things existed).

Having gone through those walled garden days, all I can say about the people who think it's a great idea today to re-create those days in the new walled garden services of today is they are utter retards.

Windows on ARM: It's nearly here (again)

thames

@hammarbtyp: "The questions is why?"

The reason is that the Achilles heel of Windows when it comes to portability is that the only reason people buy Windows is to run proprietary x86 applications. Microsoft can port Windows, MS Office, MS SQL Server, Dot Net, and various other bits of software that they own to ARM, but each customer will always have a handful of third party applications that haven't been ported and that they don't want to do without. Being able to run those poorly is better than not being able to run them at all.

It's a chicken and egg problem: third party developers won't support Windows on ARM until there's a market for it, but customers won't create a market for it until there's enough third party software to run on it.

There's work in porting software, since you have to fix the application bugs which have been there all along but never surfaced on x86, and which will show up on ARM. Then you have to worry about performance tuning on a different architecture. Plus you have to set up the development, testing, and QA servers to support the new architecture and add it to your release pipeline. And you have to do all this while waiting for years for a significant market to materialise.

Linux distros have supported multiple architectures for years because they have the source code for the applications they distribute and can simply recompile them in most cases. Since portability has been a core feature of most "unix" style operating systems from the early days, most of these applications have had the porting bugs wrung out of them years ago. Debian has official support for AMD64, i386, ARM, MIPS, PPC, and S390x (IBM mainframe). They have unofficial support for others as well, including SPARC.

Microsoft is starting without any of these advantages. Their Itanic port died some years ago and the few third party software developers who bought into that will be shy of repeating that experience until they see a real market running Windows.