* Posts by thames

1125 publicly visible posts • joined 4 Sep 2014

MAC randomization: A massive failure that leaves iPhones, Android mobes open to tracking

thames

Since the phone can be tracked anyway, why bother?

Perhaps the reason that most of the Android manufacturers didn't bother implementing MAC randomisation is that, as the story states, it doesn't help: the phone can be tracked anyway using another technique which is inherent in the chipsets.

Without addressing all the other tracking methods, MAC randomisation just becomes security theatre. All it will do is give some easily pleased people a warm fuzzy feeling and a sense of self-justification for having bought the phone they intended to buy anyway.

The answer is to turn Wi-Fi off until and unless you intend to use it right then and there. Doing that tends to save battery life as well, so it's worth doing anyway. Making it easy to turn Wi-Fi on and off as desired is something that is under OS control, so that's where a phone maker could make a difference if they wanted to.

Researchers offer simple scheme to stop the next Stuxnet

thames

Re: And now the bad news

They're called "soft logic systems". They're not very popular since they all, or nearly all, require a Windows PC to run on, with all the aggravation involved in that.

I've only ever seen them used in two applications. One was a complex testing machine where the main testing program was a conventional PC program, but used a soft logic system to control the clamps and conveyor system for each specific customer application. The soft logic system was from one of the biggest PLC vendors but it was massively buggy and only ran on a version of Windows that was no longer supported. It hooked deeply into Windows via a third party product to provide real time extensions, so the prospect of moving that to a newer version of Windows was nil. What a headache.

The soft logic systems don't sell much, so they don't seem to get the same degree of vendor testing and support that the regular PLCs do. You also end up on the Windows upgrade treadmill, with the added disadvantage of the soft logic system vendor's upgrade schedule being several Windows versions behind Microsoft's.

The other was a factory which drank the kool-aid on "PC based control" and put soft logic systems in everywhere. It was a lot of trouble for no benefit. The only thing they really needed the PC for was to act as a PC based HMI. There are better and cheaper ways of doing that with a regular PLC.

Most factories that want a PC to work together with a PLC simply network the two together and let them talk to each other.

The main complaint that people actually have about PLCs is vendor lock-in features which prevent you from interfacing a PC to a PLC without going through a massively overpriced, overly complex, and limited functionality gateway (generally an OPC server). Soft logic systems don't generally solve that problem, since, surprise surprise, they also replicate the vendor lock-in features.

thames

Re: re: write enable switch

@Anonymous Coward said: "What's a 'write enable' switch?"

This is a switch found on many PLCs which switches between program, run, and run/program modes. In the first mode you can change the program, but the PLC doesn't run. In the second, the PLC runs but can't be reprogrammed. In the third you can do both.

Not all PLCs have all three modes. The market is segmented based on cost and what part of the market the vendor is addressing.

Depending upon the PLC, you may be able to program it while it is running, or you may have to shut it down to make program changes. If the switch is in the right mode, you can change between stop and run modes remotely through commands from the programming software.

Sometimes it's a key switch which allows you to remove the key, but it may be just a toggle switch. The key doesn't really add much security over a toggle switch, since the keys are normally all the same for a particular product line.

The switch, however, is simply an input read by the PLC's microprocessor to tell it what to do. PLC programs run in what is effectively an interpreter. Originally these were simple byte code interpreters, although these days some sort of pre-compiler may be running behind the scenes in the PLC or the programming software. If there were a bug in the firmware, you might be able to bypass the switch. For that matter, it is possible that some PLCs simply rely on the proprietary programming IDE to obey the rules when told what state the switch is in.
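
To make the point concrete, here's a minimal sketch (Python purely for illustration; real firmware would be vendor C code on a microcontroller, and all the names here are my own invention):

```python
# Illustrative sketch only - not any vendor's firmware. The keyswitch is
# just another input the CPU reads; the only thing enforcing it is a
# firmware check like this, so a firmware bug (or an IDE that is simply
# trusted to behave) can bypass it.
MODE_PROGRAM, MODE_RUN, MODE_RUN_PROGRAM = range(3)

def store_program(new_program):
    # Stand-in for writing to the PLC's program memory.
    print("program accepted")

def handle_download_request(mode_switch_input, new_program):
    if mode_switch_input == MODE_RUN:
        raise PermissionError("switch in RUN mode: program changes rejected")
    store_program(new_program)
```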

The PLC may also have a separate flash memory card or module to store the program on. The program would run from RAM, but on restart it is reloaded from flash. Data may be stored in battery (or super-capacitor) backed RAM, or in flash. There are many combinations and variations of this, so you have to read the manual for any particular model to see how it works.

Overall, you can't rely on the switch as a security feature since you don't know what mode it was left in. The PLC will be inside an electrical enclosure along with all sorts of other electrical equipment for the machine, so the programming port is often brought to the outside of the cabinet via a cable so that the maintenance crews can get at it without shifting half a dozen roller conveyors and stacks of whatever crap production has piled up in front of the door.

Being able to see the program can help in troubleshooting, and in a factory time is big money. This by the way is the main reason why ladder logic is the dominant language used in programming PLCs. It gives you a graphical representation of the program logic, with the live data states highlighted. Simply find the rung associated with the output you want, follow the rung from top to bottom and left to right, look for the graphical symbol that shows a "false" logic condition (it's the one that isn't highlighted), and that's your culprit. It will be either a broken sensor or a program bug (or lead to another rung, which leads to another rung, etc., until you get back out to a real world input). This is why the bulk of the programming software sold goes to maintenance crews, not people who write new PLC programs from scratch.

thames

The researchers are a bit detached from reality.

The researchers are basing their assumptions on the premise that PLCs are used in the same way that commercial PC software is developed. They aren't in most cases, and the researchers' whole plan falls down on that fact alone.

Ladder logic is meant to look like wiring up relays in order to be familiar to people who aren't professional programmers. It lets the regular maintenance crews repair and improve equipment controls at all hours of the day or night. Most PLC programming is done by electricians or similar skilled trades, especially after installation. Input blown on the third I/O module and you don't have a compatible spare module? No problem, find an unused input point, switch the sensor's wiring to that, and change the program. You're off and running again in 15 minutes. Something has changed in the parts from your supplier and the fix is to change a delay timer? No problem, that's a quick program fix. The original program has bugs which don't show up until the right conditions occur, and then they trigger all the time? Debug and fix on the spot. Etc. That's why factories use PLCs instead of writing their software in C on an embedded computer.

El Reg: "They note it's very easy to conceal commands that will go as far as bricking the PLC, using legitimate instructions to fool around with arrays"

That happens with ordinary bugs. I've dealt with this sort of problem, although the PLCs didn't get "bricked", just the program corrupted. Some brands (e.g. Siemens) are more prone to this than others because they allow programming methods more akin to a restricted form of assembly language than ladder logic. Ladder logic itself is pretty much bomb proof unless there are firmware bugs.

El Reg: "or create stack overflows (the latter is pure simplicity: create a recursive subroutine that calls itself)."

Any PLC that I've worked with has a hard-coded subroutine call depth limit to prevent that, usually a pretty low number. That's why it isn't practical to use recursive algorithms on a PLC. A more plausible method is to create an infinite loop in a PLC that allows in-scan looping (many do not). However, PLCs have a watchdog timer to prevent that: if the scan times out, the watchdog trips and faults the PLC. This is PLC programming 101 level stuff. These researchers are not coming up with anything new.
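
As an illustrative sketch, not any vendor's implementation, the two guards just described amount to this:

```python
# Illustrative sketch of a PLC interpreter's two guards: a hard
# subroutine call-depth limit and a scan watchdog. A real watchdog is a
# hardware timer that can trip mid-scan; the after-the-fact check below
# is just to show the principle.
import time

MAX_CALL_DEPTH = 8        # PLCs typically allow only a handful of levels
SCAN_WATCHDOG_S = 0.010   # fault the controller if one scan runs longer

call_depth = 0

def enter_subroutine():
    # The interpreter bumps a counter on every CALL and faults hard
    # when the fixed limit is exceeded - hence no practical recursion.
    global call_depth
    call_depth += 1
    if call_depth > MAX_CALL_DEPTH:
        raise RuntimeError("PLC fault: subroutine nesting limit exceeded")

def exit_subroutine():
    global call_depth
    call_depth -= 1

def run_one_scan(scan_logic):
    start = time.monotonic()
    scan_logic()
    if time.monotonic() - start > SCAN_WATCHDOG_S:
        raise RuntimeError("PLC fault: watchdog tripped, scan overran")
```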

El Reg: "First, companies should centralise their PLC software storage into a single location,"

Any competently run factory already does that. You need that so you can replace the PLC CPU when it craps out. It's routine repair and maintenance.

El Reg: "with all engineers submitting what they call “golden samples”

Most PLC programming is not done by "engineers" much like most business spreadsheets are not created by "engineers". This is why the PLC programming languages created by academics have a minuscule market share compared to ladder logic outside of a few niches.

El Reg: "Second, operators should (preferably automatically) run periodic checks that validate the software on PLCs with the central logic store. ”

The software to do this with the major PLC brands has been off the shelf for several decades at least. It's nothing new. Most factories don't do it because they don't have the centralised industrial networking infrastructure to make it feasible. Instead they rely on manual procedures to send program update files back to whatever network disk share they are stored on. Of course, if you do centrally network the PLC CPUs in order to do this automatically, you've now created a new security risk that didn't exist before, because now this potential malware has a path to all the PLCs via the IT network.
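
The check itself is trivial. A minimal sketch of the "golden sample" comparison, with hypothetical file paths, might look like this:

```python
# Minimal sketch of the "validate against the central store" idea: hash
# the program read back from each PLC and compare it with the stored
# golden copy. File paths here are hypothetical.
import hashlib
from pathlib import Path

def file_hash(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_plc(name, golden_path, uploaded_path):
    if file_hash(uploaded_path) != file_hash(golden_path):
        print(name + ": program differs from golden sample - investigate")

# Example usage:
# check_plc("line3", "/srv/plc-store/line3.prg", "/tmp/line3_upload.prg")
```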

El Reg: "and PLCs only take updates from those samples”

This will do nothing to alleviate a Stuxnet-like problem. Stuxnet worked by infecting the PCs used for PLC programming and used the software to download changes to the PLCs. The virus thus operated with the same credentials as an authorised person. Stuxnet was simply a normal Windows virus which was used for an unusual purpose. Network air gaps were bypassed by simply infecting the appropriate PCs. To solve this problem, you need to solve the Windows virus problem. Good luck with that one. Most of the major PLC brands have had username/password features for decades to limit who can change programs, but hardly anyone uses them because it doesn't solve any realistic problems for them.

Overall, there's nothing really new in the story. The recommendation to keep "known good" backups of PLC programs is good practice for the normal problems associated with hardware failures. I can think of a number of occasions where a production line was shut down because the factory didn't have a working backup of a PLC program when the machine puked its program, and a new program had to be cobbled up at short notice by working long hours. The main threat here though isn't hackers, it's project managers who get given a CD with the PLC program (and all the drawings) by the machine builder and who proceed to chuck it in the back of a drawer and forget about it instead of seeing that proper backups are made.

Java and Python have unpatched firewall-crossing FTP SNAFU

thames

Re: Oops, especially for Oracle

@Anonymous Coward said: "I can't say I'm surprised about Python; ... the lack of static typing probably makes security testing much harder."

You may wish to consider how the Python version of the bug is much less severe than the Java version before jumping to any conclusions. The great degree to which Python is dynamic makes testing easier, not harder.

Relying on compiler type checking to find your bugs is horribly insecure. That's not what it's there for. It exists to give the compiler hints about the data so that it can generate more efficient code. With this type of application all the relevant user data are byte streams, and that is all the compiler would know. To find this sort of bug, you need to test, test, test, and then test some more using automated tests.

A bug has been opened on the Python bug tracker related to the post linked in the story. The reporter has looked at the FTP spec, and apparently the problem is inherent in the spec itself. The solution may be to raise an exception if CWD commands contain any blacklisted characters, but it's hard to know if that will work with all legitimate FTP servers.
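
As a guess at what such a mitigation might look like (my own sketch, not the actual patch), the check could reject control characters before any CWD command is built from the path:

```python
# Sketch of a possible mitigation: refuse FTP paths containing the
# control characters that allow extra commands to be smuggled in.
from urllib.parse import urlparse, unquote

def check_ftp_path(url):
    path = unquote(urlparse(url).path)  # decode %0d%0a and friends first
    if "\r" in path or "\n" in path:
        raise ValueError("suspicious control characters in FTP path: " + repr(url))
    return url

check_ftp_path("ftp://example.com/pub/file.txt")        # fine
# check_ftp_path("ftp://example.com/a%0d%0aPORT%20...") # raises ValueError
```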

The Java exploit sounds like the more serious problem of the two, as Java (unlike Python) is installed in a lot of web browsers.

For a vulnerable Python client, you would need to be running something like a web crawler that uses urllib. For that sort of application you are more likely to be using either the "requests" library or something like "Twisted" rather than rolling your own with urllib.

If you are using "urllib" and you don't need to handle FTP, then at the application level you might simply replace the default FTP handler with something that raises an exception. That may be as simple as writing a class with the same interface as the FTP handler that simply does nothing other than raise an exception (to give you reasonable log messages). Then it should simply be one line of code to point the FTP handler to your own class instead of at the default one. If you do need to handle FTP as well as HTTP, you could do something similar to insert your own filter before deciding whether to pass the request on to the FTP library. And if you don't need all the automatic bells and whistles in urllib and just need basic HTTP (or HTTPS), then why not simply call the relevant standard protocol library directly instead of going through urllib?
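
A minimal sketch of that first option, using only the standard library (the NoFTPHandler name is my own):

```python
# Subclass the stock FTPHandler so build_opener() uses it in place of
# the default, and have it refuse all FTP requests with a loggable error.
import urllib.request
import urllib.error

class NoFTPHandler(urllib.request.FTPHandler):
    def ftp_open(self, req):
        raise urllib.error.URLError("FTP disabled by application policy")

opener = urllib.request.build_opener(NoFTPHandler)
urllib.request.install_opener(opener)
# urllib.request.urlopen("ftp://example.com/x") now raises URLError.
```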

Since the bug seems to be inherent in the FTP spec (when combined with NAT and permissive firewalls), I would not be surprised if the same problem were to be found in libraries used with many other programming languages.

Meet LogicLocker: Boffin-built SCADA ransomware

thames

Re: Option: shut down the line.

Most machines in most factories can be shut down and have the PLC CPU module replaced. PLCs do fail and have to be replaced. It's a routine maintenance procedure.

If you have a process that is still running (since if it's not running, it's already shut down anyway) and absolutely must not shut down, then the system should be one that has redundant CPUs. These are off the shelf products made for this situation. You shut down one CPU and let it fail over to the other, replace it, then do the same to the other one.

As for the story's "a response plan could involve keeping backups of critical programs on the premise", any competently managed factory will already have that. Again, PLCs do fail naturally, and always have. Sometimes it's not even the hardware which failed, sometimes the program has become corrupted by voltage spikes, or drop outs, or some other reason.

The real problem which factories would face in this situation is diagnosing what is going wrong. Industrial networks don't normally have the sort of network monitoring equipment which would make this easy.

The best approach for concerned factory engineers and managers to take however would be to isolate each machine or node as much as possible to ensure that they have little opportunity to interfere with each other. This is just good industrial control design practice regardless of whether you are dealing with malware or not.

If you need to have systems communicating with each other, give them only very limited connectivity (via firewalls or other systems) rather than putting everything on a "flat" network such as vendors seem to like to show in their brochures. Many PLCs will fall over if you just accidentally bombard them with "too much" traffic, so again this is prudent system design for reliability regardless of whether you are concerned about malware or not.

Make America, wait, what again? US Army may need foreign weapons to keep up

thames

Not Just the Fuchs

"The US military has in the past purchased foreign-made weapon systems, like German-made 60 Fuchs NBC reconnaissance vehicles"

Or the "Stryker" armoured vehicle, which is the mainstay combat vehicle of US infantry forces, and which is made in Canada?

There are also a lot of components which go into American weaponry which are developed in other countries but bought by the Americans for their own systems. Much of this goes unnoticed since the foreign suppliers keep a low profile.

The main problems the US military has at this time are twofold. The first is that they've been so focused on colonial adventures (under the label of "anti-terrorism") that they have neglected the art of fighting wars against "proper" capable enemies.

The other is that when they do try to develop a new weapon, they tend to ask for such "moon on a stick" goals that the whole project collapses when reality eventually sets in. The idea of steady, incremental progress just doesn't sit well with them.

Facebook pimping for politicos despite fake news 'purge'

thames

Facebook, taking credit for electoral victories? Eh?

"Brexit does not yet appear among Facebook's list of electoral success stories – although it does list ... along with victories by Canadian Liberals,"

Eh? A quick google of news stories related to the last Canadian federal election has them saying that digital media had little or no influence on the outcome of the election. For example: http://www.cbc.ca/news/politics/canada-election-2015-social-media-1.3277007 ( Social media's significance oversold amid election hype ).

Very few people (single digit percentages) in Canada do anything political on Facebook. Those who do are mainly people who are already deeply committed to a point of view and unlikely to change their minds regardless. There's also no evidence that social media had any influence on voter turnout.

What decided the last election was that the Liberals, the party that has ruled Canada for most of the past century, got their act together again after years of backstabbing and infighting, and allowed voters a palatable choice for ejecting the deeply detested Harper Conservatives (the Harper faction being so deeply detested that prominent mainstream Conservatives came out openly against him and asked their supporters not to vote for the party).

Journalists rely so heavily upon Twitter for their news releases that they over-estimate the influence that social media has on everyone else. This isn't just my opinion, this is what journalists themselves had to say about it.

Facebook is simply hyping their supposed influence in an attempt to sell more ads. Shock! Horror! Purveyor of dubious ads and fake news makes dubious claims about the effectiveness of giving them more money!

Why do they list Canada's 2015 election as a success? Simple, it was a relatively well known recent election where the existing government got tossed out and the result wasn't something so controversial that Facebook was afraid to associate their brand name with it.

Facebook is part of the advertising industry, and it shouldn't be too surprising that an advertising firm makes ridiculous claims in an effort to sell more product.

US Navy runs into snags with aircraft carrier's electric plane-slingshot

thames

Re: EMALS

@Alan Brown - "I've been to at least one airfield fitted with CATOBAR facilities for training. Doubtless the USA has many more."

The UK has none. Building those wouldn't be cheap.

"The training argument doesn't hold water because the most critical part (putting it all into practice) need a ship no matter what technology is used and you don't want to be doing that with your active-deployment boat."

It takes a lot less at-sea training time for VSTOL than for CATOBAR, because VSTOL simply requires less training. And yes, you don't want to be doing that with your active deployment ship. That's the point: the UK will have only one ship in service at a time, while the other is in refit. There won't be enough crews to operate both carriers at the same time, so time spent on at-sea CATOBAR training comes straight out of active deployment time. The US has multiple aircraft carriers while the UK only has two, and there are still times when the US has no carriers on deployment because they're all either in refit, working up, or broken down and being repaired.

"nuclear powered ships have a lot more space aboard for facilities, fuel and accomodation,"

Er - that's a factor of how big you build the ship. If you want something bigger than the QE, then build it bigger. The UK built something big enough to hold all the planes they planned on buying for them.

"plus they don't need to refuel in potentially hostile areas or that space can be used to carry supplies for your support group."

The planes still need fuel and bombs, and it's those which get used up quickly on active deployment, not the fuel for the ship itself. Plus, the escort destroyers all use gas turbines anyway, so the fleet still has to be refuelled at sea (it's not like the UK has loads of spare ships).

The main reason the US carriers still use nuclear reactors is that they need to generate steam for the catapults, and fossil fuel boilers are obsolete technology so far as naval vessels go. EMALS may solve that aspect when they get it to work, but the backup plan for the newest US carrier is to retrofit steam catapults if EMALS doesn't work out.

Oh, by the way, that's also why old fashioned steam catapults weren't an option for the QE class - no steam boilers to get steam from.

thames

Re: EMALS

@fishman - The US CVV wasn't anything at all like the QE class. For starters, it was to have steam catapults!

The QE, by the way, does not suffer any operational limitations from operating VSTOL aircraft instead of CATOBAR.

The things that the US navy are looking at with respect to the QE class are things like the number of crew to operate it. The QE will have a crew of 679 (aside from the air wing) as compared to 3,200 for a US Nimitz class (again, aside from the air wing). Crew costs are a major part of the cost of operating a navy, and the US naval budget is under increasing pressure. As I said in a previous post, operating costs are hugely important, and this is something the UK spent a lot of time and effort on when designing the QE.

thames

Re: Has surge recxently been redefined & I missed it?

I suspect it comes down to some daily maintenance being required at some point, so 24/7 operation isn't achievable in practice.

thames

Re: EMALS

The EMALS project was still very preliminary and nebulous when construction of the UK's aircraft carriers started, so how much it would cost and how to actually design it into the ship couldn't be nailed down at that point.

The UK therefore went forward with using STOVL aircraft (i.e. F-35B) as the low risk option.

After the Conservatives came to power, they looked at using EMALS as a cost saving. The idea was to save money by using cheaper aircraft. However, someone ran the numbers and found that EMALS with conventional aircraft was more expensive, not less, so the plan switched back to the F-35B.

The cost difference has more to do with the much higher training and qualification costs for catapult launched aircraft, rather than the initial capital costs. STOVL aircraft are much easier to land and take off, and you can do most of the training (and maintaining the qualifications) on land rather than tying up a ship. Furthermore, the UK plans to only have one of the two ships in service at a time, which means that tying up a ship on training and maintaining qualifications would take away from time on active service. It's much easier and cheaper to train a STOVL pilot than a CATOBAR (catapult and arrestor hook) pilot.

What is more, the UK plans to have the RAF and RN operate with a common pool of pilots, instead of having a dedicated set of naval pilots. This means the ships can normally operate with a dozen or so planes, but "surge" to an air wing several times the size when the mission requires it.

Canada did some cost estimates for air force fighters which are illuminating. While these were not naval planes, the numbers shouldn't be too far off. They found that something like 80 - 85% of the cost of a fighter jet over its life time was due to fuel, wages, maintenance, etc., while only 15 - 20% was the initial purchase price. Cost analyses based solely on the sticker price are often way off the mark.

Overall, the UK probably dodged a bullet on this by avoiding EMALS. I have heard the Americans are looking at the UK's new carriers and comparing them to their own in terms of purchase price, operating cost, and capability, and concluding that the UK is getting a lot more value for money.

Creaking Royal Navy is 'first-rate' thunders irate admiral

thames

A few clarifications

With respect to the engines, Rolls Royce built the turbines, which as the bootnote mentions are working fine. The intercooler-recuperator was designed and built by an American defence company. This was a US-UK joint venture which was supposed to provide the future engines for all US and UK destroyers, and have a significant civilian market as well.

The UK ended up being the lead customer. When it turned out that the super-duper, revolutionary, transformational intercooler-recuperator was not so super-duper after all, the US cancelled their participation in the joint venture and the RN was left holding the bag with the engines in the Type 45. The new Type 26 and Type 31 frigates will use bog-standard conventional Rolls Royce marine gas turbines without the problematical gubbins.

With respect to the Harpoon missiles, they're both obsolete and soon to be life expired. The obsolete bit means that pretty much any modern destroyer or frigate could shoot them out of the sky before they posed much of a threat. They're only of much use against elderly and obsolete ships. The life expired bit means that their "best before" date on the explosive and flammable bits is coming up, which means you either have to bin them or spend loads of money rebuilding them. Given the RN's relative lack of cash, and the fact that the elderly missiles are not all that useful these days (see above), they decided to bin them. Keeping them would mean robbing from another budget and doing without something else they think they need a lot more.

The intention is to replace them with a newer model of missile. They are looking at buying the US replacement for the Harpoon, but that project is late and so is not available for now. There's also a project by a European missile company which could be in the running as well. The RN will hang onto its money in the mean time until there's something on the market to buy.

If they really need to go to war with another navy, they can tap cabinet for UOR funds (extra funds allocated for a war, rather than coming out of the RN's regular budget) and buy current model missiles off the shelf from someone and stick them on their ships pretty readily. Anti-ship surface to surface missiles are one of the easiest weapons systems to integrate. Most of them were originally designed as container systems that can be bolted down to a deck and plugged in so that they could be retro-fitted to many existing ships.

With respect to the RN overall, the biggest issues on the equipment side are related to being starved of funds a decade or so ago, and so being left with a set of ships and weapons which are rapidly reaching their end of life. It takes years to get a new ship class from PowerPoint to in the water, and it's easy for the government of the day to put off naval spending so they can spend the money on pointless wars in Iraq and Afghanistan without raising taxes.

The other big problem is personnel: the RN are struggling to retain staff. A lot of their technical staff with transferable skills can both double their salaries and have a normal family life by simply signing off and taking a job with a civilian company in power generation or something similar. Low salary caps mean that people quit, more workload gets thrown onto fewer people, which causes more of them to quit, in a death spiral of resignations. Current government policies which are intended to take all the fun out of naval life (e.g. restrictions on shore leave when abroad) remove what few lifestyle attractions the navy had. It's all very well to say that the RN should have more ships. The reality is that they're struggling to man the ones they have right now. If they had more ships they would be tied up at the dock empty.

Put Firefox DE and Chromium in blender. Devs... Is it pure Blisk?

thames

Re: Do you want Firefox to perform better?

I've definitely never seen any problem like that happen with Ubuntu (Unity or before that Gnome 2), or with Mandrake before that (KDE or Gnome).

Run a JSON file through multiple parsers and you'll get different results every time

thames

Python Results

I checked the results for Python 3.5 (which is what I use), and I don't see much of a practical problem. The issues seem to mainly come down to the parser not rejecting invalid unicode, and the handling of extreme floating point numbers.

The author puts the test results into the following categories:

expected result

parsing should have succeeded but failed - Python 3.5 = 2

parsing should have failed but succeeded - Python 3.5 = 3

result undefined, parsing succeeded - Python 3.5 = 15

result undefined, parsing failed - Python 3.5 = 5

parser crashed - Python 3.5 = 0

timeout - Python 3.5 = 0

The 2 "parsing should have succeeded but failed" were two versions (big endian and little endian) of obscure utf16 strings which the author feels should have been interpreted as empty strings but which the parser rejected (at least that's what it did when I tested it). Nearly every other parser "failed" these tests for the same reason. When nearly everyone else does things one way even though the spec implies something different, you're probably better off going with the crowd. I can't say I can really argue with how Python handles it.

The 15 "result undefined, parsing succeeded" are also mainly obscure unicode conversions. You can say that JSON "shall be unicode" until your blue in the face, but if someone sends you Windows 1252 or ISO-Latin-1, having the parser simply reject it is going to cause you nothing but grief in the real world. You're better off getting the "invalid" unicode into your program and handling it according to whatever is appropriate for your situation (if in fact is actually is a problem for you). The best place to handle "bad data" (assuming the data is even a problem for your application) may be elsewhere in your program rather than in the JSON parser. The article itself admits that this is not necessarily wrong. Like in the above case, the majority of other parsers handle these in the same way.

He also very arbitrarily "fails" Python in this category because it didn't reject a deeply nested array (500 deep), despite the spec saying that there is no limit. Any limit is implementation dependent. Since any computer will have hardware limits to how much data it can handle, that limit is inherently arbitrary. The author is clearly wrong in this case.
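
It's easy to demonstrate that the limit is an implementation artefact rather than anything from the spec:

```python
# CPython's parser recurses, so very deep nesting eventually hits the
# interpreter's own recursion limit - a limit that comes from the
# implementation, not from the JSON spec.
import json

json.loads("[" * 500 + "]" * 500)  # 500 deep parses fine, as the test found
try:
    json.loads("[" * 100000 + "]" * 100000)
except RecursionError:
    print("implementation limit reached - nothing to do with the spec")
```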

The 3 "parsing should have failed but succeeded" were related to the parser handling NaN, -infinity, and infinity instead of rejecting them. Whether or not you want the JSON parser to accept those is up to the programmer as this is a parameter in the function call whether to raise an exception when it encounters them. If you don't want to be able to handle them, then disable it. The author admits that he "failed" python on these three tests simply because he feels that most people would set it to accept NaN, -infinity, and infinity. I have to give the author's decision on this a big WTF?

The 5 "result undefined, parsing failed" again seemed to be similar to the 2 "parsing should have succeeded but failed" cases. That is the parser rejected the data instead of silently returning empty data structures. Every other JSON parser also "failed" on this one. Again, from a practical standpoint I can't argue with the way that Python handles it.

Python 3.5 did not crash or have time-outs on any of the tests. This is a very big plus in my book.

The problem with articles that purport to test some feature with every common language is that the author usually doesn't understand all the languages themselves in any great depth, and he often accepts the design choices of his favourite language as being the "right" way to handle things.

Cheap, lousy tablets are killing the whole market says IDC

thames

And if the cheapie does the job, why not buy that instead of spunking out loads of money on something expensive? Vendors are going to have to get used to the idea that people are not going to spend $3,000 on a laptop, plus $1,000 on a phone, and then another $1,000 on a tablet, and then run out and do it all over again in 2 years time when the new models come out. There's a limit to how much money people have to spend and they have other things to spend it on than just another electronic gadget.

For most people who have one, a tablet fills the pretty simple role of letting them read the news, check the weather forecast, or whatever, off the Internet without having to fire up the PC. You can do that quite nicely for well under $200.

Now this is the point where someone, usually posting as AC, will chime in with his esoteric use case requiring an eyewateringly expensive tablet. That's fine, but as the market share figures show, that isn't what most people want. It may be what you want, but what you want is evidently pretty irrelevant to what is actually going to happen in the marketplace.

F-35 'sovereign data gateway' will stop US reading pilots' personal data? Yeah right

thames

"Nerd Point for anyone who can name the last time that fast jets operated from a location without any IT connectivity for longer then 29 days."

Any war fought abroad other than the most recent ones?

And in future it will be "any war fought against anyone other than a few guys running around in flip-flops with AKs". The first things to go in any future war against a major power will be all of the communications satellites and all of the undersea cables. So sorry, no Internet (or any other net) for you once the shooting starts. Facebook addicts may wish to take note of this also.

That of course doesn't even address all the operational meta-data which the US will be gathering about your planes, and who they will be passing the intelligence on to - possibly including your enemy.

thames

Someone's been reading too many LM press releases on sales "successes"

El Reg - "Other nations buying the F-35 include Great Britain, the Netherlands, Australia, Israel, Canada, Norway, South Korea, Denmark, Italy, Japan – and Turkey".

Canada is buying the F-35? Has Lockheed Martin told the Canadian cabinet or air force about this? Because they've said they're not. Canada backed away from the purchase about four years ago. LM will, however, be allowed to bid in an open competition (already in progress) against four other planes.

Canada is also replacing most of their navy at the same time. The rising costs of ships and planes has meant that one or the other had to give way. The navy was given precedence and a cheaper solution for the planes is required. All the whizzy "n'th generation stealth whatever" makes absolutely no difference whatsoever in the air defence role which the planes are being primarily bought for, so in the end it will come down to price.

The current defence minister has put a very high priority on replacing the existing planes ASAP, and there's a good chance that LM won't be able to meet the timeline for full capability (in the roles which Canada wants) either.

So for a number of different reasons, LM's chances of winning are not rated very highly. At present though, they have no more right to claim Canada as a customer than Eurofighter, Dassault, Saab, or Boeing do.

Ubuntu 16.10: Yakkety Yak... Unity 8's not wack

thames

Tried it out yesterday

I tried it out on a live DVD yesterday, mainly to see what Unity 8 was like. Unity 8 seems to work fine on my hardware (AMD A8-5600K APU with Radeon(tm) HD Graphics × 4). Some of the effects looked nifty, but I prefer the Unity 7 UI in terms of making my life easy. In particular, I can't imagine living without virtual desktops, and I didn't see any in Unity 8. What is more, if the launcher bar is supposed to pop out somehow, I didn't see it. Instead there was what I'll call a "launcher window". However, Unity 8 and Mir are still focused on mobile and tablet targets, and desktop trails after. This is understandable, as Unity 7 is probably by far the best UI to be found on Linux today, so there's not a lot of pressure to change it. I use 16.04 on a daily basis, and I'm quite content to wait.

I didn't try the Snaps, so I can't really comment on them.

As for the (Gnome) "Software" package, I install command line and library packages using a GUI (Ubuntu Software Centre). It's a lot easier to search for and install packages that way. Using a GUI I don't have to Google a package name to find out the exact package name (which is often not the same as the common name) and what it does, and the integrated ratings system helps as well. "Software" is completely pointless if it can't handle everything. Ubuntu Software Centre handles things by letting you show or hide non-GUI items.

As for Nautilus, the reason that 16.04 shipped an old version is because at the time Gnome was in one of their "let's rip out all useful functionality" phases of "simplifying" things, and the latest version at the time was rubbish. If the pendulum has since swung back to making it useful again, then that's fine. I'll have to boot up the live DVD again and try it out.

I wish they had stuck with the old version of GEdit and Gnome Terminal for 16.04 by the way. The Gnome devs have ripped out the most useful GUI parts of the UI and forced everyone to memorise keyboard short cuts in order to "simplify" the UI to conform to the current Gnome group-think on UI design (which says that the way to make things easy to use is to simply not have any useful features).

As a correction, this is not the first Ubuntu release to not fit on a CD. It is in fact only marginally larger than 16.04, which also did not fit on a CD. What is more, 14.04 did not fit on a CD either (14.04.4 is 1.1GB, 16.04.1 is 1.5GB, 16.10 is 1.6GB). I'm not sure when that restriction was removed, but it was a while ago. The main reason for this was to accommodate more standard applications.

Oh, and as a tip for anyone looking to try out Unity 8 on a live DVD, the way to get to it is to boot up the DVD and then log out (using the gear symbol at the upper right). At the log-in screen, click on the Ubuntu logo beside the user name box, which causes an additional log-in option to appear. Select that, with user name of "ubuntu" and a blank password, and it will log into a Unity-8 screen. Explore the edges of the screen with your mouse to cause features to appear - this is the desktop equivalent to "swiping" the edges on a touch screen. Unity 8 is a mobile UI which has been adapted to the desktop. The interactions are not really "desktop-enough" in my opinion, which is I suspect one reason why they haven't made it standard yet. There are only a couple of applications available, as they have to recompile them to get the standard UI libraries to use Mir instead of X, and they didn't do that for all the apps (since it's just a demo).

All in all, it looks pretty good. I use Ubuntu with Unity 7 (16.04) on a daily basis, and I much prefer it to any other Linux distro or any other version of UI on Ubuntu that I've tried. I also prefer it to any version of MS Windows that I've tried. I don't have enough experience with a Mac to really do a detailed comparison there, although superficially I would say it's at least as good if not better than a Mac from an ease of use standpoint.

Bits of Google's dead Project Ara modular mobe live on in Linux 4.9

thames

Re: What other phones are modular and would need Greybus?

@Anonymous Coward - "why should drivers have to be committed into the main kernel source, by no less than Torvalds himself"

Torvalds didn't. Greg Kroah-Hartman did. Torvalds is the top manager, and Kroah-Hartman is the middle manager responsible for that area. If you want something like that in the Linux kernel, then you write it, convince Kroah-Hartman that it should go in, and Kroah-Hartman gets it approved by Torvalds. So long as Kroah-Hartman has a good record of success, Torvalds will give it a cursory glance and rubber stamp it. You know, just like any other really big software project?

As it happens, Kroah-Hartman also works for Google on that project (top kernel developers are in much demand by companies who can pay top salaries), so he has an inside track on approvals. He'll still be deep in the brown stuff with Torvalds though if he screws up on it.

It originated with Nokia by the way, and Google have extended it. Nokia originally created it to make it easy to integrate cameras from different suppliers.

Oh, and it's not a driver. It's a communications system that includes a lot of components which need to be controlled by the kernel. It offers low-latency, in-order message delivery, QoS, standardized device classes, etc. If they could have done it with a driver, they would have just done it with a driver. A lot of phone drivers never end up in the mainline kernel (they're "out of tree" drivers).

Saying that this could have been "added as a driver if only they did whatever" is like claiming that you can add USB to an operating system "as a driver" and get standardized USB mass storage, keyboards, mice, printers, cameras, etc., and be able to plug any of those into a device without having to hunt around on the Internet for a special driver for each one.

Guess what? Even Microsoft - the king of "you need another driver for that" (YNADFT) - clued in on that bit when it came to USB.

Sometimes I really have to shake my head at the people who still pine for the days when Windows needed a special OEM driver for every single sodding different USB flash drive.

BART barfs, racers crash, and other classic BSODs

thames

It's probably RS-485 (or RS-422) with a repeater (to give them more than 32 addresses).

It's possible that the problem is the system uses a different set of communications parameters from the default, but the sign lost its configuration (dead battery?) and someone will have to reconfigure it on site, which nobody has got around to yet.

RS-485 is popular in certain applications because you can run the cable a long way, especially if you reduce the baud rate.
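
For illustration, reconfiguring such a link from a PC with the third-party pyserial package might look like the sketch below; the port name, baud rate, and framing are guesses, not the sign system's actual settings:

```python
# Sketch only: open an RS-485 link via a USB serial adapter with
# explicit communications parameters. All values here are illustrative.
import serial  # pip install pyserial

link = serial.Serial(
    port="/dev/ttyUSB0",            # hypothetical RS-485 adapter
    baudrate=2400,                  # lower rates tolerate longer cable runs
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)
link.write(b"...")                  # message framing is device-specific
link.close()
```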

Industrial control kit hackable, warn researchers

thames

I just had a look at their web site to see what the product is. It's a bog standard remote I/O card. These sorts of products used to use RS-485 or proprietary media. Manufacturers have been switching to Ethernet in order to use standard chip sets, cables, connectors, and other hardware.

You don't put these things on the Internet. They're not that type of module. They're intended to be embedded in a machine (which can be a very large machine) on their own network. The reason they use a network connection is to reduce cabling. The "old" way of doing this would have been to run masses of individual wires from the valve or switch back to racks of I/O cards mounted in a central cabinet. That was expensive, labour intensive, and unreliable (try tracing a flaky connection or signal cross-talk from junction box to junction box some time - not fun). Then they went to proprietary networks, which were expensive, often unreliable, and poorly supported. Now you just run power and an Ethernet cable to the module. There's an embedded switch in each module so you can daisy-chain them, just like you would have with RS-485.

The web interface will be to let you configure the module for such things as address and a few other options. Of course if you have access to the network you can simply ignore the web interface and send standard industrial commands to it to do whatever you want with the I/O without needing any passwords. This is why I have to laugh at the drama in some of these types of stories. Security for these types of devices is supposed to be physical isolation. Don't hook them up to anything that isn't supposed to be able to talk to them. I very much doubt that most customers even bother to change the default passwords anyway. They're not the IoT.
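
To illustrate the point, here is a minimal sketch assuming the module speaks Modbus/TCP (my assumption for illustration; this particular product may use a different industrial protocol). Note that there is no password anywhere in the exchange:

```python
# Minimal sketch: drive an output on a remote I/O module over Modbus/TCP.
# The protocol itself has no authentication, so anyone with network
# access to the module can do this. Address is hypothetical.
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

client = ModbusTcpClient("192.168.10.50")    # hypothetical module address
if client.connect():
    client.write_coil(0, True)               # energise digital output 0
    client.close()
```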

For those who think this sort of thing is a big problem, then here's something for you to worry about. Did you know that you can plug a keyboard, mouse, and monitor into any desktop PC without any security authorisation at all? Astonishing, isn't it! Industrial I/O devices are a machine's equivalent to keyboards and monitors. If you decide to hook them up to the Internet, then it's up to you to provide the necessary security by some external means. Industrial I/O vendors are not in the security business and they shouldn't try to be. If you need security, go to a security specialist and add the security on as a separate firewall/filter/whatever box (there are companies that do this).

City of Moscow to ditch 600k Exchange and Outlook licences

thames

Re: It's ALL about the money... don't mention security!

@Buzzword - "What's the point in having secure software if all your hardware is built in the People's Republic of China?"

So are you saying that you don't bother with having any software security at all on anything you control?

I guess all the world's IT vendors and customers should just bin all their security staff, since in your view there's no point in having secure software. They can instead use the savings on something more important such as executive bonuses. Trebles all around!

Unimpressed with Ubuntu 16.10? Yakkety Yak... don't talk back

thames

Re: 16.04 Long Term Support

I intend to stick with 16.04 (with the mainstream Unity UI) until the next LTS. If you jump off the LTS track into the intermediate releases, then you either have to follow each subsequent release until the next LTS, or else somehow jump back to the previous LTS (not sure if you can do that without re-installing).

The changes to the Gnome derived "Software" program sound good, but I've been sticking with the original "Ubuntu Software Centre" anyway, which already does everything. The only thing which might tempt me to upgrade to a non-LTS is if I wanted to develop "Snap" packages and needed the new functionality.

I've been very happy with Ubuntu. It's been steady incremental progress since the transition to Unity. The change to System-d turned out to be a non-issue, as Ubuntu delayed touching that until everything had settled down. I haven't noticed any change other than that System-d seems to take significantly longer to boot than Upstart did.

Unity itself has turned out to be a very good UI (the best of all the ones available for Linux in my opinion, and better than any version of Windows) and I don't think it needs any changes at this time when being used as a desktop keyboard and mouse UI.

The major development work in Ubuntu at this time seems to be focused on server, especially anything to do with "cloud". With phones taking a greater share of the client side, and with Android so massively dominating phones, that is probably a reasonable direction to take.

RAF Reaper drone was involved in botched US Syria airstrike

thames

Re: "Non Enemy Troops"

@Voland's right hand - "I cannot be arsed to look it up now, there was a cute photo-essay in the Graunidad showing opposition weapons workshop where they were welding propane gas bottles to the solid fuel rocket engine off a Grad missile."

I suspect that you are mixing these up with "Hell Cannon". They are large high capacity bombs launched from home made mortars. There are loads of "hell cannon" videos on YouTube if you want to see them. The western allied FSA affiliated rebels fire them off in the general direction of "anywhere we don't control", with little idea of what is on the ground where they will land.

The shells seem to be typically made from large propane gas cylinders with fins welded on the back (so the fuse can operate when it lands), but the hell cannon themselves are all different, so there's no specific size or range they have. Accuracy of course is unimportant, since they don't usually have any clue of what is off in that direction other than that it's territory that they don't control. The crude aerodynamics of the shells would preclude accurate targeting anyway. They are sometimes used in the countryside against small villages, but most seem to be used in large cities such as Aleppo, because that's where the groups that use them are concentrated.

As for "barrel bombs" they have been around for decades. I think they got their name because the ones the Israelis used in one of their early wars were made from actual barrels. During the Balkan wars a couple of decades ago, they made them from water heaters. I believe the early Syrian government civil war ones were made from propane cylinders (like hell cannon shells), but the current ones seem to be constructed from scratch in factories. They're just normal (but cheap) bombs in that sense.

The Iraqi government reportedly chucks them all over IS controlled cities in Iraq, but of course that doesn't tick the right foreign policy guidance boxes so the media isn't inclined to report on it much.

Is it time to unplug frail OpenOffice's life support? Apache Project asked to mull it over

thames

Re: Two separate projects are a waste of resources

The only reason that Apache OpenOffice exists is because a few companies had an irrational aversion to any sort of copyleft license (e.g. GPL or MPL), and insisted on an Apache or MIT type license. Sun and then Oracle had been dual licensing it to companies like IBM who then sold proprietary derivatives of it.

Converting it to an Apache licence and handing it to the Apache Foundation was Oracle's exit plan to get out of the office suite business while still meeting their obligations to their proprietary licensees. Most other commercial contributors however saw no reason why they ought to be "donating" their time and money to companies like IBM who want to make proprietary derivatives rather than contributing back on an equal basis.

Apache OpenOffice has been a zombie project for a while. If they shut it down, Apache ought to give the trademark to LibreOffice instead of just abandoning it. If they simply abandon the trademark, scammers could scoop it up and use it to distribute malware. If the latter happens, it could damage the reputation of every other project associated with Apache.

Your wget is broken and should DIE, dev tells Microsoft

thames

Taking over the wget and curl names to provide something incompatible and far less capable was incredibly stupid. That in itself is a breaking change.

It's as if Microsoft kept substituting MS Paint every time you tried to run Adobe Photoshop, and then refused to stop doing it because now some people might be used to clicking on the Photoshop icon to run MS Paint.

The user response isn't a factor of open source. It's a factor of people having a communication platform to respond to problems which isn't controlled by Microsoft and can't be silenced by their PR people.

Microsoft has open-sourced PowerShell for Linux, Macs. Repeat, Microsoft has open-sourced PowerShell

thames

Re: "On Linux we’re just another shell"

@P. Lee - "Who's going to rely on that having a future? Unix people won't - they won't trust powershell to be on all unix systems."

Not just "won't" rely on it, but rather "can't" rely on it. Linux runs on a far greater range of hardware than MS Windows. As a result, core parts of the operating system have to work on hardware and in situations and under constraints that people at Microsoft have never heard of, let alone test or support. Therefore, PowerShell can't be a core dependency for most serious distros. It can only be a non-core optional package, which few people will bother to install.

@P. Lee - "MS control many of the apps running on Windows servers and they can powershell-ise them. What happens when you don't control the applications? "

It will only ever be of interest to companies, such as Microsoft, who already have "powershell-ised" applications that they wish to port to Linux from Windows. These will typically be proprietary "enterprise" applications. If you don't use that application, then there's no reason to install PowerShell.

In other words, don't think of PowerShell as something you would install for its own sake. Rather, it's just something that would get pulled in along with some other application.

thames

Re: Why?

Why? Because some of their big money products that they want to port to Linux are integrated with it. It's a dependency which they have to bring along with the stuff they do want.

I imagine that somewhere there's a Gantt chart showing what's required to get certain important products onto Linux instances in MS Azure cloud, and PowerShell is just one of the milestone dependencies.

I seriously doubt that they're doing it just because someone thought "wouldn't it be cool if PowerShell ran on Linux?" Somewhere there's a business plan, and this just happens to be one of the minor tick-boxes to make the plan work.

The "object oriented shell" thing has been done on Linux before, years ago. Nobody was interested because it just didn't solve a problem that anyone had. Bash did the simple shell stuff, and it was something that admins (as opposed to software developers) could work with.

For advanced scripting there was Perl, and later Python and Ruby, all of which were full fledged programming languages with good integration with the OS, and an absolute ton of libraries to build on as well. The big management systems such as Ansible, Salt, etc. are built on Python and Ruby.

Nobody in the Linux field is going to care the slightest about PowerShell, except as a dependency that will get installed along with some Microsoft "enterprise" product. And I don't think that the people inside Microsoft who are making these decisions seriously expect it to be any other way.

thames

Re: Why is ssh built in?

@Doctor Syntax - The reason that MS Windows resembles VMS so much is that the guy who was in charge of the project came from DEC. He wanted to create a "next generation" VMS, but the DEC board of directors weren't interested so he took his ideas to Microsoft, who were. As you can imagine, this ended up in court with MS having to open their wallet wide to compensate DEC in the end.

CP/M resembles TOPS-10 from DEC, because the developer was familiar with it. MS-DOS was in turn a deliberate 16 bit knock-off of CP/M created by one of Digital Research's (the owner of CP/M) source code licensees. This made porting application software from CP/M to MS-DOS much easier. MS-DOS was successful because it was much cheaper than CP/M-86 or UCSD P-System (the other operating systems which IBM offered with their PCs), and IBM had lots of third-party application ports from CP/M lined up at launch.

The reason that MS Windows servers sold so well was that they were initially sold to the bottom end of the market at a time when businesses were looking to add basic file and print server networking capabilities to lots of small offices and to offices attached to factories and retail operations. MS Windows was comparatively cheap, and it worked on cheap commodity hardware. These businesses already had MS-DOS and MS-Windows 95/98 desktops, and Windows NT Servers worked with them more or less out of the box.

At the time your options in that market space were either Novell Netware or SCO, or to go with one of the big to medium size hardware/software package vendors such as IBM, DEC, HP, etc.

At the time, businesses were trying to break away from vendor lock-in. Users saw vendor lock-in as deriving from hardware, and didn't realise that proprietary software could present an even more difficult lock-in. MS Windows ran on commodity hardware and it was cheaper than the other proprietary software vendors. BSD was around at the time, but the developers weren't interested in running it on anything other than a "real" (e.g. mini-computer or unix workstation) computer.

Microsoft planned to inherit the market share which belonged to the proprietary hardware/software combo vendors by offering proprietary software on cheap third party commodity hardware. The hardware vendors would be squeezed by competition into accepting thin margins while Microsoft hoovered up all the profits into their own pockets. Hence, this is why they threw their toys out of the pram when Linux came along and spoiled that business plan by offering cheap commodity software on top of cheap commodity hardware.

thames

Re: Why is ssh built in?

@Flocke Kroes - It is possible that there are architectural limitations to PowerShell which require building ssh in to do everything they want it to do. PowerShell sits in its own little PowerShell universe.

As an aside, I've been using the port of the actual ssh server (but not with PowerShell) on MS Windows in a software testing environment, and have to say that for what I need it has been hands down the best solution I've found. Everything else either wouldn't install (Windows 10 desktop), or needed massive farting around trying to configure the networking options to handle the fact that it was running in a VM. Ssh simply worked (or at least it did once I realised that MS's installation instructions were wrong).

I've been running MS Windows instances in VMs, and then using scp to upload source code and download test result files to and from the VM, and it's worked great. I hope they continue to maintain it.
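
For the curious, the workflow is nothing exotic. A minimal sketch, assuming a VM answering as "winvm" with a user "tester" (the host name, user, and paths here are all made up for illustration, and the exact remote path syntax depends on how the ssh server presents the Windows filesystem):

    scp -r ./src tester@winvm:C:/testwork/
    ssh tester@winvm "cd C:\testwork\src && run_tests.bat"
    scp -r tester@winvm:C:/testwork/results ./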

thames

Re: "On Linux we’re just another shell"

@gv - "Not sure I understand why anybody would use this in preference to sh, bash, dash, csh, ksh, zsh, et al."

I suspect it's not about getting people to use PowerShell in preference to bash. There's really nothing to attract Unix/Linux users there. PowerShell isn't really a "shell" in the way that Unix users are used to thinking of one, as it's far too verbose.
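
To illustrate what I mean by verbose, here's roughly the same one-liner in each, listing the first five log files in a directory (the PowerShell form can be shortened with aliases, but the long form is the documented style):

    ls *.log | head -n 5
    Get-ChildItem -Filter *.log | Select-Object -First 5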

Rather, I suspect that it's part of their plan for porting more of their bread and butter applications from Windows to Linux in order to run them as part of their "cloud" product. Supposedly, PowerShell has been knitted into their other server application product lines, which means that they now have a dependency on PowerShell. Porting those server applications to Linux without leaving too many holes requires porting their dependencies as well.

Microsoft is looking to the future, and the future isn't MS Windows.

VMware survives GPL breach case, but plaintiff promises appeal

thames

Re: Linux kernel copyrights

@DougS - There are two ways that GPL licensed projects tend to be run. One is copyright held by a single party, the other is distributed copyright (everyone retains individual copyrights). The latter is the more common one these days.

The FSF got into the game fairly early when "open source" was still considered to be a fairly novel idea, as was downloading software for free off the Internet. They also didn't have big corporate backing to finance lawsuits or a long list of favourable legal precedents, so they wanted to make any court cases as cut and dried as possible. Hellwig's current problem is exactly the sort of thing they were trying to avoid.

The FSF does not like to sue people. They simply want compliance with the terms of their software license. And we should remember that it's Hellwig, with his Linux kernel code, who is suing here, not the FSF.

The case for distributed copyright is that free/open source software has more or less won the argument, and nearly everyone is using it these days. As such, it's grown well beyond the FSF.

When copyright in a work is widely distributed and the license is GPL, it's not possible for a single party to take it proprietary. Thus companies are more willing to participate in large projects because they know they won't be screwed over by someone else.

When copyright is held by a single party or the license is not GPL, as an outside participant you have to "trust" whoever is controlling the project to not screw over the other parties whenever there is a change of management or control. While the FSF may be fairly trustworthy, how far do you trust Oracle with their ownership of MySQL (which they own all the copyrights to)?

With Android, how far would you trust Google to not take it completely proprietary if Linux used a BSD license instead of GPL? They're gradually tightening the screws on the Apache/BSD licensed bits of it. If I was a major phone manufacturer, I would much prefer GPL licensed code as then I would know that I always had an escape hatch if Google went totally "evil".

Generally, a GPL license with widely distributed copyright is "friendlier" to end users, small developers, and businesses who have ongoing support obligations. If you want to create a "community" around a software project it's still the best bet because it doesn't allow anyone to get into a position of privilege with respect to anyone else.

thames

It's how much does he own

My understanding of the case is that the problem is not whether VMware was distributing software in violation of the license terms. It was whether Hellwig could demonstrate that enough of the unlicensed code had copyrights held by him personally, as opposed to other software developers. Without demonstrating that a sufficient threshold of unlicensed copying of software owned by him was reached, under German law Hellwig does not have a sufficient complaint against VMware to proceed.

The court is not saying that the software in question wasn't Hellwig's. They're just saying that Hellwig hasn't shown to their satisfaction which of the code in question was his, versus which of the code in question belonged to other software developers who are not taking part in the lawsuit. Hellwig has to show that he has enough code in question to make a lawsuit worth the court's time.

Where Hellwig went wrong was to not have all his legal ducks in a row before launching the case. If he has a second go at it later, he may be better prepared. He has to present the right evidence at the right time, and he has to do it in the format which the court wants to see. It's a technicality, but that's what lawyers get paid for.

It's cases like this, by the way, which are why the FSF requires assignment of copyright to them for any projects which they own. That way they don't have any problems proving they have sufficient interest in the software when enforcing the copyrights.

With Linux kernel development on the other hand, the copyrights are distributed over a very large number of parties. That means that getting enough copyright holders together to agree on enforcing the license terms can be difficult.

US Air Force declares F-35 'combat-ready'

thames

Re: Perhaps

@MrXavia - "If only we had installed cats and traps on our carriers"

Those would be the new American cats and traps that have been even later and more over budget than the F-35 and still don't work?

The UK switched plans to use cats and traps for the new carriers, but then bailed out and switched plans back to ski-jumps and STOVL aircraft when they saw what a fiasco the new American cat and trap system was turning into. The F-35B is the lower risk option in this case.

F-35 targeting system laser will be 'almost impossible' to use in UK

thames

Re: None-story

@boltar - "We're not buying beta versions! Once they're paid for and flying over here there should be no more testing (other than pilot training) and debugging!"

Beta? They're barely frigging alpha! The Pentagon (under orders from Congress) has put a hold on more orders until the software is finished, and as a result LM is whining that they're losing money and can't afford to pay their suppliers. UK orders are also on hold because they're part of the same lots the US has put on hold.

The planes will fly, and the UK can use the ones they've got now for pilot and ground crew training, but the planes are not ready for fighting a war yet. UK plans are to have the planes ready when the first new carrier is ready, with sea trials finished and crew worked up. The latter hasn't happened yet, so it's not a big deal from a UK perspective. The RAF plans on using the F-35B as a Tornado replacement (bomber) which can also operate from carriers, while the Typhoon (Eurofighter) will remain in use for air defence and air superiority with a secondary role in dropping bombs. Similarly, the US will still have the F-22 for air defence and air superiority.

The countries that are feeling the pain on this one are the smaller ones that only have one type of plane and need to replace their entire air force, and whose existing planes are falling to bits from age. Denmark is a good example. They're looking at a gap between their F-16s falling apart from age and their new, not yet delivered, F-35s being ready for full service.

As for the question of the laser targeting system, I don't know the details, but I suspect it may be a safety issue until the targeting system has received full approval for service. At the moment they can't guarantee that it won't get confused when it sees someone else's laser and drop bombs on the wrong target, or that it won't point the laser in random directions and damage someone's eyesight. The Americans have big spaces to let things go wrong in, and the article mentions that the UK has a few similar ones as well.

Windows 10 pain: Reg man has 75 per cent upgrade failure rate

thames

Re: Linux system upgrade may not be much better

By coincidence, I upgraded from Ubuntu 14.04 to 16.04 yesterday evening. It took roughly 2 hours in total. It was just a case of clicking on the upgrade button when it was offered, typing in my password, and then letting it go on its own until it asked for permission to reboot (once) at the end. Everything worked fine without any problems.
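
For anyone who prefers a terminal, the same in-place upgrade can be started with a single command on Ubuntu (assuming the release upgrader package is installed, which it is by default):

    sudo do-release-upgrade

The graphical route I used is just a front end over the same process.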

Looking at the screen on my PC, there's nothing obvious to tell me that the upgrade has taken place other than that one of the icons in the launcher bar has changed, and Firefox now uses Gnome style scroll bars.

The PC is about 4 years old, and I built it from whatever inexpensive parts were available at the local computer shop.

P.S. My experience in the past with Linux has been that a fresh install is usually a lot faster than an in-place upgrade. However, with a fresh install I would have to re-install the extra packages that I want. An upgrade takes care of all that for me automatically.

Flame Canada, flame Canada ... Botched govt payroll computers spew smoke ahead of probe

thames

What will be even funnier is that the politicians who are responsible for this fiasco are now sitting on the opposition benches and will be harumphing about it at length when parliament resumes sitting.

What isn't funny is the situations that the employees face. Some of them haven't been paid in months. The situation is positively third world.

Plenty of fish in the C, IEEE finds in language popularity contest

thames

Re: D

"D ranks as 6th which is substantially higher than Ladder Logic"

Which, when you think about it, is utterly implausible. Almost every factory in existence runs on ladder logic, and most individual machines have unique custom ladder logic programs. That's a lot of ladder logic being written, and re-written, every single day around the world.

However, people who program industrial systems have their own forums and web sites, and don't generally frequent the same ones that the IT business does. It's two separate worlds, and seldom do the two meet (I know because I have a foot in both). The IEEE doesn't look at the ladder logic forums, and so doesn't see much of that world.

thames

Re: Haven't heard of R

If you haven't heard of it, it's because you have been avoiding the articles about "enterprise big data" (which is quite an understandable reaction). Everybody has been packaging R with whatever big data system they are flogging.

It's an open source statistical language, and a GNU project licensed under the GPL. It is replacing the various proprietary statistical packages which people had been using before.

I'm sure that it's a useful thing, but I suspect that the IEEE's ranking has been tilted by all the marketing bumpf being put out by the enterprise vendors (Oracle, Microsoft, etc.) who are rushing to support it recently. I seriously doubt that there's actually more people using R than are using PHP or Javascript.

thames

R? Go?

Tiobe puts R at 17, with a 1.5% ranking. Go is not even in the top 50, and is lumped in with languages with too small a market share to measure meaningfully.

A more "diverse" source of data such as the IEEE uses isn't necessarily a better one. Some of the sources which the IEEE uses are subject to being tilted by marketing drives, or by HR driven laundry list job ads or CVs, where whatever is "cool" this month gets spammed over the Internet whether it's relevant or not.

I find it extremely implausible to suggest that R is more commonly used or in more demand than PHP, Javascript, or C#. R has a very narrow use case, while the others I mentioned are very widely used. The IEEE ranking seems a bit lacking in usefulness.

HPE promises users Itanium server refresh next year. In Dutch!

thames

HP-UX on x86?

"HP-UX will let users keep alive apps they can't rewrite, running them in Linux-hosted containers"

It would be fascinating to know how Itanium applications will run on x86, even if they are running in containers. Are they counting on re-compiling from source, or are they doing something else? However they're doing it, it looks like HP wants to move their HP-UX customers to Linux before someone else does it for them.

An anniversary to remember: The world's only air-to-air nuke was fired on 19 July, 1957

thames

@Mark 85 - They were unguided rockets, not guided missiles. Air to air guided missiles were in their very early days when these were developed. I imagine they would be happy if one rocket took out one bomber, considering how much damage a single bomber could do.

Canada had them as well. The warheads technically remained the property of the US in order to get around non-proliferation treaty rules.

In the event of a war, the Soviet bombers would have come over the Arctic Ocean. There were three successive lines of radar stations to track them as they approached - the DEW, Mid-Canada, and Pine Tree Lines (going from north to south). The objective would have been to shoot the bombers down before they reached heavily populated areas.

China prototypes pre-exascale super trio with its own non-US chips

thames

Re: Foot, Point, shoot.

@Cuddles - "the other is Sunway which are supposedly based on DEC Alpha"

The idea that the Shenwei SW26010 was based on the Alpha was only speculation on internet forums. According to the co-founder of the Top 500 list (the HPC list of the top 500 super computers), from the information that he's seen, it isn't. It's a unique design of its own.

It is designed around very efficient and fast floating point performance, which is why it does so well in super-computing tasks. Whether it would make a good general purpose processor or be good for things such as web servers is something we don't know.

Linux letting go: 32-bit builds on the way out

thames

The 16/32/64 bit limits are due to Intel design decisions. You can switch between 32 and 16 bit modes, or between 64 and 32 bit modes, but you can't switch between 64 and 16 bit modes.

If you need to run old 16 bit software (e.g. MS-DOS programs), your best bet is to run it in an emulator.

And 64 bit Linux can run 32 bit binaries; you just have to make sure that the appropriate 32 bit libraries are added. It's done by enabling the appropriate architecture (multiarch), which then draws in all the 32 bit dependencies. I'm not sure if all distros can do this, but Ubuntu, and I believe Debian, can.
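
On Ubuntu (and Debian) enabling it looks roughly like this; which :i386 library packages you need beyond the C library depends on the application in question:

    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install libc6:i386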

Usually you don't have to worry about this on Linux, as 32 bit-only software is rather rare. Most Linux distros went 64 bit many years ago. MS Windows is so far behind in this area because of third party software.

thames

Third Party Software

The posting from the Ubuntu developer Dimitri Ledkov makes it clear what the problem is, and it's something the Reg story skipped:

"The key point here is lack of upstream software support and upstream security support on i386, rather than actual hardware being out of stock and/or old."

The issue is that a lot of the 3rd party software which Ubuntu and other distros ship are dropping support for, or never officially supported, x86-32. This includes things like Google Chrome, Docker, ZFS, etc. That means no security support from the software authors, as well as normal bugs and functionality fixes.

This means that full-size distros are being boxed into a corner as they can't realistically support software which the original third party supplier isn't interested in having run on x86-32.

Right now, there's no definite plan on what to do, as Ledkov (Ubuntu) is asking for suggestions after tossing out some rough ideas of his own. He's suggesting continuing support for older 32 bit applications out until April of 2023, so it's not like anything is going to get turned off tomorrow. However, these are the time scales you have to think on when you're running a commercial distro.

In other words, this is being driven more by third party software suppliers than by the distros themselves.

By the way, I thought that Ubuntu dropped 32 bit x86 server a while ago? They've just been keeping the 32 bit x86 desktop for people who have older hardware with limited RAM (I assume old laptops).

Lightning strikes: Britain's first F-35B supersonic fighter lands

thames

The carriers were supposed to get the new American electromagnetic catapult and arrestor system. However, that has turned out to be a complete fiasco so far, so it seems like Britain dodged a bullet on that one. The Americans will probably sort things out eventually, but the problems would have reduced Britain's new carriers to helicopter carriers only in the mean time.

However, that was luck. The decision actually came down to someone in the MoD doing the sums and finding out that, when you factor training and the rotation of pilots into the equation, the 'B' (STOVL) version was much cheaper for Britain to operate.

"Conventional" take-off and landing on a carrier takes constant practice for the pilots to retain qualifications, and it ties up a carrier while they do so. All of that cost loads of money in fuel. salaries, and equipment hours. Britain plans to operate the planes and pilots from a common "pool" with the RAF, to provide more flexibility and to ensure the carriers aren't dependent upon a very small pool of dedicated naval pilots.

The short "rolling" take off and landing (they won't actually use vertical take off or landing) capability in the F-35B is nearly automated, and is simpler than it was with a Harrier, and vastly simpler and easier to learn than catapult and arrestor hook equivalents. The pilots can rehearse this on land airfields (equipped with a ramp for this purpose), which means that the carrier can be on active operations rather than tied up in training maintaining pilot currency. The carriers can operate with 12 F-35Bs under normal circumstances, but "surge" to several dozen more as circumstances require, and all without having to maintain a dedicated pool of specialised naval pilots.

So overall, given the UK's particular situation, they decided to go with the solution that saved significant amounts of money, provided more operational flexibility, and didn't tie up a carrier as much with training. But the money saving was the big one.

P.S. - When reading about costs in the press, keep in mind that the MoD does full life cycle accounting these days, which includes fuel and salaries, which together can greatly exceed the sticker price of a plane. You have to dig to find out what those figures are, however, as the popular press often don't understand them and just publish a "big number" and let you assume that it's the sticker price.

Looking good, Gnome: Digesting the Delhi in our belly

thames

Gnome Software

Gnome Software has copied a lot of features from Ubuntu Software Centre, but it still has a long way to go before it's a full replacement for it. When I tested it a couple of months ago it still couldn't handle more than a small subset of the available packages.

To handle the majority of packages you had to install another GUI package manager such as Software Centre or Synaptic, or apt-get the package from the command line (assuming you know the name). And if you have to do that anyway, then why bother with Gnome Software?
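
Not that the command line route is much of a hardship. It's only two steps, e.g. (the package name here is just an example):

    apt-cache search "image editor"
    sudo apt-get install some-image-editor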

People wonder why Ubuntu went off in their own direction with Unity. The reason was simple. Gnome was going off on a decade-long wander into the realms of fiddling and experimentation, and Ubuntu saw what a train wreck that was going to be and hopped off at the next station. Whether or not you happen to like where Gnome 3 is going, it's pretty hard to deny that the way the Gnome developers went about that trip was appallingly bad.

Of course Gnome is controlled by Red Hat employees, and Red Hat has only a marginal interest in the desktop. For Canonical their Ubuntu desktop is a core strength and they weren't willing to risk that, hence their need for a "plan B".

Linux devs open up universal Ubuntu Snap packages to other distros

thames

Snap basically bundles all the dependencies in with each app, instead of sharing common files. This results in much larger package sizes than deb or rpm. The main beneficiaries will be proprietary applications, and other projects that don't want to follow each distro's release schedule. Another advantage is that Snap adds more sandboxing, which may help more exposed things like web browsers, and it also limits what proprietary apps can get access to. Much of the work on Snap came out of the work that Canonical has done with Ubuntu Phone packaging.
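
From the user's point of view the mechanics are straightforward. A hypothetical example (the package name is made up, and on distros other than Ubuntu the snapd tooling is only just arriving):

    sudo snap install some-app
    snap list
    sudo snap refresh some-app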

Generally, it's a good idea to have this as an option. Deb and rpm will still be the standard way of distributing most distro packages. I can't honestly see much advantage for things like LibreOffice. I've no real desire to get the very latest bleeding edge version, so I can wait for the normal distro upgrade schedule for that.

Microsoft buys LinkedIn for the price of 36 Instagrams

thames

So they're going to data mine the information from your CV, your office software, and your ERP system, and use the result to sell you stuff? I'm not sure I would be looking forward to that.

Leaving the creepy data mining aside though, this sort of acquisition is probably the future direction of Microsoft. Their older products such as Windows are becoming legacy platforms which will fade away as they are undercut by open source commodities.

However, that's a lot of money to pay for a company that Reuters reported was losing money (net loss), not hitting growth targets, and having a tough time outside of the American market. If it was for a fraction of the price, it might make more sense. However, the new tech bubble has inflated the cost of anything "cloud". Microsoft might have been better off keeping their money in the bank and picking up companies cheap after the next tech market collapse. That's what the big oil companies are doing now in the oil industry.