* Posts by thames

1124 publicly visible posts • joined 4 Sep 2014

Per-core licences coming to Windows Server and System Center 2016

thames

Oh Joy!

According to the tweets in the story, licensing is based on the underlying hardware, not on the number of virtual CPUs assigned to the VM. So if you need to add a Windows VM instance to run a specific program, then you have to be careful about what hardware the VM is run on if you don't want to get boned by licensing issues.

Run a trivial Windows program that needs only one core on a server with 64 cores, and you still either end up having to pay for 64 core Windows licensing or else risk being labelled "pirates". I can see customers jumping for joy over this one.

Now you can tailor Swift – Apple open-sources the whole shebang

thames

@Dave 126 - ""Apple made Webkit, ResearchKit and Darwin OS open source"

I can't say that I've heard of ResearchKit before, but the actual authors of KHTML, Mach, and BSD will be really surprised to hear this, seeing as none of them worked for Apple. Oh, you mean that Apple "innovated" new marketing names for them; yes, they're very good at that.

Webkit is a re-brand of KHTML, which was part of the KDE desktop (and the Konqueror web browser), which is one of the major desktops used with Linux and BSD. It's open source because the LGPL license used by KDE says it has to be. Yes, Apple developers have contributed to it, but so have many others, and Apple got involved after it had been around and in use for a few years. I was using KHTML ("Webkit") long before Apple adopted it as the core of Safari. When Apple decided they needed their own web browser, they picked KHTML over GtkHTML (the Gnome equivalent) due to better functionality, and over Gecko (Firefox) because the latter couldn't be easily separated from the rest of the browser (KDE was designed around reusable components).

Darwin OS is a mix of Mach and BSD, both of which were originally open source and did not come from Apple (they were mainly university centred projects). Yes Apple has added more stuff to them, but they did not originate them.

A lot of the open source stuff used by Apple has a similar history. They don't generally originate very much that is used by anyone else. They more typically just re-brand an existing successful project, and the trade press regurgitates the awesome "innovation" right from the press releases.

I'm trying to think of anything which is significant, open source, widely used by everyone else, and that Apple originated, and I can't think of anything off hand. Most of the successful software that Apple actually originates starts out as and remains proprietary. Whether or not Swift will be widely used outside of writing apps for Apple kit remains to be seen.

PHP 7.0 arrives, so go forth and upgrade if you dare

thames

Re: Not backwards compatible...

@Whitter - "Honest question: is PHP really the best option if one is prepared recode?"

I prefer Python, but if you've got a large existing code base in PHP you're probably going to find it's much easier and faster to port from PHP 5 to PHP 7 than to rewrite the whole application.

If you do decide to rewrite, you will find that for something that works in the way you are used to, your options are limited to Perl, Python, Ruby, and NodeJS (Javascript). My own choice would be Python for its wide support, large selection of frameworks, and great flexibility.

Not a lot of new stuff seems to be getting written in Perl these days, Ruby is mainly used for Rails, and it's hard to say if Rails really suits your needs (Rails popularity seems to be declining due to changes in the way people are designing stuff these days). NodeJS is a runtime environment rather than a language (it uses Javascript), and while it is "hot" at the moment, it's hard to say whether NodeJS works the way you need it to work. What I can say about Python is that whatever it is you need to do, you can be sure that you can do it with Python without having to force fit your application into one specific framework.

If you do a complete rewrite, I would strongly recommend writing complete tests as part of the development process. I bet there are a lot of PHP 5 web developers who are going to wish they had when it comes time to port to PHP 7.

thames

Re: Not backwards compatible can cause a lot of problems

@DrXym - "Python 3 has been out for 7 years now and the majority if scripts will still only run on 2.7. Why?"

Because 2.7 is still supported as a long term support release (until 2020), and loads of people have working software they have no desire to change. The success of Python has meant that a lot of users have large legacy code bases.

The biggest thing keeping 2.7 around has been that most Linux distros install it by default because they use it for their administration software. You could normally count on it being present without having to install a version of Python yourself. For example, with my own Python software I used to say that I supported version 'x' of Ubuntu, Debian, etc., and then develop and test assuming the default version of Python.

However, most of the major distros are working on making 3.x their default version for the distro's own use (e.g. administration software), and will be making 2.7 an optional download only soon. That will give developers less incentive to keep providing backwards compatibility for 2.x in future. Almost all the major third party libraries support 3.x, the exceptions mainly being ones which are no longer supported by their developers.

Anything new that I'm doing now requires 3.4 or newer as there is no point in creating something for 2.x now given how it is being phased out. A lot of other developers seem to be doing the same.
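
As an aside, if you're writing for 3.x only, it's worth failing fast on 2.x rather than dying with a confusing syntax error somewhere deep in the code. A minimal sketch of a guard (adjust the version tuple to whatever your own floor is):

    import sys

    # Refuse to run on anything older than the version we target.
    if sys.version_info < (3, 4):
        raise SystemExit("This program requires Python 3.4 or newer")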

The main problems people have with moving from 2.x to 3.x seem to be unicode related. I suspect that PHP 7 users will find similar issues. With 3.x, strings are now all unicode, rather than switching back and forth on the fly between unicode and ASCII like 2.x does. That has exposed a lot of latent unicode related bugs in existing application code, as well as problems where people were abusing strings to handle binary data (I'll admit to being guilty of the latter).
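
To illustrate the sort of thing I mean (a made-up minimal example, not from any real code base):

    # Python 3 keeps text and binary data as distinct types.
    text = "naïve"                  # str: unicode text
    data = text.encode("utf-8")     # bytes: binary data

    # Python 2 would silently coerce between the two. Python 3 refuses,
    # which is exactly how the latent bugs get flushed out:
    try:
        combined = text + data      # TypeError in Python 3
    except TypeError:
        combined = text + data.decode("utf-8")   # be explicit instead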

Opinion on the unicode switch seems to be split between the "why can't everyone just speak English" crowd who think foreign languages are a load of nonsense, and people who actually speak those languages who tend to be very strongly in favour of the change. Again, I suspect that PHP users will be going through the same process.

As for comments by people on white space and indenting: yes, neatly and consistently indented code in Python isn't just a good idea, it's mandatory. Developers whose self image derives from creative use of curly braces just have to get used to the notion that there's one official way of formatting code and Python enforces it mercilessly. The nice result of this is that the Python parser reads the code the same way that I do (by indentation), eliminating bugs caused by curly brace versus indent inconsistency.

I suspect that the transition from PHP5 to PHP7 will take years, largely due to hosting providers being slow to provide it and due to legacy code issues. As a result, PHP5 will probably be around for years to come. That however is the price of success. If everybody uses your code, that tends to create a great deal of inertia. It's why people are still running COBOL programs that were written before they were born.

Iran – yup, Iran – to the rescue to tackle Internet of Things security woes

thames

"Appropriate regulation" doesn't necessarily mean designing the protocols used. It can mean things like governments taking responsibility to ensure that companies meet certain minimum levels of security and accountability, just like they do with things like say automobiles or water heaters.

You can't prevent all mistakes from happening, and you can't even prevent all malicious or dishonest action. You can however ensure that the bad actors face the consequences and don't hide behind "terms of service agreements" and an army of lawyers.

If Facebook loses your cat pictures, well quite frankly I don't care, because Facebook really doesn't matter. If however my thermostat and every thermostat in the country don't work unless they're connected to a server in bongo-bongo land (or California - same thing really), and that server shuts down, and now my house freezes while I stand in line behind 10 million other people waiting for a replacement, then I'm going to care a lot. People should be able to buy critical stuff and know that it is safe to use without having to analyse the technology behind it.

Entropy drought hits Raspberry Pi harvests, weakens SSH security

thames

Re: Hardware not a solution.

@Anonymous Coward - "Can't hardware RNGs be physically inspected and run through assorted tests?"

How are you going to physically inspect the hardware RNG built into the CPU in your PC or server? Anyone can download the source for Linux and inspect it, but peeling the package off a sample of CPUs and reverse engineering what's inside is a huge task and not really practical.

You also can't really test if the numbers are truly random. You might think you could just take a sample of numbers from them and test them for randomness, but according to the mathematicians who know this sort of thing, it won't work. They might look random, but they could be slightly biased in some non-obvious way which an attacker who knew the algorithm could exploit. Think for example if the NSA had Intel put a cryptographic back door in each Intel CPU. They wouldn't make the output easily predictable. Instead, they would make it weaker in a way known to them which would allow them to break keys with less effort. That is, they would effectively have part of the key and just have to use brute force to crack the rest.

This is why the Linux kernel developers have said they cannot rely completely on the hardware RNG. They mix hardware and software methods together. Software methods can be audited for security, while hardware RNGs can produce new random bits faster. If you mix the two together, you get what amounts to the best of both.
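
To illustrate the principle (this is only the general idea, not the kernel's actual mixing algorithm, and /dev/hwrng is an assumption that only holds on Linux boxes with a hardware RNG driver loaded):

    import hashlib
    import os

    def mixed_bytes(n, hw_path="/dev/hwrng"):
        """Blend kernel CSPRNG output with hardware RNG output.

        Hashing the two sources together means a biased or back-doored
        source can't drag the result below the strength of the other one.
        """
        sw = os.urandom(n)                   # software (kernel) source
        try:
            with open(hw_path, "rb") as f:
                hw = f.read(n)               # hardware source, if present
        except OSError:
            hw = b""                         # fall back to software only
        out = b""
        counter = 0
        while len(out) < n:
            block = hashlib.sha256(sw + hw + counter.to_bytes(4, "big"))
            out += block.digest()
            counter += 1
        return out[:n]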

Some people have built hardware RNGs which use basic physics and which plug into a USB port. Those are probably as good as you can get. As for the ones built into CPUs, even if they don't have a deliberate back door in them, nobody really knows if they are free of design defects which weaken the randomness of the output.

thames

Re: Another Debian fail?

It has to do with the way they used it, not something they inherited from Debian.

thames

Hardware not a solution.

Using a hardware random number generator isn't an answer, because nobody knows if those are cryptographically secure. If they aren't, then you can't fix them with a software patch. You also don't know if someone has nobbled it with a cryptographic back door. In other words, from a security standpoint, they're no improvement over the software methods and may possibly be much worse. Current thinking in the Linux kernel group is to blend hardware and software methods together. Software methods can be audited for security, while hardware methods can produce new random bits faster.

The specific problem here is related to the way that Raspbian is installed. It is however a much wider issue than that. It's a matter of fundamental mathematics that affects all operating systems under the right circumstances. If you are not careful, you can suck the "pool" of random numbers dry by using them faster than they are generated. This is something that people doing server programming, building embedded systems, or running anything in a VM have to be particularly careful of, but which most developers have never heard of.

Think for example of all those VM images you are running as part of your server consolidation project. Are the "random numbers" they are getting random enough, or are you getting something equivalent to what is happening in the story, leaving you with weak cryptographic keys? It's something to think about.
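
On Linux you can at least look at the kernel's own running estimate of the pool, which is worth doing on any VM that generates keys (a quick sketch; the proc file below is Linux-specific):

    # The kernel's estimate of available entropy, in bits. Values that
    # sit near zero on a freshly booted VM are a warning sign.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy_avail:", f.read().strip(), "bits")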

Windows 10 market share growth rate flattens again

thames

Looking at the data.

I had a look at the data, and here are the numbers in a nice summary table sorted by rank:

Windows 56.76%

iOS 16.50%

Android 15.08%

Macintosh 9.08%

Chrome OS 0.96%

Linux 0.72%

Windows Phone 0.44%

(not set) 0.29%

BlackBerry 0.14%

Nokia 0.01%

Samsung 0.01%

SymbianOS 0.00%

Xbox 0.00%

It's fascinating that even on as stodgy a set of web sites as we would assume these to be, Windows is barely over half.

Here's some other ways of looking at the data:

Desktop versus mobile

Desktops 67.53%

Mobile 32.18%

By vendor

Microsoft 57.20%

Apple 25.58%

Linux kernel 16.76%

Microsoft's influence has shrunk hugely, while for everyone else their increased penetration has been mainly via phones and tablets. It really shows that people who still think mainly in terms of the desktop are looking at a vastly shrunken part of the picture.

Mozilla: Five... Four... Three... Two... One... Thunderbirds are – gone

thames

Alternatives

I've been using Claws Mail on Ubuntu for years now. Before that I used KMail, Thunderbird, and various others. I'm very happy with Claws and I much prefer it to Thunderbird because it's so much simpler and much, much faster. I am using it strictly for POP/SMTP mail but it does IMAP as well (although I haven't tried that feature).

It has shed loads of plugins, so check those if you want things like PGP integration or spam filtering.

It's a fairly traditional three-pane GUI, but I like that in a mail client. It's fast, simple, and lets me get through my mail quickly. There is no feature that I want that it doesn't have. It focuses on one thing - mail - and does it well. It doesn't change much from version to version, and quite frankly I like that as well. It uses the standard MH mailbox format, so it is compatible with a lot of other standard mail handling tools.

Overall, I highly recommend it.

Google to end updates, security bug fixes for Chrome on 32-bit Linux

thames

Re: I wonder...

With Ubuntu, Chromium is what is in the repositories so that's what you see in "Software Centre". If you want the closed source "Chrome", you have to download it directly from Google. A lot of people simply won't touch anything that isn't in the official repos.

If you go to download Ubuntu, they push the 64 bit version of the OS as the default. You can still get 32 bit Ubuntu desktop, but it's recommended only for machines with less than 2G of RAM.

I don't think they even make a 32 bit version of Ubuntu Server any more, not that you would put a browser on a server. People went to 64 bits on Linux desktops and servers long before it was common for Windows.

With Windows though, a lot of users are stuck with 32 bit because of compatibility problems with proprietary software. That just isn't a significant problem on Linux because almost all the open source software was ported to 64 bits years ago.

I can't really blame Google for phasing it out. It's a minority desktop (Linux) on a minority architecture (32 bit x86) with a minority version (Chrome versus Chromium). I can't imagine they were getting too many downloads.

Visual Studio Code: The top five features

thames

Just Tried It

I just downloaded MS Visual Studio Code, and my overall impression of it is that it's a bit shit. It has a very limited feature set, so it can't really compete with something like Geany on features. I would put it as being closer to something like Gedit, but perhaps not quite as easy to use.

I tried editing a couple of existing files from a project that I've been working on. It seemed to know how to handle 'C' syntax OK, although I just had a brief play with it so there may be some problems there which would turn up with more extensive use.

Its attempt to work with Python though was a massive fail. It couldn't even do automatic indenting with a simple 'if' statement. It's definitely not up to snuff in that area. Given this experience with a very widely used language, I would take their syntax support for many other languages with a very large grain of salt. In other words, you would have to try it out in each case before assuming it will work.

There aren't really a lot of editing features. I opened up source code files to see if any more hidden features would appear, but no, it just doesn't have any. I saw no obvious sign of code-folding or any other of the sort of features you would expect in something like this, just very basic text editing with some syntax colourising and auto-completion, and some very shaky partial auto-indenting (e.g. see the Python example mentioned above).

In terms of overall looks, the side bar takes up an excessive amount of screen space, so I would consider that to be poor UI design. I don't personally like the black-on-black default look as I find it to be too dark and depressing. I suppose that might reflect how the average Microsoft employee feels about life (and no doubt I would too if I worked there), but I would prefer something a bit less gloomy. That is a personal preference, I will admit.

I tried it running on Ubuntu 14.04. There is no deb package or any sort of installer to add it to the menus or launcher. It's just a zip file (not even tar/gzip) that you unpack into a directory as a self-contained binary. To be honest, I'm not sure I would have installed it if it had a proper package, as God knows what it would do or might screw up. The menus integrated with the Ubuntu top bar, but I suspect that is more due to Ubuntu's success at intercepting menus rather than anything Microsoft might have done.

Overall, I see nothing to recommend it. If you just want a very basic editor, something like Gedit or the equivalent will be simpler while also being less intrusive. Even Gedit will give you syntax highlighting. If you want something with loads of text editing features and even better language syntax support, something like Geany beats MS Visual Studio Code hands down while still being very fast and small.

I give it a thumbs down. It's not overly bad, but it just doesn't have anything to recommend it over much better equivalents which can be found elsewhere, including free ones.

thames

Re: A pleasant (very) lightweight IDE

I've been using Geany for a while now, and I quite like it. The screen shots on that web site are quite old though, and the actual appearance depends on the theme applied (the current one in Ubuntu 14.04 looks quite different).

Geany and no doubt many others sit somewhere in between a traditional editor and an IDE. You could probably say the same thing about EMACS though once you add the sort of extensions which many people favour. They offer a certain degree of work flow automation without the massive size and slowness of IDEs that try (and always seem to fail) to do everything. I only use a fraction of the features in Geany as it is.

I know that some people like automatic code completion (Microsoft's brand name for it being IntelliSense), and Geany has this. However, I turn it off because I find this sort of thing to be extremely annoying. I'm a pretty good typist, and I can normally type something out in full far faster than my brain can take in and select the options being presented by the code completion system. As such, code completion just slows me down. On the other hand, I find syntax aware auto-indenting to be quite handy, and Geany does that to my satisfaction.

I can say that Geany has syntax support for a much wider selection of languages than Visual Studio Code does at this time. There are dozens of them, covering everything from Ada to VHDL. It also supports data formats such as HTML, XML, reStructuredText, LaTeX, YAML, etc., etc.

This is a field which is already really well covered on Linux, and I suspect Apple Mac as well. I'm not all that sure just what niche Microsoft is trying to fill here, unless it is to try to keep their brand name in front of developers who are switching away from Windows to other platforms. It's a pretty competitive field though, and users expect fast start up and response.

I just downloaded MS Visual Studio Code and had a look at the licence. It includes such interesting licence terms as:

It has a built-in timed kill-switch to stop it running at the end of next year.

"The software will stop running on 31/12/2016 (day/month/year). You will not receive any other notice. You may not be able to access data used with the software when it stops running."

It does automatic personal data slurping (such a favourite of marketers these days):

"The software may collect information about you and your use of the software, and send that to Microsoft."

OpenBSD's native hypervisor emerges

thames

Re: what's stopping me

@Nate Amsden - "I have no interest in KVM or Xen either, vmware has worked very well for me"

That sort of argument leads nowhere as I'm sure that lots of people could say that they have no interest in VMWare etc., etc. So long as OpenBSD users are interested in their new hypervisor, it's relevant to them.

If you look at really large scale deployments, these types of systems are not generally running VMWare. VMWare has its spot in the market, but the market is changing in a direction which is not favourable to VMWare.

The big cloud operators (including companies running services on their own hardware) don't generally run VMWare, and the hyper-converged system vendors often like to have their own system underneath as well, rather than just slap VMWare's stuff in a box. If OpenBSD can find a place in that market, then they've got a pretty interesting target to aim at.

thames

Re: what's stopping me

@1980s_coder - I suspect that the OpenBSD hypervisor is aimed at users operating on a much larger scale than the typical VMWare customer. Large hosting and "cloud" operators are the ones who are operating at really large scale these days, not the average "enterprise" data centre.

The management tools are what will really matter, and so the OpenBSD hypervisor will need to integrate with the existing open source orchestration and management tools which are intended for operations which have grown far beyond the abilities of "point and click" tools to cope with complexity. When you hear those announcements, then you'll know that they are getting to the stage where it's ready for commercial use. Until then, it's probably more suited for playing around with. Feel free to fill your boots though, I'm sure they would like to hear about bugs relating to operation on different hardware.

Why Microsoft's .NET Core is the future of its development platform

thames

Re: .NET Native?

I suspect the main problem was resolving the library dependencies without hauling in hundreds of megabytes of unused Dotnet stuff. That goal is helped somewhat by simply not supporting an awful lot of Dotnet and then emulating half of Windows underneath.

thames

Re: .NET Core is not yet ready for prime time

I suspect that Dotnet Core will never replace the full (legacy?) Dotnet. It may get used in some cloud deployments that were written from scratch to use it, but most of the existing business applications will either have existing dependencies (often through third party code) that require the full version, or the application developers won't be willing to foreclose on the option of using stuff that requires the full version in future releases, and so will say that Dotnet Core is "not supported", even if it does work.

I think the main thing that Microsoft is looking for is to have a cut down Dotnet running on Ubuntu, Centos, etc., that they can use as a low priced option for specific workloads on MS Azure to compete with Amazon on price.

I'm not sure how well this will work. Mono was a huge flop on Linux servers, with there being little to no interest in C# or Mono amongst Linux/BSD developers, aside from some demo projects (now largely abandoned). I can't see why this would be any different for Dotnet Core.

On Linux, there is a Java community who do everything in Java and there is a "native" Linux community who do everything in Python, Ruby, NodeJS, Go, C++, Rust, etc. The Java community use Linux, but they largely sit in their own isolated Java world. They will still go on using Java until the Sun expands to engulf the earth. The "native" Linux community by and large have zero interest in either Java or C#. They are not going to start using either.

So as for who will want to use Dotnet Core, that basically leaves traditional Microsoft developers who want to start dipping their toes into the open source world, or who are looking to shave some of the cost off their cloud hosting. In other words, it can be looked at as giving Microsoft a lower tier offering for price sensitive markets that they can use to try to persuade developers to stay with them by offering development tools that are at least somewhat familiar to them.

Microsoft makes Raspberry Pi its preferred IoT dev board

thames

Re: Self-foot-shooting again, Microsoft?

Seeing that outside of these pages I seem to hear nothing about Microsoft's IoT Pi OS, and even less about the Intel Galileo, I suspect that the number of people who will be affected by this can be counted on the fingers of one hand.

Mixing ERP and production systems: Oil industry at risk, say infosec bods

thames

And?

SAP and OPC are full of security holes. Who would have guessed?

I can't offer advice about SAP, but on the PLC/RTU/meter side the biggest problem is the huge dog's breakfast of proprietary protocols, which OPC tries to paper over.

Someone (Tofino) does a firewall appliance for Modbus/TCP which can let you control which registers and commands you want to allow through. Stick it in your control panel next to the PLC, configure it, and you're done. Someone did an open source Modbus/TCP firewall which I think is based on the standard Linux packet filtering (I think Tofino just packages that up and adds a front end). That pretty much deals with the issue of accessing arbitrary memory addresses.
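
The reason this is practical for Modbus/TCP in particular is that the protocol is simple enough to parse the function code and starting register address out of every request. A toy filtering proxy to show the idea (a sketch only - the allowed function codes, address range, and PLC address are invented for illustration, and real code would need proper framing and error handling):

    import socket
    import struct

    ALLOWED_FUNCS = {3, 4}            # read holding / input registers only
    ALLOWED_ADDRS = range(0, 100)     # permitted starting register addresses
    PLC_ADDR = ("192.168.1.10", 502)  # the device being protected (example)

    def request_ok(pdu):
        """Check a Modbus PDU: one function code byte, then data."""
        if len(pdu) < 3 or pdu[0] not in ALLOWED_FUNCS:
            return False
        start_addr = struct.unpack(">H", pdu[1:3])[0]
        return start_addr in ALLOWED_ADDRS

    def handle(client):
        plc = socket.create_connection(PLC_ADDR)
        while True:
            header = client.recv(7)        # MBAP header: txn, proto, len, unit
            if len(header) < 7:
                break
            length = struct.unpack(">H", header[4:6])[0]
            pdu = client.recv(length - 1)  # length field counts the unit id byte
            if not request_ok(pdu):
                break                      # drop the connection on any violation
            plc.sendall(header + pdu)
            client.sendall(plc.recv(260))  # relay the reply (max Modbus ADU size)
        plc.close()
        client.close()

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 502))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        handle(conn)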

That is only for Modbus/TCP though, as it's an open protocol. You're pretty much screwed so far as the proprietary protocols are concerned, since the industrial control vendors have no clue about security, and third parties can't come and play in their proprietary protocol playpens since the proprietary aspect is there for vendor lock-in and no other reason.

Chinese sat-snaps to help boffins forecast Antarctic sea ice

thames

Northern Hemisphere Ice Forecasts

Canada already does sea ice forecasts for Canadian waters, and releases them on the normal Environment Canada web site. It's required for routine navigation on the east and northern coasts, Hudson Bay, and the Great Lakes (the Pacific coast is warm and doesn't freeze).

If there is anything new being reported in this story, it's that apparently nobody is currently doing the same for the Antarctic, or at least perhaps not to the same extent. However, there will be plenty of existing scientific work from northern latitudes which can be applied to the Antarctic.

It's a commendable effort, but it's not ground breaking science, so I'm not sure what they mean by "to develop scientific methods for forecasting sea ice conditions". The scientific methods already exist, the problems will more likely revolve around applying the Chinese observations to the local conditions to produce useful ice forecasts.

Cops gain access to phone location data

thames

I find the battery lasts a lot longer if I leave the phone turned off anyway.

Untamed pledge() aims to improve OpenBSD security

thames

Re: Stupid idea

@DougS - "This pledge() thing sounds like combination of MAC and assert() as far I can tell"

How do you use MAC and assert() to let a program drop privileges dynamically as soon as it no longer needs them?

thames

A Simple Idea

I haven't followed the latest developments in this, but in the earlier discussion this wasn't intended as a universal cure-all. Rather, it was a simple solution for programs that had simple use cases. If it can't solve every program's situation, that's not a strike against it, because that wasn't in the plan.

OpenBSD itself ships a lot of software as part of the standard distribution, including a lot of things such as their version of standard unix utilities. Being able to harden potentially hundreds of commonly used packages with just a few lines of patch to each one is an extremely attractive goal. Even if third party developers don't take it up, it is felt that it will give OpenBSD itself a nice boost in security with much less work than implementing something like SELinux.

The basic idea behind it is that many programs need certain privileges when they start up that they don't need when running. If they can drop those early on, then they can greatly reduce the ability to use a bug to create a useful exploit, such as a privilege escalation. If for example netcat doesn't need to write to files (I don't know if it does or not, but let's use that as an example), then by dropping that privilege you make it impossible to exploit say a buffer overflow in netcat to overwrite a file which would let an attacker escalate a multi-step attack to the next stage.

By making this part of the program itself rather than an externally imposed "policy", it is possible to drop more privileges than would otherwise be the case. This is because the program can often arrange to do certain things during initialisation, such as reading configuration files or opening ports, that it doesn't need to do once it is running. I don't think this sort of dynamic dropping of privilege can be emulated with a static policy. And of course since this is baked in, it can't be turned off accidentally via misconfiguration. It also doesn't stop you from adding external static policy based security on top of it as well if you want to (and if it is available). This sort of "do something privileged to start up and then drop to a lower privilege state" has been pretty standard unix philosophy for a long time. This just makes that dance more widely applicable.
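
To make the shape of that dance concrete: pledge(2) is a C call and would normally be used from C, but a rough sketch of the pattern via Python's ctypes looks like this (it assumes an OpenBSD libc, so it won't run anywhere else):

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    # --- privileged start-up phase ---
    # read configuration files, bind low-numbered ports, etc. here

    # --- then drop to the minimum needed for the steady state ---
    # "stdio rpath" promises basic I/O plus read-only file access.
    # Anything outside the promises (writing files, new sockets, exec)
    # now kills the process instead of succeeding.
    if libc.pledge(b"stdio rpath", None) == -1:
        raise OSError(ctypes.get_errno(), "pledge failed")

    # main loop runs here with the reduced privilege set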

Some people don't like it because it isn't the answer to all problems. However, the OpenBSD developers feel that it is a nice and simple answer to a lot of problems that can be implemented without a huge effort.

I'd like to see how it works out for them, and if successful, possibly adopted elsewhere, such as in Linux.

TPP: 'Scary' US-Pacific trade deal published – you're going to freak out when you read it

thames

Eh?

El Reg - "commentators took incomplete text and fretted that the TPP would still override Canadian law. In the end, the final text shows that it doesn't."

The reason the final text says it doesn't is because when the public pointed this out, the treaty was changed. There are expected to be many more such problems which will only be discovered later. This is the problem with negotiating these things in secret. The people who have the relevant expertise are not consulted, and so huge mistakes are made. The original clause as it was written would not have stood up to constitutional challenge in Canada, and the same is likely true for many other countries as well.

El Reg - 'But, desperate to find something wrong with it, it has been conflated to the fear that your personal data "may be put at risk."'

The federal government in Canada has neither the legal nor the constitutional authority to negotiate this; these powers largely reside with the provinces. If the provinces say "no", then that treaty provision is meaningless. The provinces were not consulted, and they therefore have no obligation to agree to anything.

El Reg - "The Canadian government is also being sued by energy company Lone Pine Resources because it banned fracking as a way to protect its water. "

The Canadian government did not ban fracking. Fracking is currently happening in Canada. However, again, that is a provincial matter. Some provinces allow it, and some don't. The federal government has zero say in the matter. Lone Pine is exactly the sort of example that undermines your argument. It's a Canadian company that re-incorporated in Delaware - a US state noted for its corrupt civil legal system - to attempt to win in trade "arbitration" what they couldn't win in a proper court. If your ideal of justice is "crony capitalism" as practised in Russia, Nigeria, and China, well then I guess this is just your cup of tea. I would rather have democracy and honest and trustworthy courts.

However, we just had an election here in Canada. The new party in power did not take part in these negotiations, and indeed the previous government had very questionable legal or constitutional authority to finalise those matters which took place in the past few months while there was an election campaign under way. The new government has said they are under no obligation to accept the treaty as is, and they are expressing no opinions on it until they've had a chance to have a good look at it. If the treaty had been negotiated openly, it would be on much firmer legal and constitutional ground.

I could go on to point out loads of other holes in the story, but the above provides a few good examples. Things are not all wonderful and clear, and the treaty could yet fall or countries could drop out, or it could be subject to major re-negotiation once people have had a chance to actually look at it.

The entire process was badly and corruptly managed, with certain private parties with financial interests in the outcome being provided access and input, while others were denied the same. All the spin from all the PR companies in the world is not going to change that one bit.

Windows 10 is an antique (and you might be too) says Google man

thames

What people wanted, and didn't want

Well, being sort of like Windows XP was what a lot of people wanted. What most users are complaining about isn't the UI, it's the intrusive data collection and the monitoring and reporting of everything you do back to Microsoft, plus having upgrades rammed down their throats. That bit isn't like XP.

WoW! Want to beat Microsoft's Windows security defenses? Poke some 32-bit software

thames

Re: I call foul. You require two rarely used-together malware-magnets: Windows and Adobe xxx

Well ironically, the Adobe Flash plug-in (along with the other usual suspects) is one of the things that had been holding back browsers from going 64 bit on Windows. The plug-in authors couldn't be bothered to get their crap to work properly in 64 bit, so users stuck with 32 bit.

Third party vendor legacy lock-in is the reason why Windows was so far behind everyone else in changing to a 64 bit desktop. I switched to 64 bit with Linux not long after having the hardware that could support it and I had zero problems doing so. That was so long ago I can't even remember using 32 bit Linux on a desktop any more.

Embedded or mobile hardware is pretty much the only place you'll find 32 bit CPUs these days.

Raspberry Pi grows the pie with new deal allowing custom recipes

thames

Someone did a Mini-ITX adaptor board for the Raspberry Pi a couple of years ago. You bolt a Raspberry Pi Model B onto the adaptor board, which also has a few additional peripherals integrated into it. There's also room to attach a 2.5" HDD.

Since the Raspberry Pi is a fraction of the size of a Mini-ITX board, the Pi only takes up a corner of the Mini-ITX board.

I've got a Mini-ITX, but to be honest, when you look at the cost of Mini-ITX versus a Raspberry Pi, I have to wonder how much longer the Mini-ITX (or Nano-ITX) will be around.

thames

A bigger board would be more expensive because even empty board space costs money. The designer put a lot of effort into minimising cost. Re-arranging the entire board would probably be a lot more work than the sort of customisation they are talking about.

RoboVM: Open source? Sorry, it's not working for us

thames

They Weren't Really an Open-Source Company Though, Were They?

The headline should more properly be - "Software company takes code proprietary when it sells itself to another proprietary software company".

This shouldn't be too much of a surprise. Xamarin's business is based around selling proprietary mobile phone development software and services at eye-watering prices. They're not going to give away their new acquisition if they can also sell it at similarly eye-watering prices.

From what I can gather, RoboVM themselves were trying to follow a similar path to Xamarin's, that is, being an "open-core" company rather than a true "open source" company like Red Hat. You give away some part of the product, but then sell proprietary add-ons. It's a variation on the "give away the razor, sell the blades" strategy. They didn't have much success with this because they didn't attract enough customers for their proprietary product line. I believe that Xamarin is going to dump RoboVM's proprietary add-ons and just use the compiler.

As for the formerly open-source compiler itself, I believe it just compiles Java to native code. It's hardly the first or only Java compiler to do this.

I suspect that RoboVM found that they were getting into a very crowded market and developers were demanding a lot more for their money when it came to proprietary development platforms (even with open-core) than RoboVM had the time or money to deliver. It's a much more mature market now than when Xamarin got into the game. Xamarin's business today seems to revolve more and more around "cloud", testing, and other services, and less on just an IDE and compiler. RoboVM's Java compiler will probably slot in as just another supported language in that proprietary service line up.

Most mobile developers however aren't interested in third party proprietary development systems. They'll just continue to use either the development system provided by the platform owner, or a free, open source system.

Verisign warns new dot-word domains could make internet unstable

thames

Re: theregister.science???

I never knew it existed. If I hadn't seen it in the story (and to be honest, I didn't pay any attention to it until I read your comment) I would have assumed that it was a fake spam or malware site squatting on The Register's name. However, it just redirects back to the regular The Register site.

But that of course is the whole business model of the "dot word" domain name industry. It doesn't expand the name space to any significant degree, because it's the bit to the left of the dot that matters. It just means that every well known web site now has to buy up umpteen different versions of their brand name in order to protect it.

It's an extortion racket. "Nice brand name you've got there. Be a pity if anything were to happen to it."

Why hasn't anyone taken ICANN to court and gotten a restraining order against them with respect to their brand names instead of just paying blackmail? Maybe with the explosion of domain names, victims will think it worthwhile going that route.

Ubuntu 15.10: Wily Werewolf – not too hairy, not too scary

thames

@Steven Raith - "as I have an AMD A8"

I've got one as well! AMD A8-5600K APU with Radeon(tm) HD Graphics × 4 (that's cut and paste from the "Details" GUI app). It's a great CPU plus GPU and very good value for money. I just use the default non-proprietary drivers and I've never had any problems with it. I highly recommend it.

thames

Re: Not an upgrade

@Khaptain - Well, that's the nature of timed release. The non-LTS release goes out every six months. If there's nothing radically new that's in shape for release, the release goes out without it.

You'll get the newest version of whatever applications are available, and that's what most people will care about anyway. It's a new kernel, so you'll also get new hardware support.

In terms of applications, they now have:

- Ubuntu Make, which makes installing developer tools and frameworks easier.

- Steam Controller support, if you happen to be the type who likes games.

The updated language tools include Python 3.5, and a newer GCC.

For some people, the above would be a very worthwhile upgrade.

thames

Yes, with longer support for the non-LTS they were trying to support too many versions simultaneously. If you just want to use Ubuntu without upgrading all the time, just use the LTS. You will have up to five years of support. If you upgrade from LTS to LTS, your desktop OS and applications won't be more than two years old at most.

If you go on the non-LTS schedule, then you upgrade every six months. This track is for people who want the very latest stuff ASAP, but still want a desktop that doesn't fall down the stairs too often.

I think it's a good compromise between something like Arch, where every day is upgrade day, and Red Hat / CentOS, who upgrade at a rate that seems like once a century.

thames

Re: Not an upgrade

Well, "minor updates, bug fixes, speed improvements and application updates ", that's pretty much the definition of a typical upgrade. I use Ubuntu every day, and I really don't want them to change much. It lets me get my work done quickly, easily, and reliably. I don't want them fiddling around with the UI for no good reason. I think the improvements that Unity has over Gnome 2 (better workspace and window handling, better launcher, better keyboard short cuts, etc.) made that change worth while for me, and until someone comes up with some genuinely better idea, that's what I would want to stick with. I hope that when they eventually switch to Mir, I don't notice the change.

Ubuntu follows a timed release schedule. Twice a year a new release comes out, and every year and a half or two years an LTS (long term support) release comes out. If a feature is ready, it goes in the release; if it's not ready, it has another chance in six months. There's no push to have some feature put in "ready or not" just to make the "new features" list longer in order to hype the version and drive sales (since the software is free anyway).

I'd say Ubuntu have pretty much nailed the right balance of features for the desktop for now. Most new stuff at present seems to be going into the server versions (server, cloud, container, etc.) and mobile (Ubuntu Phone).

If you want to try something "edgy" and experimental, there's other distros out there that do that sort of thing. Most people though are happy to just get things done and that's the market Ubuntu desktop serves.

Canonical rolls out Ubuntu container management for suits

thames

Re: Mir/Xmir

Mir is for mobile, which is where you can find it now. It will eventually replace X on desktop Ubuntu, but not yet.

XMir lets software which uses X directly run under Mir. There's not a lot of that, since most goes through GTK or Qt, which will have Mir support compiled in directly.

The reason for Mir versus Wayland versus X is that Canonical wants to focus on producing something that works well on mobile but can be extended to the desktop, while the Wayland backers are mainly interested in something that works well on the desktop but may be extended to mobile. In other words, they are starting on the problem space from opposite ends with different priorities. Android uses something called SurfaceFlinger, but I don't think it can do complex multi-window, multi-desktop stuff for use on desktops, and Google doesn't seem interested in following it up.

Wayland is backed mainly by Red Hat and Intel, the first of which tends to think that time spans of a decade constitutes "rapid innovation", while the second isn't interested in anything that doesn't sell more Intel chips. Canonical thinks there is only a narrow window of opportunity for GNU/Linux (as opposed to Android/Linux) to get on mobile, and aren't content to wait for anyone else so they took their own ball and are running with it.

Microsoft promises Clang for Windows in November Visual C++ update

thames

Re: Standards? From Micros~1?!?

Having recently written a set of cross-platform (Linux, BSD, MS Windows) C libraries, I can say that working with MS VC was a major pain in the arse due to their incomplete and "unique" implementation of the standards. Going from GCC (for Linux) to Clang/LLVM (for BSD) was easy and seamless. Sometimes one compiler would warn about potential bugs that the other would miss (this goes both ways), but if the bug was fixed, both compilers were happy.

Going to MS VC though required lots of #ifdefs to cover the non-standard MS way of doing things, or the missing (not implemented) C library features.

These sorts of problems make porting major open source projects to MS Windows a major headache and requires a lot of work just to deal with the compiler differences. Patching Clang onto the front of Microsoft's back end may solve some of these problems.

It would be interesting to know how Microsoft intends to deal with C library differences. Is the resulting code going to use the Microsoft C library, or will it use the Clang/LLVM C library? And what happens when you link a program compiled with Clang/LLVM with a closed-source binary compiled with MS VC? Will it work? Will there be library clashes?

I won't be at all surprised if somewhere down the road Microsoft bins their own compiler and transitions over to Clang/LLVM.

By the way, despite the waffly language of the article, Apple didn't create Clang/LLVM any more than they created Webkit. In both cases (and many others) they started using an already existing open source project.

Temperature of Hell drops a few degrees – Microsoft emits SSH-for-Windows source code

thames

Ugh!

I just had a look at it. I picked a file at random - auth-passwd.c and looked at the Microsoft version versus the equivalent from the portable OpenSSH project. The Microsoft version is nearly twice as long (378 lines versus 216), with the addition of huge #ifdef hacks, converting back and forth between UTF8 and UTF16, and all sorts of other horrors.

If I look at the altered source in all the files as a whole, there are 363 #ifdef or #ifndef sections related to what has been hacked in for Windows support. Other platforms (e.g. Linux, Apple, AIX) manage with just a couple of special cases.

I'm not sure the OpenSSH developers would want that sort of thing hacked into their upstream source base. If it was me, I would certainly turn it down until the MS developers figure out how to separate that sort of stuff out in a minimally intrusive way.

I'm really hoping they can get this working, as getting the server daemon working would be really handy for me when doing automated testing of software in VMs. So far though, the stuff that Microsoft has been working on looks like it needs a lot of work to meet the quality standards of what I would expect to be required to be accepted into the mainline branch of a major open source project.

Ireland moves to scrap 1 and 2 cent coins

thames

Re: Works in Canada

People in Canada were glad to get rid of pennies. A lot of people were emptying their pockets of pennies at home because they were too much bother to carry around, so the pennies were accumulating in drawers instead of circulating. I would not want to see pennies come back.

The coins that seem to get used the most when paying for things are loonies ($1), toonies ($2), and quarters. Nickels and dimes are what you get back in change and have to make an effort to get rid of when they accumulate too much.

Personally, I would have been fine with dropping an entire decimal place. Nickels (5 cents) will likely go the way of the penny eventually. The only problem with that would be what to do about quarters (25 cents); perhaps simply declare them to be worth 20 cents?

Attacker slips malware past Ubuntu Phone checks

thames

Yay!

Someone wrote a trojan for Ubuntu Phone! This must mean it's a big time major OS like Android and Apple! Hurray! When can I buy an Ubuntu phone here in Canada?

Inside Mandiant's biggest forensics breach battle: Is this Anthem?

thames

Re: Amazing!

It makes you wonder what else they got wrong in the article. I'm going to take a guess that the upgrade was to Python version 3.4, seeing as 3.5 only just came out. At least "3.4" has got a "4" in it somewhere.

Come on El Reg, you're an IT news site. At least make an effort to learn what the current major versions are for the top programming languages used in the IT industry!

Chinese dragon Alibaba ramps up cloud war with second US data center

thames

"Concerns"

"However, the US has a somewhat problematic relationship with Chinese companies – having banned Huawei from bidding for US government contracts because of concerns over spying."

Yes, I can see that the US government would be concerned if Chinese companies wouldn't help them to spy on people.

Google and pals launch Accelerated Mobile Pages project

thames

Some Actual Numbers

Firefox has some very good web developer tools. Here's a brief summary of what happens when the article web page is loaded. All sizes are in uncompressed form, with an empty cache.

Totals:

194 requests

2,974.75 KB

22.43s

Of that:

HTML: 16 requests, 312.91 KB, 22.41s

CSS: 2 requests, 72.37 KB, 0.35s

JS: 44 requests, 1,564.87 KB, 14.48s

Images: 124 requests, 1,024.60 KB, 20.45s

Flash (I don't have Flash installed): 4 requests, 0 KB, 2.75s

Other: 4 requests, 0 KB, 0.01s

Of the HTML, here are the top domains in terms of size along with transfer time:

Twitter 50.16KB 0.117s

Facebook 62.54 KB 0.343s

theregister 67.34 KB 0.208s

For the Javascript, 181.13 KB of that came from theregister.co.uk

For the images, the slowest were from the following domains:

cs.meltdsp.com 0.34KB 5.110s

pixel.eversttech.net 0KB 5.100s

cm.dpclk.com 0KB 10.096s

A lot of the images seem to be very small tracking pixels. The same is true for the Javascript: much of it is very tiny (likely cookie-setting scripts). I currently have 50 cookies set just from loading the web page and logging into the forum. Only a couple of those belong to El Reg. Making visible page images smaller or active page Javascript more efficient isn't going to change anything that matters here, because they're not the slow or large parts.

Here's the thing that nobody seems to want to admit. Loading a web page from The Register isn't the slow part. That is very fast. It's also not very big. The problem is the ad networks. There are multiple ad bids, ads, tracking cookies and pixels, and other crap being loaded, often taking a very long time to do so (e.g. 5 to 10 seconds each in several cases).

Fiddling with The Register's web page isn't going to do anything significant. For Christ's sake, Twitter and Facebook load as much HTML to do their tracking buttons as El Reg does to display the actual article! And there's roughly 3 dozen other parties all loading their crap, very slowly, into the page.

Fiddling with the size of the images won't help either. Many of them seem to be used as tracking elements by the ad networks, and even very tiny ones take forever to load if they come from an ad network rather than The Register.

The real problem is that ads are served from third party networks, and those ad networks don't care if The Register is slow, it's not their site after all. Instead, they spaff loads of crap into the page, very, very slowly.

The real solution isn't going to be fiddling with the margins. It's going to require restructuring the ad business in order to put the content publishers in control so they can optimise the entire process, just like they do in physical print publishing. I'm not sure how to do that, but I don't see any other way.

Microsoft, the VW family sedan of IT, wants to be tech's new Rolls-Royce

thames

May they get their wish

"Microsoft, the VW family sedan of IT, wants to be tech's new Rolls-Royce"

You mean they want to go bankrupt, be bailed out by the government, and then have the original product line flogged off to another company? Yes, that sounds like an appropriate fate for Microsoft.

But Microsoft as VW, let me see, what has VW been in the news for lately? Ah, yes, caught cheating their customers and selling products whose performance didn't match up with what was claimed for them. OK, yes, I can certainly see the analogy there.

As an aside, I saw a new model Rolls Royce in front of my grocery store a while ago. I was not impressed. It was definitely not up to the standard of the older ones.

Worker drones don't need PCs says Microsoft, give 'em phones instead

thames

Re: What about a keyboard and mouse?

@DougS - "though they would have to bring back the 'fat binary' capability of OS X to build applications in both x86 and ARM"

Why? You'll just download everything from the Apple app store anyway. They just have to give you the correct version for whatever architecture you're running and they can do that when they handshake with your hardware. It's not like they're distributing everything on CDs any more.

Most of the major Linux distros support multiple architectures without bothering with "fat binaries" (I've never even heard of such a thing for Linux). When you use whatever shiny GUI "app store" client they use (e.g. "Software Centre" for Ubuntu), you just click on whatever program you want without worrying what your chip architecture is. The same goes for the command line on servers. The computer knows what it is, and it knows which version of the repo to look in.

If Linux distros have been able to do this for a couple of decades, I imagine that it shouldn't be beyond the ability of Apple to copy it (excuse me, "innovate" it).

Ubuntu 15.10: More kitten than beast – but beware the claws

thames

Re: How do you know there's scrollable content?

How about (with 14.04) you look at the right hand side of the window to see if the orangy-brown indicator bar is present, how long it is, and where it is in the window - just like with a traditional scrollbar. For Christ's sake, it's right there in his own screenshot on the second page (right hand window)! The only thing that appears and disappears on hover is the "handle" (as he calls it).

In 14.04 the visual indicator works in exactly the same way as visual indication of "traditional" scrollbars does. If you can't use the 14.04 style scroll indicator to tell if something is scrollable, then you can't use "traditional" scrollbars to do that either because they work the same way.

I'll take a guess that the author's actual experience with the mainstream version of Ubuntu was limited to taking the screen shots.

Now you can be tracked online by your email addy. Thanks, Google!

thames

Why would you do this?

"So if you buy a bike and give the store your email address"

Why would you give the store your normal e-mail address? If you're going to give them an e-mail address at all for some reason, why wouldn't you set up a special account that's only used for sign ups?

"Then, as you surf the web with your Google account cookie in hand"

Why would you log into your Google account all day if you're not actually using it at that moment? I use a separate browser for checking web mail than I use for normal browsing. That way I can leave the e-mail account open as long as I want while still closing the general use web browser whenever I want without having to check what I have open in various tabs.

Oh, and when you're just browsing and reading, turn cookies off. Turn them on to log into a forum to comment, post your comment, and then log out, turn cookies back off and erase all the cookies immediately. This is easy to do with Firefox.

Trusting random strangers on the web with access to your cookies is like trusting random people on the street with access to the contents of your pockets. On the street you can put a bit of money in one pocket to pay for small purchases while keeping the rest stuffed well down in another. On the web you can do your general web browsing with one browser and view your e-mail accounts in another. Just use some common sense with this sort of stuff.

US eco watchdog's shock warning: Fresh engine pollution cheatware tests coming

thames

Re: Light vehicles only?

@Gene Cash - "So the huge SUVs, Hummers, giant Ford King Cabs, Dodge 1500 diesels, and other "look at the size of my dick" trucks all get a pass?"

Those are "light trucks" in American terminology. Heavy vehicles are the large commercial vehicles which haul freight around. Heavy vehicles are tested for emissions on the road. Light vehicles are tested on a dynamometer. Its relatively easy to tell if a vehicle is on a dynamometer because only one set of wheels is actually turning.

"I've seen month-old Ram diesels pump out smoke like a Chinese refinery. You can't tell me they're in EPA spec."

Or maybe they "passed" in the same way that VW did? I think we've only just seen the tip of the iceberg in this one. I would not be surprised if virtually everyone was doing the same thing.

The EPA wasn't doing on road tests for light vehicles because of limitations in their funding.

NASA rover coders at Intel's Wind River biz axed – sources

thames

Re: Topical

@Yet Another Anonymous coward - "So what do you do if you have a load of highly skilled OS engineers and your very rich customers with massive expensive support contracts are all demanding Linux."

Yes, but are the customers demanding it from Wind River, or are they going to other companies? It could be that Wind River's customer base is drying up.

That's the thing that customers like and vendors hate about Linux, there's very little vendor lock-in.

To be honest, I was very surprised when Intel bought Wind River. If I was a customer at that time, I would have been planning my exit strategy ASAP. Intel is rarely the first choice for CPUs these days in embedded markets, and having your OS supplier bought up by a company desperate to get (back) into the embedded market can only be bad news. It would raise serious questions about Wind River's continuing commitment to the CPUs which make up the bulk of the market, and it would potentially raise problems when working with the chip designers of ARM, MIPS, Power, etc., because of concerns about maintaining confidentiality.

This latest news isn't going to help their credibility with customers.

Given all these factors together, I'm not surprised to see Wind River's future starting to look a little uncertain.

Michigan sues HP after 'botched' $49m upgrade leaves US state in 1960s mainframe hell

thames

@Kubla Cant - "I should have thought that Michigan's requirements are pretty much the same as the other 49 states"

Who are also in a dispute with HP over similar mainframe replacements issues, as mentioned near the bottom of the story.

"Hasn't this problem been solved already?"

Not by HP, it would appear.

thames

Re: Gratuitous: Maybe Carly could ride in on her white horse and get this company better,

"I also have to wonder if the fine folks in the Michigan Agency of Bullocks and Computer Machines weren't at least partially complicit"

It takes epic levels of incompetence on all sides for this sort of Olympic-level screw-up. However, given that there are multiple states all experiencing the same problem upgrading similar systems with HP, I suspect the main culprit is HP.