* Posts by thames

1124 publicly visible posts • joined 4 Sep 2014

Canonical accused of violating GPL with ZFS-in-Ubuntu 16.04 plan

thames

Re: Wasn't this already settled?

The "tainted" bit was just to tell the kernel developers that there was a proprietary closed-source module loaded into the kernel, so that the kernel developers wouldn't waste time working on bug reports for problems they had no source code for. I think the main reason for this originally had to do with the numerous bugs in NVidia's proprietary video driver (which was the number one cause of Windows crashes too), but they made the solution generic to cover other cases as well.

The basic idea is that if you submit a bug report and the "tainted" bit is set, the kernel developers will tell you they aren't going to waste their time on it. If you can reproduce the problem without using proprietary closed source drivers, then they'll accept the report.

The above doesn't affect ordinary users since they are normally getting their software from their distro, not direct from the kernel developers. Your distro is the one to file bug reports with. The kernel developers on the other hand are dealing with the distros and with various hardware vendors, and the latter in particular are often trying to push bugs they created off onto someone else to solve for them.
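For the curious, the kernel exposes those taint flags as an integer bitmask in /proc/sys/kernel/tainted, and bit 0 (the 'P' flag) is the one set when a proprietary module is loaded. A minimal Python sketch to check that bit (the helper name is my own invention):

```python
def proprietary_module_loaded(taint_value):
    """Bit 0 ('P') of the kernel taint bitmask is set when a
    proprietary (non-GPL) kernel module has been loaded."""
    return bool(taint_value & 1)

# On a running Linux box the current value can be read with:
#   taint = int(open("/proc/sys/kernel/tainted").read())
```

Other bits record other reasons for tainting (force-loaded modules and so on), but the proprietary-module bit is the one relevant to the discussion above.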

Building a fanless PC is now realistic. But it still ain't cheap

thames
Happy

First Fanless

The first PC I built didn't have a CPU fan. Of course that's easier when you're using an 8MHz 8088. It had a plastic package and didn't even have a heat sink. CPU cooling only came in when they started cranking up the megahertz on later processors. The power supply of course made plenty of noise on its own, and the floppy drives made quite a racket when they were running. There was no hard drive of course, given the prices in those days.

That PC is long gone, but I've currently got a Mini-ITX with no CPU fan and a fanless brick-style power supply. However, it still isn't silent because the hard drive makes a very audible hum on its own, which the numerous cooling holes in the case let out. I don't use it much these days, as it's too expensive to upgrade into a worthwhile, up-to-date desktop.

I've also built "quiet" conventional PCs. The biggest thing that I've noticed is that cheap power supplies, such as the type which often come with cases, tend to be the noisiest component. Get a good quality PSU to get something quieter. Also, if you aren't using top end, hot running CPUs and graphics cards, you may not need a separate case exhaust fan, as the fan in a good quality PSU that isn't being run to the limit seems to do an adequate job of that. The actual CPU fan seems to not make much noise, at least the AMD ones don't (Intel might be a different story).

My suggestion for an ideal quiet conventional PC for typical use (not high end gaming) would use an AMD APU, as that gives you good graphics without a separate cooling fan for a graphics card. Use a good quality power supply with the largest diameter but slowest running fan you can find without getting into anything too obscure. You might want to use a case fan to be on the safe side, but look around for thermostatically controlled ones that will stay shut down unless they're really needed. Use an SSD, as conventional hard drives make a surprising amount of noise. A solidly built case can also help, as a cheap tinny one will vibrate and bring noise to the outside.

After you've done all this, look for ways of keeping the PC further away from where you sit. The inverse square law suggests that if you double the distance from your ear, you cut the noise energy that reaches you by a factor of four. Perceived noise of course is a bit different, but the basic idea is that the further away something is (without getting ridiculous), the less you hear it. If you can put it under the desk without exposing it to getting kicked or knocked about by the vacuum cleaner or sucking in dust off the floor, that also has the advantage of putting the surface of the desk between the PC and your ear. However, finding a good location is often easier said than done.
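The distance arithmetic above can be sketched in a few lines of Python; this is a back-of-envelope point-source model that ignores reflections, the desk, and everything else in the room:

```python
import math

def level_drop_db(r1, r2):
    """Drop in sound level, in decibels, when moving a point source
    from distance r1 to distance r2 (inverse square law)."""
    return 20 * math.log10(r2 / r1)

# Doubling the distance cuts the intensity by a factor of four,
# which works out to roughly a 6 dB drop in level.
```

As the comment says, perceived loudness doesn't track energy exactly, but 6 dB per doubling of distance is the usual rule of thumb.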

And of course as many people have already suggested, look at the Raspberry Pi running Ubuntu. If you've got an occasional edge case that needs more oomph, well then fire up your conventional PC for that. The Pi is so cheap that cost isn't a barrier to having one of those as well as a conventional PC.

Microsoft finally ties the knot with Xamarin, snaps up mobile app biz

thames

Goodbye Mono

Goodbye Mono, as I can't see Xamarin continuing to support it now that they're part of Microsoft. They will promote their version of DotNet instead.

I can't say that I'm actually too surprised at this development. Now we can watch as Xamarin's/Microsoft's products are increasingly tied into the rest of Microsoft's development tools, cloudy stuff, etc., and see all the platform independence slowly drain away, leaving no real reason to use it in the first place.

Congratulations to the founders for cashing out though.

Good news ... and bad news for Skype-using Apple fans and small biz

thames
FAIL

Muppets

"Unfortunately, there is no OS X app for Skype for Business yet; it's not due out until at least the summer. ... In the meantime, Mac-powered businesses will have to make do with Lync for Mac 2011, which is rather old and not terribly well-liked by OS X users"

What a bunch of muppets. They want to be a major force in the mobile applications market for businesses, but they can't get their act together to coordinate the different parts of their own business. Any even marginally competent management team would make sure that they don't phase out one service before providing the replacement.

"The free voice and chat service now includes ... "

A whole bunch of stuff that 99% of users neither need nor want.

Patch ASAP: Tons of Linux apps can be hijacked by evil DNS servers, man-in-the-middle miscreants

thames
Linux

Updated

The updates for this on my desktop came through this morning, and I updated while reading El Reg and I've had no problems. There's no reason not to update now, so far as I can see.

Ransomware scum infect Tinseltown hospital, demand $3.6m

thames

Re: Whare are the NSA / GCHQ whe you need them?

From what is said in the story, it sounds like the PCs are used to schedule tests and send out results. Air gapping them would have the same effect all the time as the problem they are experiencing now temporarily. Talk about an own goal.

Putin's internet guru says 'nyet' to Windows, 'da' to desktop Linux

thames

Re: Futile gesture

@Philip Clarke - "Do you remember when encryption could not be exported?"

Yes I do. I remember that the US became a backwater for encryption development as companies in the rest of the world were freed from competition with American companies. Today, the overwhelming majority of encryption products come from companies outside of the US. If you have followed the IT news in the past few days, you would have seen articles on this.

thames

Re: Futile gesture

@Philip Clarke - "Not if they are American or British and sanctions are in place"

So, err, why would the Russian government want to hire Americans or Brits? They could just, you know, hire Russians. There's loads of Russian software developers already in Russia. There's a number of Russian Linux distros already in existence, so there's definitely people there with the requisite experience. I'm struggling to see why this would be a problem.

"Not on any installation of Oracle, MySQL, MongoDB, Libreoffice or Openoffice or any Apache foundation project with Sanctions in place"

Oracle DB is probably in the same boat as MS products. As for MySQL, MongoDB, Libreoffice, etc., US "sanctions" would be irrelevant in Russia, and the Russians can decide for themselves how to apply their own counter-sanctions. I'm really struggling to see what the problem would be here.

"With sanctions in place as the FSF is headquartered in the USA, any distro is probably legally termed as being exported from the US if it contains the kernel source or otherwise."

Er, and? I'm pretty sure the Russians could compile their own kernel without phoning up Richard Stallman personally (not that the FSF is exactly involved in the kernel much anyway beyond providing the compiler).

As I said, there are already Linux distros in existence in Russia. It's not exactly rocket science, which by the way is something the Russians do on their own as well. All that is happening here is that the Russian government will be using Linux, and either hiring their own support staff or letting tenders to Russian companies to do the support. They're already supporting an IT infrastructure which handles the government of a country with the population of the UK, France, and the Netherlands combined. Supporting a Linux distro is not going to be a major problem.

I won't be surprised to see this happen. The only thing holding them back up to this point will have been the lack of high-level support for pushing the actual change through the government IT system. If the backing is there from the top, however, it will happen.

I've been through the equivalent of this at the corporate level when one large multinational buys up another one and declares that their new acquisition must comply with the new corporate standards. What, your rat's nest of Lotus 123 spreadsheet macros won't work with MS Excel? Not my problem, they're your spreadsheets, so they're your problem. Deal with it.

I will expect to see the Chinese doing the same before too long, by the way.

SCO slapped in latest round of eternal 'Who owns UNIX?' lawsuit

thames

Re: Back to the Nineties

The current company is actually "SCOG" (SCO Group), which is not the same company as the original SCO. The history went something like this:

* SCO and IBM entered into a joint venture to develop and sell a version of proprietary Unix. This version was supposed to cover the market from the small x86 box sector up to large RISC servers. IBM would contribute the large system expertise, and SCO would contribute the x86 box expertise (including marketing channels). Both parties hoped this would establish their joint brand of Unix as the industry standard.

* Meanwhile, Linux was starting to make inroads into the unix market, particularly at the lower end. Companies involved in selling and supporting it included Red Hat, Suse, and Caldera.

* SCO eventually saw the writing on the wall with regards to Linux, and decided to give up on the joint venture with IBM. They sold their unix business off to Caldera, who planned on converting the SCO Unix customers into Caldera Linux customers.

* When SCO pulled out, IBM exercised an option in the contract which let either party terminate the joint venture if the other party sold its interest. SCOG's current lawsuits against IBM revolve around this event.

* SCO sold themselves to Sun, who wanted them for their remaining software products. Sun of course were later bought by Oracle.

* The dot-bomb hit, tech stocks collapsed, and Caldera couldn't get free money from the stock market any more. Meanwhile Caldera was also not having a lot of luck selling their brand of Linux to their new customer base. Customers were opting to switch to Red Hat instead. They decided they needed a new business strategy. At some point in this Caldera themselves came under new ownership in circumstances which are as hazy and odd as their subsequent behaviour.

* Caldera renamed itself the "SCO Group" (SCOG), brought in new management, and went on a sue-world-plus-dog campaign using financing from certain companies who had an interest in seeing Linux dead.

* SCOG represented themselves as "owning" Unix. Novell, who in fact did (and still do) own Unix, having bought it from AT&T, disabused them of that notion in court. SCOG was just another Unix licensee like IBM, Sun, HP, etc. They also had a contract with Novell to act as the outsourced agent for collecting license royalties from the other licensees, and it was this role which they tried to use to represent themselves as having legal standing to sue others. Novell took them to court over this issue.

SCOG's lawsuits with IBM revolved around several issues.

* One was the termination of the above mentioned joint venture. This is still active.

* Another was over IBM's JFS file system, which, although written and owned by IBM, SCOG claimed IBM could not put into Linux because of "wave hands and shout loudly" reasons. Unfortunately for SCOG, there were two different versions of JFS. The one which went into Linux actually came from OS/2, not AIX. This was the closest which any of SCOG's cases came to actually involving Linux, and it was something which would have had little impact outside of IBM even if SCOG had somehow won.

* IBM counter-sued SCOG over various issues that I can't remember, other than that they were narrow technical matters that would have been open and shut cases that would have been difficult to dodge if they came to trial.

As a footnote, when SCOG declared bankruptcy, they had to list all their creditors. Imagine the amusement we had when one of these creditors was a well known "independent industry analyst" who had been in the lead of trumpeting how solid SCOG's case was, and how Linux was doomed.

thames

Re: Goddam

@ratfox - I think that SCO spent about $100 million on the case. About 80% of that though was money from a certain large software company who wanted to see Linux dead, and most of the rest was from another software/hardware company who also wanted to see Linux dead. The details came out when an outside business consultant who was handling the transactions didn't get what he felt was the full commission he was owed, upon which somehow or other one of the emails outlining the deal got leaked.

We can assume that IBM and Novell also spent a fair bit of change as well.

In addition, SCOG also sued various retailers, manufacturing companies, utilities etc. over what they claimed were "Linux" issues, but were actually based on SCO Unix licensing claims (most of the trade press at the time could be counted on to reprint SCOG's bullshit verbatim). SCOG picked on former SCO Unix customers who had switched to Linux and sued them on alleged past violations of SCO Unix licenses. A number of these companies settled out of court because they couldn't afford to fight the claims. Chrysler fought them, and because the case ended up in a Michigan court instead of the very dodgy Delaware courts, SCOG had their arse handed to them by the judge in less than a week.

SCOG was solvent at the time they went into bankruptcy. It was simply a tactical move because a judge in Utah was about to hand down a ruling which would have flattened SCOG entirely. However, a couple of days before that was due they filed for bankruptcy in Delaware (the state is to dodgy corporate registration in the US what Liberia is to decrepit oil tankers in the shipping industry). Bankruptcy court trumps all others, so that put a freeze on the counter suits while letting SCOG continue to sue world + dog while being granted immunity from counter action. They then proceeded to piss away all their money on lawyers in a series of hopeless cases.

thames

Re: Wasted talent

@Whitter - SCO employed several law firms. One of the lawyers employed for minor tasks was the CEO's brother. I think he was being well paid for storing the case records or something like that. That CEO (Darl McBride) is now long gone from the scene, and I assume the brother is as well.

If you're reading this on your phone, pray you're in Singapore

thames

Re: It's a question of political will

@Anonymous Coward - "I've struggled for years trying to get above 2Mbit in Canada,"

I don't know of any major ISP in my area in Canada that's even selling anything that slow these days. The slowest connection from someone like Bell you can buy these days is 15 Mbps. Meanwhile they're calling me all the time and knocking on my door trying to flog me 1 Gbps fibre. I'm in a smaller city by the way.

If you had 2 Mbps in any recent years in Canada, then either you were off on the end of a semi-rural line or you had other hardware problems, possibly in your on-premises wiring or local loop. A high error rate will give you what is effectively a slow speed. That's the problem with anecdotes, they tend to be from the one person who had a problem and was unhappy about it, rather than what the overwhelming majority experiences.

I'm with a smaller ISP who have an extra cheap 5 Mbps package (along with faster ones). I've picked that one because I don't download movies, and even 5 Mbps is more than fast enough for Youtube, web browsing, Github, email, and all the other stuff I do. And I'm tight with my money as you may guess.

As for 2 Mbps? Never heard of it from anyone.

thames

The US ranks far below Canada, which is bigger in area and has fewer people. If you look at the chart in the linked story which shows speed by carrier, the third fastest is Sasktel, which covers Saskatchewan - pretty much the definition of very few people spread over a huge area. The rest of the major Canadian carriers fall much lower, but are still well above the US.

I suspect that slow US speeds are mainly due to the market being dominated by very few companies whose customers have few good alternatives.

In Canada, the province of Saskatchewan has the lowest prices and the highest speeds despite having the lowest population density of any of the provinces. The fact that Sasktel is government owned while the phone industry in the rest of the country is privately owned probably has something to do with it.

Canonical and Spain's BQ team to put Ubuntu on a tablet

thames

Re: Interesting

The mobile/tablet version will be a Unity GUI running on the Mir display server. This is the reason for Unity - it's intended as a GUI which has features which can adapt to both desktop and touch. That doesn't mean it's exactly the same GUI - but rather there are common design features which are used to create both the desktop and small screen touch versions so that "apps" designed for one will work in both and users familiar with one will have an easy time adapting to the other.

Apple uses totally different GUIs on desktop and phone/tablet. Microsoft tried sticking a phone GUI on a desktop (Windows 8). Ubuntu is taking a third path - a GUI which is a desktop GUI when working as a desktop and a phone/tablet GUI when working as a phone or tablet and automatically switches as required. I suspect that both Apple and Microsoft will follow in this path eventually, and indeed I believe that Microsoft have already decided they will need to do so.

This by the way is the main reason why there was the split in Linux desktops between Unity, KDE, and Gnome. Ubuntu wanted to focus on touch (tablet/phone) first with desktop being an offshoot of that. KDE wanted separate touch and desktop GUIs. Gnome (basically Red Hat) wasn't interested in tablet or mobile, and so didn't want to accommodate it except as an afterthought. Mint with Cinnamon and MATE are also in the "who cares about anything but desktop" camp.

Canonical think that free/open source operating systems of the kind we are familiar with have only a narrow window of opportunity to establish themselves if they are not to be squeezed out by changes in the market. Replacing one monopolist (Microsoft) with another (Google) is no improvement, and the new generation of phone and tablet hardware is increasingly locked down so you can't install any alternatives.

It's probably too late for anyone to unseat Android from market dominance in the phone and tablet field. It's probably as unshiftable there as Microsoft is from business desktops. This is why Ubuntu is putting the priority on "converged devices", which is what they had been banging on about for years before Microsoft woke up and noticed it.

I suspect that even for dedicated desktops, the conventional "Wintel" PC hardware we see today will largely disappear from the market. It will be replaced by very cheap and very small commodity boxes about the size of a Raspberry Pi (with a plastic case of course), and using a similar physical layout (all on one board). Everything including CPU, RAM, and SSD will be soldered onto a very small PC board. Whether it will have an ARM or Intel CPU is an open question. I wouldn't want to bet any money on Microsoft still being a significant factor in the desktop OS market by then either.

I suspect that the days of being able to go into a shop and buy what we see as a "conventional" desktop are numbered. If GNU/Linux (as opposed to Android/Linux) operating systems are not well established in the changed market by then it will likely be very difficult to buy any non-server hardware which will run it.

Little warning: Deleting the wrong files may brick your Linux PC

thames

With Ubuntu (14.04), when I built a new PC I just stuffed the DVD into the drive, and it booted up and installed just like it always did. I didn't touch the firmware. There was no drama or fiddling with anything.

I'm not a fan of UEFI however, for reasons that have to do with security. Matthew Garrett wrote a blog post on it a few years ago when he was working on UEFI support for Linux, and he mentioned that there were more lines of code in UEFI than there were in the Linux kernel, if you exclude drivers (to get an apples to apples comparison). On the other hand, Intel had no plans for security support, and the motherboard makers had no way of coordinating and pushing out security fixes. Garrett found a security hole, and when he contacted Intel to report it and ask what their plans for fixing it were, their reply was "none".

Personally, I would prefer something like Coreboot for most applications. It's small and simple, and so presents less of a security maintenance problem. It's also what a number of popular Chromebooks ship as their firmware, so it obviously works. Put the complicated stuff in the OS, as they already have established mechanisms to handle security fixes for the bugs which are inevitable in any complex system.

Investors furious that Amazon only made $482m last quarter

thames
Holmes

"numbers that, although strong, were short of analyst expectations"

Sales up 22%, profits double, AWS profits triple, but the "investors" (we're not told who) are "furious"? Someone took a gamble on the stock market and got it wrong. My heart bleeds for them.

Word up: BlackEnergy SCADA hackers change tactics

thames
Facepalm

Old wine in new bottles

This sounds like a pretty old fashioned computer virus. MS Word and Excel macro viruses that wiped your hard drive were all the rage back before there was money in viruses and all the miscreants wanted to do was to cause you some grief.

I can remember the days when it was common for MS Word and Excel attachments to be scanned for viruses at the mail server because they were so common. Then they started using the built-in MS Office file encryption and including the password in the email. The IT department would then simply block all MS Office attachments for a while whenever a new virus started making the rounds.

Roll forward to 2016, and the 1990s are back with their classic virus attacks.

I have to wonder though, how is the current version getting around virus scanning? Or perhaps the Ukrainians are too broke to afford anti-virus software? (Although I was under the impression that all software there was pirated anyway).

Microsoft requests ChakraCore support in main Node.js repository

thames
Thumb Down

Blech!

I'm not a Node.js developer, but if I were, this would be very unappealing to me. It brings no immediate real advantages to developers, while potentially only adding to their workload. I imagine that most developers using Node.js would simply test assuming the V8 version, and ignore application bug reports involving ChakraCore, saying they don't support it.

Every time you add yet another interpreter or compiler implementation to a supported list, you add another list of subtle and hard to find bugs which must be fixed.

Microsoft went through this before with their own versions of Python and Ruby based on Dotnet, both of which appear to have died from lack of interest from actual users.

They would be better off simply making sure that Node.js as it is now runs properly on Windows and be satisfied with that.

Advantech authentication forgets the authentication part

thames

Re: You don't hang these things off the Internet?

@Walter Bishop - Yes, but you better be doing it through a fairly decent VPN, because there is no authentication or encryption built into the Modbus/TCP protocol. It will accept and execute any commands sent to it from anyone.

Modbus commands fall into four classes - read the state of one or more bits, set the state of one or more bits, read one or more 16-bit registers, and write one or more 16-bit registers. The Modbus protocol, like most other industrial protocols, is intended to run on very simple, very low end hardware. Modbus (serial) and Modbus/TCP are very popular protocols in industry due to their simplicity, widespread industry support, and the fact that it is an open protocol.
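To give a feel for just how simple the protocol is, here is a sketch in Python of building a Modbus/TCP request for the "read holding registers" function (function code 3). The field layout follows the published Modbus/TCP framing (MBAP header plus PDU); the function name is my own. Note that there is no authentication field anywhere in the frame, which is exactly the point made above:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'read holding registers' (function code 3)
    request frame: MBAP header (transaction id, protocol id = 0,
    remaining byte count, unit id) followed by the PDU (function
    code, starting address, register quantity), all big-endian."""
    pdu = struct.pack(">BHH", 3, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

The whole request is 12 bytes; any device (or attacker) that can reach the TCP port can send one and get an answer, which is why the security has to live upstream.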

Think of Modbus devices as being like a PLC's equivalent to keyboard, mouse, speakers, and monitor. Your keyboard does not authenticate with your PC or encrypt its communications. It expects to be plugged straight into the PC. If you route your keyboard to PC connection through the Internet, you better not expect the keyboard to be providing the security for this. The PLC equivalent is switches, valves, relays, and operator panels.

It's probably just as well that most industrial control system vendors don't try to build much, if any, control protocol security into their products, as getting it right is very difficult and best left up to the experts. As a result, if you want to read a Modbus device from the other side of the world then you need to use a separate third party security system somewhere upstream, as I said in my original post.

The big advantage that Modbus/TCP has over many of the protocols based on more proprietary systems is that it is designed to work with standards and technology from the IT industry which means you can use security hardware, expertise, and services from IT specialists who deal with security as their bread and butter.

thames
Boffin

Doubt it's a big deal for most users.

I've used a related product from them. This is a box used in industrial control systems to convert the RS-232 or RS-485 versions of the Modbus protocol to the TCP/IP version (Modbus/TCP). The web and SSH interface is used to configure the hardware (I used the web interface, so I'm not 100% sure about the SSH).

You then can send and receive Modbus commands via TCP/IP and it will convert them to the RS-232 or RS-485 version. The main use is for interfacing between older hardware designs in end devices such as bar code readers or label printers that may only have RS-232 ports and things like modern PLCs that only have Ethernet and not serial, or where the PLC serial port is an add-on that costs far more money, or for wiring convenience, etc.

You don't hang these things off the Internet. For one thing, if someone can communicate with the module's Ethernet port they can do far worse things than just change the configuration on you, such as send it Modbus commands to control whatever hardware is at the other end of the RS-232 link or simply DDOS the whole thing (it likely wouldn't take much traffic to do that).

If you need security, you need to do that separately somewhere upstream, and in a way which will block DDOS attacks (which would overwhelm all industrial hardware). An attacker would more likely find the PLC to be a more inviting target anyway.

I would expect that 90 - 99% of their customers don't even bother with changing the password from the default anyway. They just don't put it on an exposed network. A lot of them would view a back door as a positive thing, as they wouldn't have to worry about the night shift not having the passwords when doing routine hardware maintenance and repairs.

They are occasionally used when connected to PCs. I've used related hardware from the same company that way. However, when I used them they were on a separate Ethernet card with each industrial machine on its own network which was inaccessible from the outside the machine and had a total of about 3 or 4 nodes on each isolated network.

Industrial control manufacturers used to use networks that used their own proprietary physical media (even proprietary cables) or RS-485, and in some cases still do (e.g. Profibus). However, they have been moving to Ethernet in order to let them use off the shelf hardware, and in some cases, software stacks. They're most commonly used as an internal "bus" inside a machine.

It's possible that some people somewhere are hanging these things directly off the Internet or a major Intranet, but as I said above the configuration interface is the least of your problems in that case.

Devs complain GitHub's become slow to fix bugs, is easily gamed

thames

Re: Nothing Special

The problem is basically the design and configuration of the bug tracking system itself. "Filtering" is a very generic term that I used to cover a lot of UI and process ground.

The bug tracker should be guiding the users to enter useful information while also providing a way for people to enter "me too" responses in a way which is obviously just a "me too". A "me too" response is useful in gauging how many users are affected in order to prioritise fixes, but not if people have to read and summarise everything themselves.

The actual letter is linked in the story, and there are three short bullet points listing what they want addressed.

Issues are often missing crucial information such as what the version is or how to reproduce the problem. The developers want Github to let them add custom fields and mandatory templates to ensure the user fills out all the fields.

For just simple "me too" responses they want a voting system so that users can up vote an existing issue rather than adding another comment with no real information.

They also want individual project contribution guidelines to be made more obvious to people making pull requests.

I suspect that the first two are the most important, since in my own experience getting useful information out of users can be like pulling teeth if they are not familiar with how to make a bug report that can be turned into actionable items.

thames

Nothing Special

I've got an open source project on Github, having moved there from Sourceforge (who were reasonably good once, but are going down the drain). I've had no complaints, but in terms of features and ease of use, it has nothing to recommend it over the other alternatives. The only thing it has going for it is that large numbers of other people are also on Github. If Github goes the way of Sourceforge, I would have no problems with moving to some other place instead.

I read the complaint the devs have, and the problem seems to revolve around Github's bug tracker not being very good at filtering signal from noise for popular projects that get a lot of messages. Most messages on big open projects tend to be spam, trolls, or just people vaguely saying "this sucks" when they have PEBKAC problems. Traditional closed source companies often have humans acting as filters between developers and customers. Projects based on services like Github on the other hand expect the bug tracker software to be sophisticated enough to automate the filtering. Without some sort of automated filtering, the genuine problem reports get drowned by the PEBKAC/spam/troll ones.

I haven't had any issues along those lines, but that's mainly because my software tends to have a much narrower and more specialised audience.

Recall: Bring out yer dead and over-heating Microsoft Surface Pro power cords

thames

Re: Are "electronic" components involved in the failure?

I don't even know what the cable looks like since I've never seen an MS Surface or know anyone who has one. As long as we are speculating wildly though, the most likely point of failure is where the wire comes out of the terminals at either end. In that case the problem could be insufficient strain relief, poor design of the connection in general, or a combination of that with the type of crimp connection, too much solder wicking if the ends of the leads were solder-dipped, or wire that is simply too hard or has too few (too thick) strands.

If you coil the lead tightly, the wire could be breaking at the terminals or pulling loose due to any of the above. If you want to look at things in a more general sense, it is likely that the root cause is someone's cost saving idea ran up against the hard reality of life that wire leads face in the field.

Wire leads are a commodity item, but there are different grades at different prices and the device product designer has to know what choices to make when specifying one over another. The lead manufacturer will simply give you whatever you ask for because well, they're not in the device business and aren't really expected to know what the end user really needs. That's why they get paid peanuts for their effort.

P.S. - In a past life I was involved in the design of manufacturing equipment for various types of leads. There's loads of different ways of designing and manufacturing them using different material, all with different pros and cons depending upon what they will be used for and what the customer (product OEM) is willing to pay for.

AMD accuses Intel of VW-like results fudging

thames

Right, because 32 bits ought to be enough for anyone.

@Asterix the Gaul - "Comparing AMD\INTEL over the last 2 decades shows that AMD have always trailed INTEL, not just on the architecture,"

Right, because 32 bits ought to be enough for anyone. And if you really need 64 bits, well there's always the Intel Itanic, which dominates industry market share. I heard that AMD was planning to respond to Itanic by coming up with some sort of 64 bit extensions to the x86 architecture. I wonder what ever happened to that?

thames

Re: APUs

@RonWheeler - "Non gamers - don't need APUs as they don't need that level of graphics."

Er, no, that's not what an APU is. APUs are intended for non-gamers. That is, they provide enough hardware video acceleration to support composited desktops which all modern versions of MS Windows and Linux use. It lets them do window transitions, play video, animations, and all sorts of other things that the OS desktop, web browsers, and office suites do these days (and if at this point you're shouting "get off my lawn", AMD doesn't design the software, they just provide the chip).

This lets the APU do the most computationally intensive routine stuff in the GPU portion of the chip without requiring as much "oomph" from the CPU side. In other words, they looked at how to balance the overall feature set for the average user within a given transistor budget. For most desktop users, CPUs hit the point of diminishing performance returns long ago, but routine graphics demands have continued to increase.

If you're a gamer who wants the ultimate, then you need a separate CPU and separate graphics card because nobody can pack that many transistors into a single chip, especially if they don't want it to melt down.

The history of computing has been to pack more and more functionality into fewer and fewer chips. Putting a good graphics processor onto the same chip as the CPU is the inevitable and inexorable continuation of that development. AMD saw that coming, and bought one of the two leading GPU makers (ATI). Intel's efforts in that field have been less than stellar. The result is that for everyday normal use, an AMD APU is for most people the best choice from a price/performance perspective.

AMD's next generation APUs let the CPU and GPU share memory in a way which greatly reduces memory bandwidth bottlenecks. This will be of great advantage to people doing CPU/GPU computation work. This will be a very interesting field of development for a lot of specialised applications.

thames
Boffin

It's not just AMD saying there's a problem.

It's interesting that they mentioned PCMark. A few years ago they themselves were found to have been favouring Intel in their benchmark, until Arstechnica did some detailed tests and the PCMark people ended up with egg all over their faces and had to come out with a new benchmark.

In that instance the author was testing a new VIA CPU against Intel and AMD. The interesting thing about that new VIA CPU was that the CPU ID register was writeable, not read only. When they changed the CPU ID from "CentaurHauls" to "AuthenticAMD", the benchmark's performance magically jumped by 10%. When they changed it to "GenuineIntel" it jumped by 47.4%. That's running the same benchmark on the same hardware, but just changing the CPU ID register. Fascinating, isn't it?

It turns out that a lot of companies used Intel's compiler, and that compiler produced code that did different things depending upon the brand of CPU it found.

The FTC took Intel to court, and said in their complaint that

https://www.ftc.gov/sites/default/files/documents/cases/091216intelcmpt.pdf

"Intel ... used deceptive practices to leave the impression that AMD or Via products did not perform as well as they actually did."

and that

"Intel redesigned its compiler and library software in or about 2003 to reduce the performance of competing CPUs. Many of Intel’s design changes to its software had no legitimate technical benefit and were made only to reduce the performance of competing CPUs relative to Intel’s CPUs."

and that:

"Intel failed to disclose material information about the effects of its redesigned compiler on the performance of non-Intel CPUs. Intel expressly or by implication falsely misrepresented that industry benchmarks reflected the performance of its CPUs relative to its competitors’ products."

So, it's not just AMD that has been saying Intel has been up to some very dubious things with benchmarks.

Ubuntu's Amazon 'adware' feature to be made opt in

thames

Re: The money

Scopes are really just Ubuntu's take on copying Apple's Spotlight. What Ubuntu added was the ability for you to install third party plug-ins in a sandbox rather than just hard-wiring in a few sources. I doubt that the "scopes" idea had anything to do with money for Canonical. Most of the scopes they added were not money generators in any way and I doubt the Amazon one had potential for non-trivial (for a company the size of Canonical) amounts of money.

As a further anchor for the sort of numbers we are talking about, I recall reading that the search revenue that Linux Mint was getting from their own search customisation mods (putting a custom advertising header in Firefox) was generating derisory amounts of money. Of course Mint has a much smaller market share than Ubuntu, but in either case we are talking about very little money, unless you happen to be a one man operation.

The idea of including Canonical written scopes was to act as examples in order to get third parties to write them, so Canonical did a few which showed that there was potential for revenue. The idea was that when you searched for something, it would also do things like search Github if you had a Github scope installed.

I think that scopes were actually mainly meant for their phone platform, which is why they added things like an Amazon scope, which is a consumer oriented feature. They were using the desktop distro to try to kick-start the third party scope market for the phone. Scopes were supposed to be able to be extended to do all sorts of weird and useful things beyond just searching for stuff, although I can't remember the details (I didn't care enough to look into it further).

Pretty much anything new that Canonical adds to Ubuntu can be found in OS X, with enough of a change to avoid getting sued by Apple. The third party plug-in aspect of Scopes might be different, but the idea of sending search information off to the Internet isn't. Apple fires your search info off to Microsoft via a deal for Bing search.

By the way, to turn off Scopes, you can either un-install whatever specific Scopes you don't want via Software Centre, or just turn them off globally by going to Settings ==> Security & Privacy ==> Search (tab) and then turn the switch to off (when searching in the Dash, include on line search results). That's for 14.04, which is the current LTS release.

You can also turn off the feature which tracks your most recently used files and applications (which is not part of Scopes), exclude specific things or file types (I haven't used this), clear data, or turn it off altogether. The same applies to diagnostic reports.

I turned the feature off a couple of years ago as I have zero interest in it. I can say though that if we ever want an Internet where all the traffic isn't funnelled through a handful of big companies like Google, Facebook, and Microsoft (if they don't bin Bing as a hopeless cause that is), we'll need something like Scopes to give us a distributed search ability.

Fortinet tries to explain weird SSH 'backdoor' discovered in firewalls

thames

Re: Trust?

@tom dial - "Stipulating that equipment shipped from the US might be subject to interception and modification, the same certainly is true of similar equipment shipped from non-US addresses. (...) On the other hand, interception and modification by a government agency in the receiving country also would be a possibility, one not under the control of either sender or receiver, and that is not one about which either has a choice."

My own government could simply arrest me and grill me in the police station in order to find out whatever they want to know. They're also unlikely to engage in economic espionage which will hurt their own economy. Foreign countries such as the US on the other hand can't simply send the police around, nor do they have any problems with engaging in economic espionage (e.g. against Airbus as was discovered in an EU investigation as long ago as the 1990s).

You could say "well, Russia and China could theoretically do the same". Sure, but that's the whole point, US kit is no more trustworthy than Chinese kit. Back doors in Chinese kit may at present be a theoretical possibility, but we know that US kit is being back-doored on a mass scale. If people just continue to stick their heads in the sand over US kit, then they may as well just set their root passwords to "passw0rd" and be done with it.

The solution is to trust no single outside party. That means making everything as open as possible, which provides fewer corners to hide back doors in.

thames

Re: Trust?

@AC - "But you still need hardware to run it on. If the NSA install a hypervisor in your BIOS or boot ROM, your open source software will happily run in an environment which captures all the traffic but is none the wiser."

People are working on this issue, because it's also a problem for ordinary viruses. One solution being proposed is for "stateless" hardware - there is no firmware shipped with the device, nor any persistent storage for it. Instead the user inserts their own flash (or OTP memory) with software later. This would be in the form of a generic card or module which is not specific to that device. Pull out the storage card, reboot, and you are back to a "clean" state.

Joanna Rutkowska, creator of Qubes OS (a Linux distro) has proposed something like this for a laptop, but it should be applicable to things like firewalls and other network hardware as well. Other people are also looking into the question of how the OS knows whether to trust the hardware.

If you're worried about something being built into the CPU, then using one with multiple suppliers from different parts of the world, such as ARM, or the up and coming open source CPU RISC-V (backed by a number of major IT companies) will mean that there is no single "choke point" which puts any particular government in a position of privilege for all hardware.

thames
Unhappy

Trust?

Let's not forget how Cisco are setting up dead drop addresses to try to stop the NSA intercepting their hardware in transit and installing back doors. Can anyone seriously keep a straight face when they say that any American IT product is safe to use?

I can't think of any realistic solution other than that all security related software and firmware be open source so that hiding back doors isn't so easy, and that customers install it themselves after downloading it from known good sources and verifying the hashes.
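
For what it's worth, the verification step itself is easy to script. A minimal Python sketch (the filename in the usage comment and the idea of a vendor-published hash are hypothetical placeholders):

```python
# Compute the SHA-256 hash of a file for comparison against a
# published value, reading in chunks so large firmware images
# don't need to fit in RAM.
import hashlib

def sha256_of_file(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (path and published hash are hypothetical):
# if sha256_of_file("firmware.img") != published_hash:
#     raise SystemExit("Hash mismatch - do not install this image.")
```

Of course this only moves the trust problem to wherever the published hash came from, which is why it needs to be fetched from a known good source, ideally a signed one.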

There are people who say that "you have to trust someone". However, ask a security professional what they mean by "trust", and they will tell you that a "trusted party" is simply someone who is able to break your security if they are so inclined. I'm not sure I'm ready to "trust" a foreign government, especially one who has a record of hacking into everything in sight.

BlackBerry baffled by Dutch cops' phone encryption cracked brag

thames

It might have something to do with using keys that are too short. There have been notifications that the older 32 bit key ids are no longer secure and that users should be using longer ones.

There are also reports around that something like 95% of Blackberry users use short passwords which can be brute-forced in practical periods of time.

Both of the above are related to the fact that passwords or keys that might have provided reasonable security 10 or 15 years ago are no longer adequate because of the availability of cheap computing power.
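
It's easy to put rough numbers on that. A back-of-the-envelope Python sketch - the guess rate here is an assumed figure for illustration, not a measurement of any real cracking rig:

```python
# Worst-case brute force time is keyspace / guess rate, where the
# keyspace is charset_size ** password_length. The rate of 10 billion
# guesses/second (roughly a small GPU rig against a fast hash) is an
# assumption, purely for illustration.

def crack_time_seconds(charset_size, length, guesses_per_second=10**10):
    return charset_size ** length / guesses_per_second

# 6 characters drawn from the 95 printable ASCII characters:
print(crack_time_seconds(95, 6))        # about 73 seconds
# 12 characters from the same set:
print(crack_time_seconds(95, 12) / (3600 * 24 * 365))  # roughly 1.7 million years
```

The same arithmetic that made a short password look adequate against 1990s hardware makes it worthless today; only the assumed guess rate has changed.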

Of course a third possibility is that they are simply going around the encryption in a poorly implemented third party app. If the app keeps plaintext drafts, display buffers, cached copies, or fragments of deleted text in memory, then the Dutch don't need to actually crack the encryption in order to get at least some of the messages.

There are also reports that people have been removing (de-soldering) memory chips from the Blackberry PC boards, or using the JTAG port to get a copy of the password hash, and then brute forcing it.

The thing which suggests that it is something to do with one of the above, or something similar, is that the Dutch were able to crack some messages but not others.

Cellebrite says their UFED analyser will: "Recover a greater amount of deleted data from unallocated space in the device's flash memory ... Rich set of data: Apps data, passwords, emails, call history, SMS, contacts, calendar, media files, location information etc. ... Decoding of JTAG physical extractions of a rich set of data from Windows Phone 8, BlackBerry 10, Android devices and more ... Recover deleted image files and fragments when only remnants are available"

If you download and read their supported devices list, they "support" a long list of Blackberry phones (and pretty much everyone else as well). I suspect the Dutch used a Cellebrite analyser to get actual data, and then applied some techniques of their own to brute force weak passwords including passwords which were re-used for multiple purposes. If the owner used the same passwords over and over again for different purposes (as many people tend to do), the Dutch only needed to find one app that had a weak point in order to get access to the rest.

Crumbs! Stricken Kiev blames Russian hackers for Xmas eve outages

thames

Re: Boring technical details

Here are the figures for 2014: 45% nuclear, 6% hydro-electric, and the rest fossil fuel - so coal and gas. In other words, about half of electric power generation capacity can be affected by coal or gas shortages.

https://www.eia.gov/beta/international/analysis.cfm?iso=UKR

More than 90% of Ukraine's coal comes from the Donbass, which is where the war is.

https://en.wikipedia.org/wiki/Coal_in_Ukraine

In 2014, 75% of Ukrainian coal production was for power generation. Coal production has plummeted, and Ukraine has had to import coal from Russia and South Africa.

https://www.kyivpost.com/content/ukraine/coal-output-in-ukraine-declines-224-in-2014-376952.html

So, if no coal or gas then the lights go out. Both have to be imported using scarce foreign exchange. Hence, my comment about unfortunate Ukrainians may be facing a cold, dark winter.

thames

The Ukrainian electrical system is pretty creaky to begin with. Combine several decades of being an economic basket case, under-investment, massive ongoing corruption, and now on top of it a civil war/"little green men", and just keeping the lights on at all is surprising. Outages during peak demand season? That should be pretty much a given.

The Russians may well be having a go at them, but so much of Ukraine's infrastructure is so outdated there may not actually be much that a "cyber" attack can actually do on the generating side. Of course they could get into the PC network in the business side, but that shouldn't be able to shut down the generating side. And yes, when the electric power goes out, customers do tend to flood the utility with calls of "the power is out" (like they didn't already know that), so that isn't unusual.

I could go into a lot of boring technical detail, but with generating plants and most of the coal mines on the other side of the front line, gas supplies from Russia for more plants terminated (by mutual disagreement/invective), and right wing militants blowing up the wrong power lines and cutting off parts of Ukraine as well as Crimea, this could be a cold, dark winter for a lot of unfortunate Ukrainians.

Feeling abandoned by Adobe? Check out the video editing suites for penguins

thames
Happy

Keep it Simple

I haven't done any video editing for a couple of years, but the last time I did it was with Avidemux. It is very simple, but as I had not done any video editing before it was actually the only one I could figure out without losing patience. All I wanted to do though was to edit out unwanted bits from vacation videos and to splice together multiple separate clips and to fix up audio which was out of sync with the video. Those it was able to do brilliantly. There are other features such as splash screen intros (or whatever they're called), but you have to do some reading up on how they work (I couldn't be bothered).

If I was to switch to something else, then I think that I might have a look at Pitivi again. I suspect that most people don't really want to do anything other than basic cutting and splicing since they, like me, have neither the time nor the artistic talent to do anything other than make a botch job of anything more complex.

Microsoft releases major PowerShell update after long preview

thames

Re: A shortsighted view

@sabroni - "Funny how the reasoned, sensible post in the middle has a name attached. Getting of sick of people using Anonymous posts as an excuse to behave like dicks."

If these people who post anonymously are so confident of what they claim, why are they afraid to attach their user name to it? One might suspect it's because even they know that what they're saying is complete bollocks and they don't want that history accruing to their user name.

Let's look at the post which started this particular thread. It's a vague broad opinion without any real substance. It's a marketing message in a comment along the lines of "now PowerShell gets your clothes 26% whiter than the leading brand!"

Anonymous comments on El Reg seem to come in three main flavours. The first is when someone is offering genuine inside information on an IT cock-up and they are afraid of repercussions should that comment be traced to them. Those types of anonymous posts are valuable, but very rare. They are however the main reason why the anonymous feature is there.

The second type is where some marketroid posts horse shit about how wonderful his product is. These are the most common anonymous posts. Based on writing style I suspect that most of the ones I've read come from just one salesman from either Microsoft or from a Microsoft sales partner who seems to have a lot of spare time on his hands.

The third type are troll posts. They post utter bullshit because they know that they'll get other posters all in a lather to refute the blatant falsity in the post. Troll posts generally try to be as vague as possible in order to avoid giving respondents anything solid to respond to and easily refute. Something like "x doesn't even come close to y" without any sort of criteria or supporting facts is a classic troll.

I suspect that the first post is the third type. It's a troll.

The best response to marketroid and troll posts is to treat them as such, and not as serious posts.

Surface Pro 4: Will you go the F**K to SLEEP?

thames

Maybe they should put Android on it instead. Perhaps they'd have a real winner then.

Press Backspace 28 times to own unlucky Grub-by Linux boxes

thames

I Don't Think This Feature is Used Much

I've never seen anyone use the password feature in Grub2, normally it just boots the OS directly. If you do a standard install of something like Ubuntu, you don't even see a Grub menu.

This is for when you have multiple operating systems installed, and you want to let the user pick which OS to boot, but you also want to make sure they can't edit the menu to add or remove items (operating systems). However, not many people are doing multi-boot these days. They tend to just run guest OSes in a VM instead.

Of course if you are using this feature and someone is sitting at the keyboard picking which OS to boot, it also means in most cases that they can do pretty much anything they want with the hardware anyway.

Grim-faced cosmonaut in ISS manual docking nail-biter

thames
Joke

Re: How Dare He!

Of course he's not smiling. Being Russian, he probably has a massive hangover from the night before.

Typo in case-sensitive variable name cooked Google's cloud

thames

Re: It's easy to see how someone got mixed up.

"Treating "sessionAffinity" as being the same as "SessionAffinity" would've fixed that."

Actually, it wouldn't, because this is just one example of a misspelling. It could just as well have been "sessionAffiinity". You've only solved a very narrow problem without really solving the problem in general. The only real solution is testing to see if something actually works.

"Remember back in the day when we used prefixes to indicate types of things? Can anyone remind me why that was such a bad idea?"

It was a bad idea because it added a lot of cryptic prefixes for no real good reason. If you changed the type, you had to change the variable or constant names. And what about when you defined your own types? Do you create your own prefixes? And what about when everything became an object? Do you prefix almost every variable with something that means "object"? How was that supposed to be useful? And how would it work with dynamic languages where the base type of the reference can potentially change?

It really wasn't very useful. That's why it never really caught on outside of a fairly narrow niche of developers.
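
A quick Python illustration of why type prefixes are worse than useless in a dynamic language (the variable names here are hypothetical):

```python
# The type a name refers to can change at run time, so a type
# prefix baked into the name quickly becomes a lie.
strCount = "5"             # the "str" prefix is accurate here...
strCount = int(strCount)   # ...and now it isn't - the name promises
                           # a string but delivers an int.

# A plain descriptive name makes no claims the compiler never checks:
count = int("5")
```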

thames

What does "case" mean in languages that don't use a European alphabet? Even defining what a "letter" is can be subject to many complex rules in some alphabets, sometimes with no clear answer. The only rational way of dealing with the issue is to talk in terms of unicode code points, or for OS internals, streams of bytes.

Case insensitivity came from the days when teletypes and punch cards formed the user interfaces, and they typically couldn't deal with case. Case insensitivity wasn't created because it was "better". It was an ugly hack that was done to get around short term hardware issues. Now think about a world in which, believe it or not, not everyone speaks English or has a name that is spelled like "Smith" or "Jones".

Some operating systems and programming languages have huge case insensitivity related legacy issues dating from the punch card and teletype days. Those legacy issues form an interesting historical quirk, but they will eventually disappear as those legacy case insensitive operating systems and languages die out.
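
Python 3's string methods make it easy to demonstrate how messy "case" gets outside ASCII:

```python
# Case mapping is not a reversible one-to-one operation.
# German sharp s uppercases to two letters, and round-tripping loses it:
assert "straße".upper() == "STRASSE"
assert "STRASSE".lower() == "strasse"   # the ß is gone

# Unicode's answer for comparisons is case folding, a separate
# operation from lowercasing, defined for matching rather than display:
assert "STRASSE".casefold() == "straße".casefold()
assert "STRASSE".lower() != "straße".lower()
```

And that's before getting into locale-dependent cases like the Turkish dotted and dotless i, where the "correct" lowercase form of "I" depends on information a case-insensitive filesystem simply doesn't have.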

thames

Just Had a Look

I can't readily find where the bug occurred, as none of the reports I've seen are explicit about that. However, I downloaded the Kubernetes code to see what it's like (that's one of the nice things about open source).

It's all written in Go. I'm not that familiar with Go, but a quick grep through the code for sessionAffinity turns up lots of things like the following:

info.sessionAffinityType = service.Spec.SessionAffinity

There are lots of uses of "sessionAffinity" in the Go, JSON, YAML, and HTML files. It has different case in different places - both "sessionAffinity" and "SessionAffinity". It's easy to see how someone got mixed up.

I'm not a big fan of camel case or really long names. Names should be short, simple, and distinctive, and people should either use all lower case or capitalise the beginnings of all variables and labels, e.g. "SessionAffinity" rather than "sessionAffinity". Having both styles of spelling in the same project (keeping in mind we are talking about more than just Go files) is asking for trouble.

The real screw-up however in my opinion is the fact that this wasn't covered by testing. This may have been due to an oversight in test construction, but it's possible that testing simply isn't complete enough.

A quick line count with "wc" shows 700,771 lines of Go, of which 30,867 is in the test directory. Or to put it another way, that's roughly 22 lines of application code for every line of testing code. To put that in perspective, I have a project that has roughly 30,000 lines of 'C' code that is being tested by more than 1,000,000 lines of Python, for a ratio of 33 lines of test code for each line of application code (in other words, the reverse ratio), and I can't see a reasonable way of reducing that while being confident in the results.

The worst thing to see people say is "the compiler or IDE should catch the errors". No, in any non-trivial project you cannot rely on the compiler to find errors. You can't say "it compiles, ship it!" The compiler may compile it, but to know if it actually works you have to test it.

So the question I have for Google is why wasn't this caught by testing?
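
The kind of test that catches this doesn't need to be elaborate. A hypothetical Python sketch (not Kubernetes' actual code, which is in Go) of the general idea - validate parsed config keys against the known field names instead of silently ignoring strangers:

```python
# Reject any config key that isn't in the known schema, so a
# case-typo fails loudly at parse time instead of being ignored.
# The field names here are illustrative, not the real schema.
KNOWN_FIELDS = {"sessionAffinity", "clusterIP", "ports"}

def validate_spec(spec):
    unknown = set(spec) - KNOWN_FIELDS
    if unknown:
        raise ValueError("unknown field(s): %s" % sorted(unknown))
    return spec

validate_spec({"sessionAffinity": "ClientIP"})       # fine
try:
    validate_spec({"SessionAffinity": "ClientIP"})   # wrong case
except ValueError as e:
    print(e)   # unknown field(s): ['SessionAffinity']
```

A handful of tests feeding deliberately misspelled keys through the parser would have turned this outage into a red build instead.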

NZ unfurls proposed new flag

thames
Linux

I Like the Swirly One

I like the swirly one. Too bad they didn't pick it. I think it would be fabulous to have a flag based on the Debian logo.

Penguin because well, Debian logo flag, and New Zealand has real penguins.

Microsoft offers Linux certification. Do not adjust your set. This is not an error

thames

"I think Torvalds has won, hasn't he?"

Yes he has, and we can tell that because the Microsoft Windows sales account managers seem to have nothing to do these days other than post self-delusional anonymous career path self-justifications on El Reg about how they've still "got it" while sobbing into their beer in a lonely corner of a pub.

Nobody believes the crap they post, but as they click the "post anonymously" box on the El Reg comment form, they can tell themselves that they "stuck it to the man" as their career gets steamrollered beneath the flippers of relentless progress.

thames

Re: When will we see SQL Server on Linux?

I listened to a podcast interview of one of the leading Postgres developers a while ago, and he said that of the users who were switching to Postgres, more were switching from MS SQL Server than from any other database (including Oracle). So evidently, switching databases is something that businesses actually do on a regular basis.

This particular guy however made a living consulting on large complex database applications, so he may not have as good of an insight into what the users of smaller and simpler MS SQL installations were doing.

MS SQL Server started life as the Windows version of Sybase SQL Server, until Microsoft bought that part of the business from Sybase and licensed the source code from them. I know of someone who switched a medium size business from MS to Sybase to get some really big cost savings, and he said he had no issues. That was before the take over by SAP however, so I don't know how things are going today.

In the long run, I suspect that legacy database systems such as MS SQL Server and Oracle are going to be stuffed. There is a proliferation of new specialised databases, almost all of which are open source, and new applications will be designed around those to avoid vendor lock-in.

Apple finally publishes El Capitan Darwin source

thames

@Lee D - Apple is largely making the point that they are pushing their changes back upstream to the original projects rather than maintaining a separate fork with extra proprietary bits. Taking an MIT/BSD licensed project proprietary is something they could do if they wanted to, it's only GPL style licenses which require publishing the changes.

Perhaps you weren't involved in the open source world at the time, but one of the most significant open source projects they used was the GCC compiler system, which covered not only their C and C++ needs but was also the basis for Objective C. NeXT (before their merger with Apple) tried to do a proprietary fork (not publish their changes). However GCC is GPL licensed. It took the FSF saying they would not allow this (they consulted a lawyer, who said that under copyright law the angle that NeXT was trying to use to get around the license would be considered a "subterfuge", and a judge would come down on them like a ton of bricks) to get the source published. Following the NeXT merger however, Apple complied with the license in a very passive-aggressive manner, meeting the letter of the law while being as much of a knob end about it as possible. When LLVM became viable (it's still not quite up to GCC when it comes to speed though), they switched to that for their compiler back-end because the license on that is similar to BSD/MIT, which lets them do things like make Swift closed source unless they felt they could derive some advantage from making it open source. Apple also initially took the passive-aggressive approach to license compliance with KHTML (or "Webkit" as Apple likes to call it).

Apple took a major black eye in the open source community from all that, so at every opportunity their PR department like to spout about what good open source community members they are because they use open source, etc., etc. The "contribute back" is emphasised because their allergic reaction to any sort of license which requires them to contribute back (e.g. GPL) makes people suspicious as to the depth of Apple's commitment.

Of course while Apple was going around scooping up open source software (instead of buying an expensive license from AT&T) and packaging it with their product, Microsoft was going around saying that open source was "cancer" and that company directors would be going to prison if they used it (MS constructed an elaborate legal theory that claimed that using open source was a form of fraud under US securities laws). MS were also quietly channelling shed loads of money to SCO to go around and sue world plus dog.

So while Apple isn't exactly the cuddliest company around so far as open source was concerned, they weren't the worst. And of course the Apple reality distortion sphere has fan boys proclaiming that Apple creates everything they touch.

PHP 7.0 arrives, so go forth and upgrade if you dare

thames

Re: Not backwards compatible can cause a lot of problems

@DrXym - "It should have had 5 years of bug fixes and then frozen"

The LTS decision came about because of a lot of user demand, especially as it is an important part of most major Linux distros and companies like Red Hat have long term support requirements for it due to their own customer commitments. The core developers would rather have stopped support for 2.7 as soon as 3.1 came out. Having said that, there are no new features going into 2.7, just security fixes. All the major Linux distros however are in the process of switching to 3.x, or have switched already. Version 2.7 however isn't going to totally disappear until the last Red Hat customer still running it upgrades to a RHEL version that doesn't have it, some time in the 2020s.

@DrXym - "As for Unicode, from experience the best solution for any language is that all source code is treated as UTF-8."

Python has been around for quite a long time, longer than Java. Going to unicode was always going to be a problem, and the 3.x series is where they finally decided to bite the bullet and do it. The other changes came along for the ride, as they decided if they were going to make any breaking changes, they may as well do them all at once and get them over with. They weren't going to hack these changes onto the side, they made them thorough going and comprehensive to ensure they were done right and also done once and for all.

In 3.x, source is treated as UTF-8. With 2.x you could specify that at the beginning of the file, but it wasn't the default and most 2.x programs don't do it.
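A minimal illustration of the difference (the 2.x declaration is shown in a comment for reference, since a Python 3 file no longer needs it):

```python
# In Python 2, a source file had to begin with a declaration such as
#     # -*- coding: utf-8 -*-
# before non-ASCII string literals were legal. In Python 3, UTF-8 is
# the default source encoding, so this just works, and len() counts
# characters rather than bytes:
greeting = "héllo wörld"
print(len(greeting))
```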

@DrXym - "At runtime the choice is harder since Unicode requires 32-bits to encapsulate every code point but that's horribly inefficient. Java (and QT and Windows API) uses UTF-16 which *mostly* works until you hit some esoteric ancient language and then its guaranteed to break."

In the newer versions of 3.x (3.3 and later), strings containing only Latin-1 characters are stored using one byte per character, and the representation widens to two or four bytes per character only when larger code points are present. This saves a lot of RAM in applications which process string data, such as web apps. Increased RAM consumption due to Unicode was in fact another factor keeping some people from switching to 3.x until that change was made.
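You can see the compact storage at work in CPython by comparing the sizes of strings whose characters need one, two, and four bytes each (a rough sketch; exact byte counts vary by CPython version, so only the ordering is checked):

```python
import sys

# CPython 3.3+ stores strings compactly: one byte per character when
# every code point fits in Latin-1, widening to two or four bytes per
# character only when wider code points appear in the string.
ascii_s = "a" * 100           # Latin-1 range: 1 byte per char
wide_s = "\u20ac" * 100       # euro sign: 2 bytes per char
emoji_s = "\U0001F600" * 100  # outside the BMP: 4 bytes per char

print(sys.getsizeof(ascii_s))
print(sys.getsizeof(wide_s))
print(sys.getsizeof(emoji_s))
```

The ASCII-only string occupies roughly a quarter of the memory of the emoji string of the same character length, which is where the RAM savings in string-heavy web apps come from.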

The variable character length isn't a problem because that is all handled by Python behind the scenes. You don't get access to the raw memory buffers, you do it through Python's built in string methods and syntax. All strings are unicode, and binary data is now a different data type. There is no more mixing the two together.
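The str/bytes split means every crossing between text and binary data is an explicit encode or decode step, for example:

```python
# In Python 3, text (str) and binary data (bytes) are distinct types.
# Converting between them is always an explicit encode/decode step;
# mixing them accidentally raises a TypeError instead of silently
# producing corrupted output.
text = "naïve"
data = text.encode("utf-8")   # str -> bytes, explicitly

print(type(text).__name__)    # str
print(type(data).__name__)    # bytes
print(len(text))              # 5 characters
print(len(data))              # 6 bytes: "ï" is two bytes in UTF-8
```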

Infosec bods rate app languages; find Java 'king', put PHP in bin

thames

Re: PHP et al

Very often what's in the introductory books is the wrong way to do it. The introductory books tend to be written to let students get something working as quickly as possible. Once you've got the basics of PHP down, you need to read some up to date more advanced books to find the correct way to do things.

thames

Their scanning software only works with certain languages. They said a couple of years ago they were working on Python support, but they don't seem to have it working yet. I'm not sure if they're continuing with that, or if they've given up.

thames

Re: Android apps

Their mobile tests are different from their server tests, so they put mobile in a different class altogether.

thames

Look at what they are actually measuring.

I'm surprised that everybody is simply taking this report at face value. It's an advertising paper by a company called "Veracode" who do super-special patented proprietary security analysis that doesn't actually look at the source code.

Let's look at a few examples of problems with this report. Why aren't Python or Perl listed despite those being widely used in web apps? It's because their software either doesn't work at all, or doesn't work very well with those languages. I imagine that Python is probably quite difficult to trace using Veracode's binary scanners.

How do they analyse programs without looking at the source code? They have a bunch of proprietary algorithms that they run over the binaries (byte code or machine code) that look for certain things. However, how well those work will vary greatly by language. For some languages they can't really tell very much. For others, they will spew out large numbers of false positives (rather like compiler warnings that may not actually indicate a problem). Customers love seeing lots of security alerts. It makes them feel like they're getting something for the money they are paying.

Comparing one language to another doesn't really tell you anything other than how many warnings Veracode will spit out. Surprise, surprise, some languages are easier to trace than others, and some languages will produce more false positives than others.

There are also sampling issues, which they readily admit to at the end of their report. The report was based on samples provided by customers, which means there is a degree of self selection going on. If for example a lot of their Java customers submitted everything they did as a routine part of their business process, while their PHP customers only used them after they had a series of serious security breaches, guess which one will show the most flaws (leaving aside any biases caused by the analysis methods themselves)? Now think about what languages the "money is no object, let's tick lots of certification boxes" development crowd use, and think about what languages the "one man developing small sites on a shoe string" development crowd use. The former will see lots of routine scans, while the latter will be motivated to buy Veracode's services only when they've had problems.

A further problem is that some types of programs take longer to scan than others (scans can take hours to a day or more), so some customers with large systems will abort the scan before it is done. Guess which types of languages tend to be used in very large programs (or at least ones that have a lot of code)?

The two methods they use don't agree with each other. For example they say that DAST found SQL injection problems in 6% of programs, while SAST found it in 29%. For insufficient input validation, it was 4% and 37% respectively. In other words, the number of results depends heavily on the algorithms used, and it is quite reasonable to assume that those will work "better" (produce more warnings) with some languages than with others.

I won't comment on Veracode in particular, as I haven't done business with them. However, if you read about user impressions of companies that work in the application security field, the term "snake oil" seems to come up rather a lot.

I'm not a fan of PHP, but quite frankly I think the report is rubbish. Even if you feel their cloud service is worth the money, there is absolutely no reason to believe that it is equally effective and accurate across all languages.