
* Posts by thames

762 posts • joined 4 Sep 2014


Apache OpenOffice, the Schrodinger's app: No one knows if it's dead or alive, no one really wants to look inside

thames
Silver badge

Counting is not so easy.

With respect to download counts, Linux users normally get their copy of LibreOffice from their distro, either as part of the default install or from their standard repos. I believe that most of the major distros ship LibreOffice, not Apache OO. Linux users have little or no reason to get their copy directly from the LibreOffice site. This means that a major part of LibreOffice's user base won't show up in their download counts.

Then there are derivatives and rebranding, such as NeoOffice, which can also make it difficult to get accurate user base figures for either LibreOffice or Apache OO.

60
1

Decoding the Chinese Super Micro super spy-chip super-scandal: What do we know – and who is telling the truth?

thames
Silver badge

Re: My take?

There seems to be a general rule of thumb that when US intelligence agencies leak alarming stories via compliant press contacts, it's usually the case that the US is already doing the same thing itself and is sweating buckets over the thought that someone else might be doing it as well. We saw exactly this in the run-up to the Stuxnet reveal, and again with the backdoors being installed in Cisco networking equipment.

I remember the same sort of vague but alarming stories claiming that foreign powers were infiltrating SCADA systems and could use that access to destroy utility equipment. They even built a lab-type setup of a diesel generator with an attached SCADA system and demonstrated it. Meanwhile the utility industry scratched their heads in puzzlement, because despite the alarm and panic in government, industry couldn't pry any actual details out of them so they could take preventive action, and nobody was seeing it in the wild. And then the Stuxnet story came out and we found out the panic was about how the US (with the assistance of Israel) had infiltrated the SCADA systems controlling Iranian enrichment equipment and was using them to conduct sabotage, and the US were afraid they would be hacked back.

To go back to the mysterious motherboard chips: if this was real, I would expect someone to present actual hacked hardware along with demonstrations of what it did. After all, if the story were real then it's not like the Chinese wouldn't already know everything about it, so what's the point of hiding it?

And Amazon's and especially Apple's denials are pretty strong. If they were obfuscating the issue, then they would just release their usual vague waffle.

I suspect this story is complete bullshit. The use of a security company in Ontario, Canada is also very interesting. At this very moment the US is putting lots of pressure on Canada to try to get them to ban Huawei equipment from important Canadian networks. It would not be surprising if this whole story were an exercise intended to pressure allies into stepping into line behind the US in freezing Chinese tech companies out of western markets, in favour of equipment that has the back doors of "friendly" countries in it.

55
5

Brexit campaigner AggregateIQ challenges UK's first GDPR notice

thames
Silver badge
Boffin

Anonymous Coward said: "Serious question but how are the ICO going to enforce the GDPR against a Canadian company?"

I imagine it would involve the ICO going to a UK court asking that the judgement be enforced, followed by the UK court filing appropriate papers with a Canadian court asking them to enforce the UK decision. AggregateIQ would then appeal to a Canadian court asking that it not be enforced, and then after some back and forth with lawyers, the Canadian court approves the UK request and the ICO gets their judgement approved.

UK law is considered to be close enough to Canadian law (closer than any other country) and the UK courts fair enough that the Canadian courts are not likely to question their judgements too much provided the proper paperwork has been filled out.

The ICO may have to wait in line however. Cambridge Analytica, AggregateIQ, and Facebook are already under investigation for the same or related matters by the ICO's Canadian equivalent, the OPCC (Privacy Commissioner) over violations of PIPEDA, which is Canada's equivalent of GDPR.

The OPCC web site mentioned six months ago that they are in contact with the UK ICO on their related investigation. It appears that the UK and Canada have been cooperating with each other on this matter for some time now.

28
1

GitLab gets it, grabs $100m to become $1bn firm

thames
Silver badge

Or Amazon. One of the major cloud providers likely will, in order to provide a seamless path from development to deployment and so increase sales to their cloud.

They don't need to make GitLab exclusive to their cloud in order to do that. They just need to be able to direct development to ensure that it supports all features of their cloud and that any operating assumptions built into GitLab mirror the ones in their cloud architecture.

1
2

Excuse me, but your website's source code appears to be showing

thames
Silver badge

And the problem is poor Wordpress configs

If you read the actual report it becomes apparent that the problem seems to come mainly from Wordpress or Wordpress-based systems. The author does a lot of analysis, but in the end it comes down to poor Wordpress installs. There seem to be a few other similar systems as well, so it's not solely Wordpress, but the big one is Wordpress. That's not to say that Wordpress is inherently bad, but it is very, very popular, and very widely used by people who don't necessarily know what they are doing.

I suspect that most of the problem comes from hosting providers offering "one click install" options for many of the most common hosted systems from a management panel, while not making those standard options secure by default. They seem to be deploying via Git, and when Wordpress and similar systems detect they have been deployed from Git they disable automatic updates (presumably because they believe the administrator will want to handle that himself under those circumstances). If the hosting provider doesn't keep up with security updates, then that adds to the problems still more.
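The "deployed from Git" detection can be sketched in a few lines: walk up from the install directory and treat the site as hand-managed if any ancestor contains a version-control directory. This is an illustration of the general technique only; the directory names and the function name are assumptions, not WordPress's actual code.

```python
import os
import tempfile
from pathlib import Path

# Common version-control metadata directories (an assumed list).
VCS_DIRS = {".git", ".svn", ".hg", ".bzr"}

def looks_like_vcs_checkout(install_dir: str) -> bool:
    """Return True if install_dir or any ancestor holds a VCS directory."""
    start = Path(install_dir).resolve()
    for parent in [start, *start.parents]:
        if any((parent / d).is_dir() for d in VCS_DIRS):
            return True
    return False

# Demo: a throwaway "site" deployed from Git gets auto-updates disabled.
with tempfile.TemporaryDirectory() as tmp:
    site = os.path.join(tmp, "wp")
    os.makedirs(os.path.join(site, ".git"))
    auto_updates_enabled = not looks_like_vcs_checkout(site)
    print("auto updates:", auto_updates_enabled)   # prints: auto updates: False
```

The point of the heuristic is reasonable in itself (don't clobber an admin's checkout), but it silently shifts the update burden onto whoever did the deployment, which is where the hosting providers come in.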

I suspect that if the hosting providers were to fix their standard offerings, most of this problem would go away so far as new installs go. The major issue would then be whether they could fix what their existing customers have already got deployed.

4
0

Redis has a license to kill: Open-source database maker takes some code proprietary

thames
Silver badge

Re: Wait and see

Redis Labs are going to what is called an "Open Core" model. This is where the main software project is open source (in this case Redis itself), but add-on modules are proprietary (which is in effect what adding the Commons Clause to the Apache licence does). If you don't use the add-on modules then it doesn't really affect you.

A number of other companies do this; Oracle's MySQL is perhaps the best known. There aren't however many well known examples, because there simply aren't that many successful examples of Open Core as a business model. It really only seems to work in cases where the main software system has a very narrow and specialised market where there is direct contact between the software creator and the customer, and there aren't many third parties interested in creating add-ons. Perhaps the most successful "open core" example is Google Android, where the base OS is open source but Google Play Services is proprietary. There aren't a lot of other successful large examples however.

I'm not sure this is really such a big change to what Redis was already doing however. The reason that most companies use an AGPL license rather than a standard GPL license is so they can charge customers a fee for selling them a non-AGPL version (unlike standard GPL, AGPL is less convenient to use on proprietary web services).

In the case of Redis, the base database was and remains BSD. Some of the add-on modules doing things like full text search however are changing from AGPL to Apache + Commons Clause. The latter apparently achieves the same intent (from Redis Labs's perspective) as the former when used in "cloud" operations as opposed to enterprise data centres which were Redis Labs' traditional market for add-on products.

Redis Labs' main problem is that their core customer base of business enterprises is to a large extent moving from their own data centres to cloud operations. Since the cloud vendors are increasingly providing the core software infrastructure as well as the hardware to run it on that removes a lot of their traditional customer base contact, and the ability to charge support fees along with it.

This follows general long term industry trends as lower layers of the stack, from hardware to operating system to databases, increasingly become commodities. Profitability increasingly comes from more specialised software higher up the stack which has not yet become a commodity, or from services which are inherently more difficult to commoditise due to economies of scale.

Older vendors selling legacy software or software/hardware combinations with a great deal of customer lock-in such as Oracle databases or Microsoft Windows or IBM mainframes are another profitable business model. However, all of these examples face eroding revenues as new development goes elsewhere and existing customers gradually drop away through natural attrition.

5
0

SUSE and Microsoft give enterprise Linux an Azure tune-up

thames
Silver badge

Re: At what cost?

From what I can tell it's just something to let the kernel know that it is running in a VM and to use the VM's direct interfaces for storage and networking rather than using emulated interfaces. This is something that Linux versions optimised for VMs have been doing for years.

Generally, when you are running a generic kernel on a VM you lose some I/O capacity if you are talking to it as if it were emulated hardware. Most VM makers offer a way around that so that the I/O systems can talk directly to the VM bypassing the emulation features.

About six months ago Microsoft started offering a version of MS Azure with hardware accelerators for I/O (google "TCP offload engine" for examples). Such things have been available for years in things like NIC interfaces if you run directly on your own server hardware instead of using "cloud" versions.

"Cloud" versions of course require additional support from the VM so that different cloud instances can share the hardware without stepping on each other's toes. The new Suse version has just added modules to use the interfaces in Azure for this.

I'm not sure what is really new in this announcement, since according to Microsoft the previous version of Suse had this, as well as Red Hat, CentOS, and Ubuntu. It might just be that there was a delay in support for this feature in the new version of Suse that came out recently, but now it's there for people looking to upgrade their version of Suse.

8
0

Drink this potion, Linux kernel, and tomorrow you'll wake up with a WireGuard VPN driver

thames
Silver badge

Re: Why?

@Anonymous Coward said: "I had a ran a Linux VM that was optimised to be a VPN server (...) These days, you'd be lucky to get away with a 4GB USB stick."

Let's see how that compares to the actual size of a popular Linux ISO.

Ubuntu desktop: size is 1,953,349,632 bytes. That includes a full GUI (Unity), office suite (LibreOffice), web browser (Firefox), email client (Thunderbird) and all sorts of media players and other odds and ends. If you just want the desktop with web browser, pick minimal install and the other stuff will get left out.

Ubuntu server: size is 845,152,256 bytes. That is a bit less than half the size of the desktop ISO.

For Debian I just have the server net install, but the installed size won't be much different from the above.

FreeBSD (not Linux, but we'll list it anyway) is 2,930,397,184 bytes.

OpenSuse is 3,917,479,936 bytes, although you don't have to install everything it includes.

Those are the ones I happen to have sitting around. The Ubuntu desktop will boot directly from the ISO so you can try it out without installing it.

If you want to know the install size, then as an example a Debian 9 64 bit server has a Virtual Box VDi size of 1.7 GB. However, that includes a C compiler and a bunch of other extraneous stuff that I use for testing software, so you can probably cut that down somewhat.
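Converting the raw byte counts quoted above into GiB makes the comparison easier to eyeball:

```python
# ISO sizes quoted above, in bytes.
sizes = {
    "Ubuntu desktop": 1_953_349_632,
    "Ubuntu server": 845_152_256,
    "FreeBSD": 2_930_397_184,
    "OpenSuse": 3_917_479_936,
}

for name, n in sizes.items():
    print(f"{name}: {n / 2**30:.2f} GiB")

# The server ISO is a bit less than half the size of the desktop one.
ratio = sizes["Ubuntu server"] / sizes["Ubuntu desktop"]
print(f"server/desktop ratio: {ratio:.2f}")   # prints: server/desktop ratio: 0.43
```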

7
3

Spectre/Meltdown fixes in HPC: Want the bad news or the bad news? It's slower, say boffins

thames
Silver badge

Meltdown seems to be Intel specific, while Spectre is a more general problem relating to speculative execution.

For ARM, it will depend on the specific model. Some ARM chips are affected and some are not. There is a list somewhere of what models are affected.

For example, the Raspberry Pi Foundation have said that all models of Raspberry Pi are immune to both Meltdown and Spectre due to the model of CPU they use.

Generally, some of the top end ARM models are affected by Spectre, while the rest are not. What this has generally meant in practice is that the most expensive premium model mobile phones have a potential problem while the medium to low priced Android phones are largely immune. The bulk of embedded applications using ARM are also probably immune.

2
0

Hooray: Google App Engine finally ready for Python 3 (and PHP 7.2)

thames
Silver badge

Re: about bl**dy time

If I recall correctly, AppEngine was stuck on using an old version of Python because they had taken a snapshot of Python and heavily hacked the source code to introduce language run-time level checks and limits on what a user program could do in order to add sandbox style isolation without using VMs.

It sounded clever, but it had several major drawbacks. The obvious one was that they were stuck on one increasingly old and obsolete version of Python while the rest of the world moved on. Another was that they could only support a subset of the Python standard library and no user C modules.

The Java version they supported also had comparable limits, but I don't know the details of that.

gVisor replaces the need for a custom version of Python, so they can now support up-to-date versions without having to hack the language run-time.

Most major Python projects such as Django, SciPy, and others have either already dropped or are in the process of phasing out support for Python 2.x, and many newer libraries have never bothered supporting 2.x in the first place.

While you may not want to try running something like Django on App Engine (I don't know if it is even possible), App Engine's lack of support for modern versions of Python was leaving it increasingly isolated in terms of third party library support. Since Google was running their own version of Python, the end of life date for mainstream Python wouldn't have affected them. However, lack of third party libraries, lack of new educational material, and just generally being out of the development mainstream would. This I think was the motivation for replacing their original solution with gVisor.

And the overall motivation for the update I think is that Google are now putting more emphasis on App Engine due to the current trendiness of "serverless", the latter of which was the basic idea behind App Engine to begin with.

2
0

On Android, US antitrust can go where nervous EU fears to tread

thames
Silver badge
FAIL

Two issues

The article is full of holes as it confuses two different issues.

One problem is that Google tells phone vendors that if they want to sell genuine Android phones they can't also sell any that use a forked Android. This is quite effective in preventing companies such as Samsung from coming out with their own Android fork which is mostly compatible with actual Android and forces them to make something completely different and incompatible such as Tizen. That is a much bigger gap to jump to create an attractive product.

The other issue is the services market, which includes mail, location, ads, etc. Google has theirs which only works with Android. Apple has theirs which only works with their phones. And there are several Chinese companies who are able to offer the same for their local market, but only with a forked Android. The fact that Google is only a minor player in China and China is the world's biggest mobile phone market probably goes a long way to explain why non-Google Android succeeds there.

The example the author is looking for is Russia. Google lost an anti-competition case in Russia and can no longer demand exclusivity of its applications there. This includes search, where Google has to provide a window which lets the user select which search engine to use. The case originated in a complaint from Yandex.

The story would have been much better if the author had discussed the Russian case and how that precedent might be applied to the EU.

11
1

No big deal... Kremlin hackers 'jumped air-gapped networks' to pwn US power utilities

thames
Silver badge

The usual pattern for this sort of thing is that it starts when the US do this to someone else. The US counter-intelligence department then find out what their colleagues on the floor above have been up to and crap their pants over the thought that someone might do the same to them. They then stage a series of leaks into the press that someone else has been doing it to them, in an effort to whip up enough publicity to spur the industry into taking some preventive measures.

Prior to the news of what the Americans did to Iran with Stuxnet, there was a long series of "confidential intelligence briefings" to selected newspapers and politicians about how US utilities may be vulnerable to being hacked. A demonstration using a specially set up diesel generator (simulating a power plant) was conducted which was supposed to show how SCADA systems could be infiltrated.

The utility industry just shrugged it off, as they weren't seeing any of this in practice. And then Stuxnet hit the news and we saw that it had done exactly the sort of SCADA infiltration that the Americans had claimed was the threat to US utilities.

And then there was the big campaign using the same PR techniques over how Chinese IT gear might have back doors in it. Nobody could find these back doors, but we were assured they might be there and it was a huge national security risk. And then it turned out that the American NSA was putting back doors in Cisco kit.

I could go on with more examples, but the pattern follows a well-worn groove by now. The US hacks someone else, they crap themselves over the thought that someone might do the same to them, they start a propaganda campaign via the channel of suitably compliant major news media to whom they give an "exclusive" in return for not asking the wrong sort of questions, and industry is left to wonder "WTF?" because the story is full of holes due to so many details being held back because of course the US doesn't want the target they had actually hacked to find out what had been done.

To address the story in particular, very likely the "air gapped" systems aren't actually air gapped. The utility has an "air gap" policy, but an exception was made for remote vendor support. The vendor isn't air gapped because they're too small to have a dedicated IT security team who could plan such a thing. And true "air gapping" probably isn't practical to begin with because the vendors are software developers who need to get software updates from Microsoft and their PCs need to connect to the Internet on a regular basis to validate software licenses, etc., etc.

And if software updates from the vendors to the utilities aren't conducted on a timely basis, ordinary bugs can crash the electric network just as surely as malicious action could.

Genuine security is probably possible, but it would require a complete overhaul of the industry and the relationships with vendors and the software development environments they use, and that simply isn't going to happen any time soon.

21
0

Oldest swinger in town, Slackware, notches up a quarter of a century

thames
Silver badge

Well Ahead of Red Hat

El Reg said: "2017 saw the distribution drop to 31 in page hit ranking, according to Linux watcher DistroWatch.com, from position 7 in 2002. "

Well, they're 28 on the list right now, and way ahead of Red Hat who are at 45, sandwiched in between Kubuntu and Gentoo. Number 1 on the list is Manjaro, who are so far ahead of the rest of the pack that nobody is even close to them.

Years ago Distrowatch had to write a complaint about "abuse" (their word) of the counters by fanbois who were gaming the system to try to push their favourite distro higher on the list. Or as Distrowatch themselves put it, "a continuous abuse of the counters by a handful of undisciplined individuals who had confused DistroWatch with a poll station".

Distrowatch rankings have nothing to do with how "popular" any specific distro is. They're just a count of how often someone clicks on the page that describes that distro. The average Linux user never has any reason to ever visit Distrowatch, which means that the ranking is simply an indicator of what caught the eye of the sort of person who collects Linux distros the same way that some people collect stamps.

24
0

GitHub to Pythonistas: Let us save you from vulnerable code

thames
Silver badge

Re: pickle

@stephanh said: "What do you consider a known vulnerability?"

A known vulnerability is something that is supposed to be secure against attack but isn't. Pickle wouldn't count as a vulnerability, because you are essentially just serializing and unserializing executable object code and data. This is something you do between different parts of your own application, not with data from outside. The docs, as you said, make this clear. If your application un-pickles data from untrusted sources, the mistake is yours since you were explicitly told not to do that.

For untrusted data you would use something like JSON. If there were a bug in the JSON decoder which allowed someone to execute arbitrary code, then that would be a vulnerability.
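A minimal sketch of the distinction (the Payload class is a contrived stand-in for an attacker's pickle stream; it runs a harmless os.getcwd() where an attacker would run something nastier):

```python
import json
import os
import pickle

# Pickling data you created yourself round-trips arbitrary objects fine.
trusted = {"counts": [1, 2, 3], "label": "ok"}
assert pickle.loads(pickle.dumps(trusted)) == trusted

# But a pickle stream can name any callable to be invoked on load. This
# contrived payload calls os.getcwd() during unpickling -- a harmless
# stand-in for the shell command an attacker would choose instead.
class Payload:
    def __reduce__(self):
        return (os.getcwd, ())

evil = pickle.dumps(Payload())
result = pickle.loads(evil)       # executes os.getcwd(), attacker's choice
print(type(result).__name__)      # prints: str -- not a Payload at all

# For data from outside, use a data-only format such as JSON: the worst a
# hostile JSON document can do is fail to parse.
assert json.loads('{"counts": [1, 2, 3], "label": "ok"}') == trusted
```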

Most programming language libraries have something to let you execute OS shell commands. That is potentially dangerous if you were to write your application such that anyone could execute arbitrary shell commands via the web interface. However, that wouldn't be a programming language vulnerability, that would be a vulnerability in your program, since you should not provide a feature that does this.

Something is a vulnerability when it can do something dangerous that wasn't in the documentation.

2
0

Python creator Guido van Rossum sys.exit()s as language overlord

thames
Silver badge

Re: Reinventing a more limited wheel

@rgmiller1974 said: "I'm curious about the example thames posted. Is ... really any slower than ..."

The interpreter doesn't cache the results of f(x), and I doubt it would be feasible to determine if it could do so in all cases. Static analysis couldn't determine that since function "f" could be written in another language (e.g. 'C') for which you might not even have the source code and dynamic analysis would run into similar problems.

The new syntax achieves the same result under the control of the programmer as well as being useful in other applications. Plus, you can see what is going on without having to analyse the behaviour of f.

0
0
thames
Silver badge

@AC said: "Any language that depends on differing amounts of whitespace to alter the program is stupid. "

For those who have moved on since the days of GWBASIC, everybody (other than you apparently) indents their code in a way which is intended to convey meaning about it.

Differing amounts of white space alter the meaning of programs in all programming languages - in the eyes of the programmer for whose benefit those visual cues are present. The fact that in most programming languages indentation level doesn't alter the meaning of the program in the "eyes" of the compiler is a major problem.

The Python compiler reads code the same way that a human would and derives meaning from the indentation level similar to how a human would. That eliminates whole classes of errors which would derive from humans reading it one way and the compiler reading it another. And once the compiler uses the same cues that the programmer does the block tokens become redundant and can be eliminated.
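A two-line sketch of what that means in practice: the dedent alone tells both the interpreter and the reader where the loop body ends, with no closing brace to fall out of sync with the indentation.

```python
def tally(items):
    total = 0
    for n in items:
        total += n      # indented: runs once per item, inside the loop
    total += 100        # dedented: runs once, after the loop has finished
    return total

print(tally([1, 2, 3]))   # prints: 106
```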

31
12
thames
Silver badge

Re: Here's a PEP

Just use #{ and #} where you would like to use brackets and you can put in as many as you want.
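Since # starts a comment in Python, the interpreter ignores them entirely and the braces are purely decorative:

```python
def greet(name):  #{
    return "Hello, " + name
#}

print(greet("world"))   # prints: Hello, world
```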

17
10
thames
Silver badge

Re: Reinventing a more limited wheel

I would be fascinated to hear how you would do the following in one line of idiomatic C using commas.

results = [(x, y, x/y) for x in input_data if (y := f(x)) > 0]

The major objective appears to be avoiding duplicating work unnecessarily when doing multiple things in a single expression. The previous way of doing the above would have been:

results = [(x, f(x), x/f(x)) for x in input_data if f(x) > 0]

I can think of multiple instances in which I could have used this feature in cases similar to the above.
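As a runnable sketch of the difference, here is a hypothetical f (any single-argument function would do) instrumented to count how often it is called; the walrus version does far less work (Python 3.8+):

```python
calls = 0

def f(x):
    """Hypothetical stand-in for an expensive function; counts its calls."""
    global calls
    calls += 1
    return x - 2

input_data = [1, 3, 5]

# Old style: f(x) may be evaluated up to three times per element.
calls = 0
old = [(x, f(x), x / f(x)) for x in input_data if f(x) > 0]
old_calls = calls

# With the 3.8 walrus operator: f(x) is evaluated exactly once per element.
calls = 0
new = [(x, y, x / y) for x in input_data if (y := f(x)) > 0]
new_calls = calls

print(old == new, old_calls, new_calls)   # prints: True 7 3
```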

8
9
thames
Silver badge

Re: Futuristic progression of Programming Languages?

A program written in Python can be a fraction of the number of lines as a program which does the same thing in C.

Time is money, or whatever other means you want to measure the value of time in. You can get a finished program in fewer man-hours. That matters in a lot of fields where being first to market is what counts, or where you are delivering a bespoke solution to a single customer at the lowest cost, or where you have a scientific problem that needs solving without investing a lot of time in writing the software part of the project.

Python isn't the best solution to all possible problems, but it is a very good solution to a lot of problems which are fairly prominent at this time. It also interfaces to C very nicely, which allows it to use lots of popular C libraries that already exist outside of Python itself. These are why it is popular right now.

There is no one size fits all solution to all programming problems. It is in fact considered to be good practice to write bits of your program in C and the rest in Python if that is what makes for a better solution for your problem. There is no necessity to re-write everything in Python in the manner that certain other languages require everything to be re-written in "their" language. The result is that Python has become the language of choice for a lot of fields of endeavour where you can reuse existing industry standard C and Fortran libraries from Python.

Van Rossum's "retirement" isn't a huge shock and won't make much difference. For quite some time other members of the community have been taking the lead in developing new features, and Van Rossum's main role has been to say "no" to adding stuff that was trendy but didn't provide a lot of value. Everything should continue along fine with the BDFL further in the background. Overall, it is probably a good idea to get the post-BDFL era started now while the BDFL is still around.

41
4

I think I'm a clone now: Chinese AMD Epyc-like server chips appear in China. What gives?

thames
Silver badge

Re: Contradictory

They can replace the built-in encryption accelerators and random number generators with their own. The way the US has been putting back doors into systems is to get people who work for them (either openly or clandestinely) on industry standards boards and get subtle weaknesses introduced into the standards. They also bribed American companies to implement these backdoored standards and then certified them as "secure" in order to get them adopted in the market.

These weaknesses make encryption easier to crack. You couldn't prove the standards had a back door unless you knew what the back door was, and independent cryptographers who thought things looked more than a bit fishy were dismissed as tin foil hat wearers. Then it all came out in a set of leaks a few years ago.

How that relates to CPUs is, how do you know that the encryption acceleration or random number generator built into an Intel CPU doesn't have a similar US government backdoor built into it? You don't, which is why Linux kernel devs don't trust the built-in random number generator for use in encryption. They only use it as one of a number of different sources of randomness specifically because of the threat of US back doors.
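That mixing idea can be sketched in a few lines: hash several sources together, so that no single compromised source controls the output. This is a simplified illustration of the principle, not the kernel's actual algorithm.

```python
import hashlib
import os

def mix(*sources: bytes) -> bytes:
    """Hash-mix several entropy sources. If at least one input is
    unpredictable, the output is too, so a backdoored source that is
    merely one input among several cannot force the result."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))   # length-prefix each source
        h.update(s)
    return h.digest()

hw_rng = b"\x00" * 32      # stand-in for a possibly-backdoored hardware RNG
timings = os.urandom(32)   # stand-in for interrupt-timing entropy
out = mix(hw_rng, timings)
print(len(out))            # prints: 32
```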

So, the Chinese can replace the encryption accelerators and random number generators in AMD CPUs with their own. These may possibly have Chinese back doors instead of American back doors, but at least the Chinese know the US government won't be reading all their encrypted messages. That is an assurance that the rest of the world doesn't have.

Oh, and if the Americans have back doors in Intel CPUs, then the Russians and a number of other countries probably have managed to get themselves a copy of the same keys as well, one way or another.

2
0

Boeing embraces Embraer to take off in regional jet market

thames
Silver badge

Yes, Boeing had been talking with both Embraer and Bombardier about some sort of acquisition or merger. Talks with Bombardier fell apart, and Boeing already had a partnership with Embraer to sell and service their military transport jets (same market segment as the Hercules). A straight out purchase of Embraer was not in the cards though because of political opposition to it in Brazil.

There is widespread speculation that Boeing's plan was to get the US government to give them a monopoly on the US market for this size of jet to sweeten the deal enough to bring Embraer back to the table.

The Canadian government then played matchmaker to get Bombardier and Airbus back to the negotiating table and a deal was made (and is now in effect). The UK government got involved as well as large parts of the CSeries are to be made in the UK. PM May held high level meetings in Ottawa about strategy and then did some high level lobbying in Washington to try to get Boeing's tariff plan killed.

There is also an upcoming major arms deal in Canada where Airbus are now in an improved position for their Typhoon due to Bombardier's local industry links. Boeing meanwhile, who once were seen as having the deal in the bag, have been told their bid will have big negative ratings all over it due to being seen as not very friendly to Canadian interests (a criterion for this was actually added to the formal bid process because of these events).

Overall, while Boeing may now have got their partnership with Embraer, they screwed up badly along the way.

5
0

GitLab's move off Azure to Google cloud totally unrelated to Microsoft's GitHub acquisition. Yep

thames
Silver badge

Next Month's El Reg Story

And next month The Register will report that GitLab is being bought by Google. Someone is going to buy them, and the top candidates would be Google and Amazon.

13
0

From here on, Red Hat's new GPLv2 software projects will have GPLv3 cure for license violators

thames
Silver badge

Re: I have a better remedy...

Changing the license to BSD would do absolutely nothing to resolve the questions being addressed here because the BSD license does not even attempt to address the issue of what happens if you were found to be exceeding the terms of the license.

The question of "cure" is with respect to what does someone have to do to get themselves into the clear if they were caught violating copyright law with respect to a published work. GPLv2 as it stands does not address this. BSD also does not address this.

GPLv3 however lays it all out clearly: if you bring yourself into compliance with the license then "forgiveness" (in the legal sense) is automatically instated. Under copyright law, formal "forgiveness" is required in order for a copyright infringement complaint to be considered closed. The measure being adopted by Red Hat tacks that aspect of GPLv3 onto the side of GPLv2 without changing the license itself.

When you "violate copyright" you are violating copyright law. The license is your defence as a user against being sued by the copyright holders. A license that is more explicit in this respect is to the user's advantage. A license which does not address this issue leaves it up to the courts and the lawyers to argue it out.

2
0
thames
Silver badge

There is no change to the actual license. If the original license was GPLv2, then that remains the license. What happens is they add another file to the project which says that in the event of a license violation, the "cure" procedure for copyright violation will be as specified in the new file. Since GPLv2 doesn't address what happens at that point, there is no conflict with that license. Since Red Hat have stated what they would do in that instance, so far as a court is concerned they are as effectively bound by it as they would be if it were a clause in the license itself.

GPLv3 addressed a lot of issues in GPLv2 like this, and is in my opinion overall a better license and what I use in my own open source projects unless I need to conform to the license of an existing project. The GPLv3 drafting process also took in a lot of input from lawyers around the world to correct issues relating to legal systems which are different from that in the US, as well as many other matters.

The main objection that people had against GPLv3 was the provision that manufacturers of locked-down platforms had to provide unlock keys. The main objector back then appeared to be Tivo and other makers of things like home TV video recorders. These days it is cell phone and tablet makers who object to it.

I won't be surprised if eventually they end up with what is effectively a "GPLv2.9" - or a GPLv3 without the anti-lock-down provisions.

5
0

Microsoft will ‘lose developers for a generation’ if it stuffs up GitHub, says future CEO

thames
Silver badge

Re: puts a dampener on rival GitLab’s claim?

This is what I'm doing. Within the next couple of weeks I will be setting up an account at GitLab, but will still keep the GitHub repo. The project will simply be hosted in two places. If that works out well, then I may look for a third location as well. I will want to automate this with proper scripting first however so I don't have to do it manually.

My plan isn't to simply switch hosting providers. I did that once before when I moved from SourceForge to GitHub. What I intend to do is to have multiple mirrors where the project is hosted so that the loss of any one of them is not a major setback. There is no point in trying to do that after you have been presented with the choice of either accepting new terms of service or being locked out of your one and only account.

So I will be moving to GitLab, but I will still be at GitHub for now as well. This is what I would expect other people who are concerned about this to do as well.

The only question really is which one becomes the primary repo and which one becomes the secondary mirror. A lot of GitHub's value is in the "community" aspect of having the largest number of developers already active there. If the community becomes more dispersed then a lot of that value will fade away.
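The dual-hosting approach described above is easy to script. A minimal sketch, assuming an existing clone with GitHub as "origin" and made-up repository URLs for both hosts: giving "origin" two push URLs means a single push updates every mirror.

```shell
# Add both hosts as push URLs on "origin"; fetches still come from GitHub.
# Once the first --add --push is set it replaces the implicit push URL,
# so both URLs must be added explicitly.
git remote set-url --add --push origin git@github.com:example/project.git
git remote set-url --add --push origin git@gitlab.com:example/project.git

# From now on, one push updates every mirror.
git push --all origin && git push --tags origin
```

With this arrangement the choice of "primary" repo only matters for where issues and merge requests live; the code itself is identical everywhere.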

27
3

Microsoft commits: We're buying GitHub for $7.5 beeeeeeellion

thames
Silver badge

Re: Shite

@J. R. Hartley said: "Wonder which new and exciting way they're gonna fuck it up."

They'll integrate it with LinkedIn, Skype, Azure and MS developer tools.

Their press release said: "... bring Microsoft’s developer tools and services to new audiences."

Expect to see Github features appearing which hook your code repos directly into MS Azure for deployment. Your Github rankings will be reflected in your LinkedIn profile. If you don't have a LinkedIn profile, one will be automatically created for you based on your Github data. Skype will be integrated into team meetings for projects. MS Visual Studio will have deep integration into GitHub beyond just being a Git client.

So, you'll still be able to use Github via the web interface and via the command line Git client, but every possible Microsoft service that can be integrated into Github will be to the degree that a software developer could work through the life of an entire project without ever leaving the Microsoft walled garden.

Microsoft just paid a staggering amount for Github (three times as much as press analysts were speculating) and they will be looking for ways to make that back. Introducing a variety of forms of vendor lock-in in order to sell other goods and services is the obvious choice here.

I'm looking into what is involved in setting up a Gitlab account. I won't pull my open source projects from Github, but I won't use it as the sole public repo any more. Just like a lot of Youtubers have come to the realisation that they need to diversify their options rather than being at the mercy of Youtube's latest policies, I'm going to make sure that I can pull the plug on Github at any time if necessary just like I did with Sourceforge.

P.S. Don't be surprised if Amazon come out with some sort of response to this.

45
0

Welcome to Ubuntu 18.04: Make yourself at GNOME. Cup of data-slurping dispute, anyone?

thames
Silver badge

@AC said: "As a random example, lets say you're a manufactuer that has a line of custom Linux laptops. Want really good support added to them for nearly no cost? Well then, send in ten or twenty thousand entries for your stuff, randomising things to look legit and using fake source IP info."

Or just send an email to Canonical telling them that you are a manufacturer who is planning on coming out with a line of custom Linux laptops and that you would like them to work with Ubuntu out of the box on launch. Then ask them if their developers would like some free laptops. They're happy to work with anyone who wants to support Linux.

However, just have a look at the type of information being collected. According to the story it just amounts to the following:

  • Ubuntu Version.
  • BIOS version.
  • CPU.
  • GPU.
  • Amount of RAM.
  • Partitions (I assume that is number and size of disk partitions).
  • Screen resolution and frequency, and number of screens.
  • Whether you auto log in.
  • Whether you use live kernel patching.
  • Type of desktop (e.g. Gnome, Mate, etc.).
  • Whether you use X11 or Wayland.
  • Timezone.
  • Type of install media.
  • Whether you automatically downloaded updates after installation.
  • Language.
  • Whether you used the minimal install.
  • Whether you used any proprietary add-ons.

There are basically two types of information there. One is some basic parameters such as RAM, CPU, GPU, hard drive size, etc. That tells you what you should be targeting in terms of hardware resources, and so whether your desktop (e.g. Gnome) is getting too fat for the average user (as opposed to the average complainer, at which point you are far too late to be addressing the issue).

The other is what install options people changed compared to the default install. If most people don't pick live kernel patching, then you know not to make that option the default. If a lot of people are selecting Urdu as the language, then you might want to make sure that language has better default support. Etc.

Ubuntu will publish this information publicly. Personally I am looking forward to the RAM and CPU type data, as that will give me information on what CPU features to target in certain software I have been working on. I have been relying on Steam data, but that may not be very representative of the science and engineering field which my software relates to.
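For anyone who wants to verify the list above for themselves, the collection on 18.04 is handled by the ubuntu-report tool, and you can inspect exactly what would be sent before deciding. A sketch, assuming the ubuntu-report command line interface as shipped with 18.04:

```shell
# Print the JSON report that would be uploaded, without sending anything
ubuntu-report show

# Decline: send only an opt-out marker instead of the hardware report
ubuntu-report -f send no
```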

1
0
thames
Silver badge

@doublelayer - They'll use the data to decide what ought to be the defaults for the next release. They will be making decisions based on actual data rather than someone's wild guesses. A major problem has been that developers often assume that the sort of hardware they have on their desks is typical of what everyone else has.

In the past they've had to make decisions on things such as "should the default install disk be CD sized so that it will work with PCs which have CD drives but not DVD drives, or should it be DVD sized so that the user is less dependent on having network access at the time of installation to install stuff that wouldn't fit on the CD?".

They've also had to worry about things like graphics support, what CPU optimisations to compile in as default (some packages have optional libraries for older CPUs), etc.

Apple know exactly what hardware they ship. Microsoft can simply assume that the non-Apple PC market is the same as the Windows market. Linux distros can't make these assumptions so they either just pull numbers out of the air, use opt-in surveys which are usually wildly unrepresentative of the user base, or do something like this.

Before this they had a detailed opt-in hardware data survey which so few people bothered with that it was pretty much useless. The new one collects far less information, but does so from a sample which will likely be representative of the overall user base.

4
0

I got 257 problems, and they're all open source: Report shines light on Wild West of software

thames
Silver badge

The article seems to be mainly buzzword bingo.

* unpatched Apache Struts.

* Heartbleed

* GDPR

* IoT security

None of these have anything to do with license terms; they are all about keeping your systems patched and up to date.

However, the real issue in that case is whether you are talking about vendor support of software you have bought, or whether you are talking about supporting software you have developed in-house (or via a contractor).

In the case of vendor support, the license is irrelevant to this issue. The real issue would be the quality of service provided by that vendor. Whether that vendor is Red Hat or Microsoft, the issue is the same.

In the case of self-support of something you developed yourself (or paid a contractor to develop for you), then you need to handle this aspect yourself.

In the general case of security patches for open source libraries and components though, if all of that came from the standard repos of a Linux distro then the distro manages all of this for you. They have security teams and their distro comes with an updating system that manages security patches. They can't make you apply those patches though, that is up to you being willing to do so and having the procedures in place which prevent the issues from being ignored.

This though is really just another variation on the vendor support question, with the license being irrelevant except that you now have a variety of competing vendors all supporting very similar systems to choose from.
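On a Debian-family distro, the distro-managed patching described above can even be made automatic for security updates. A minimal sketch, assuming Debian/Ubuntu package names:

```shell
# Install the tool that applies security updates on a schedule
sudo apt install unattended-upgrades

# Enable it (writes /etc/apt/apt.conf.d/20auto-upgrades)
sudo dpkg-reconfigure -plow unattended-upgrades

# Dry run: show what would be patched right now, with verbose output
sudo unattended-upgrade --dry-run --debug
```

Whether the patches are then allowed to apply automatically is a policy decision, which is the point made above: the distro provides the machinery, but using it is up to you.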

17
2

S/MIME artists: EFAIL email app flaws menace PGP-encrypted chats

thames
Silver badge

Check the List

The authors have a list of email clients they tested where they state which ones had a problem, and which ones didn't.

My email client of choice - Claws Mail - was listed as not vulnerable to either attack.

Claws looks very old style, but it is fast, reliable, and has all the features I want. I have used Claws for years and highly recommend it.

7
0

Ubuntu sends crypto-mining apps out of its store and into a tomb

thames
Silver badge

Re: Got to give this punk some credit.

AZump said: "never saw a Linux distribution swap before Ubuntu, let alone suck up memory like Windows"

I'm more than a little sceptical about that claim. I'm writing this on an Ubuntu 16.04 desktop that has been doing software development and web browsing all day long. Amount of swap being used - zero. That is typical for a system that I use on a daily basis, and I see the amount of usage regularly as it is displayed incidentally by certain tests I run as part of software development.

About the only thing that pushes it into using swap is heavy memory usage from applications that are allocating a large proportion of available memory (e.g. very large arrays in the software I am working on). And that is exactly what happens in every other modern operating system since that is why swap exists in the first place.

If you want to make comments like that, I would suggest doing it on a non-IT related web site where you are less likely to run into people who actually use this stuff on a daily basis.

22
0

Prez Donald Trump to save manufacturing jobs … in China, at ZTE

thames
Silver badge
Boffin

ZTE is also a major customer of certain US chip manufacturers, particularly in ZTE's networking gear. For example Acacia gets 30 percent of their revenue from ZTE. Acacia's share price went into free-fall when the news came out. The same is true for a bunch of other American suppliers.

ZTE can source many components from other places, but will have difficulties doing so with some.

However, this situation is a long term problem for US companies who supply anybody outside the US. While their name might not be on the box, a lot of the value of what is nominally Chinese kit is actually made in the US, South Korea, and Japan. The Chinese assemble it and put a "made in China" label on it, but the majority is actually made elsewhere.

The Chinese government's current economic plan is to design and build more of this high-tech chippery in China. If the US comes to be seen as too risky a supplier, that will only accelerate this trend in China and the rest of the world, to the detriment of US business and the US economy.

I should note that many European defence firms go to great lengths to avoid American suppliers because of the risk inherent in buying from the US. Look for "ITAR-free" suppliers as an example of this.

43
0

UK.gov expects auto auto software updates won't involve users

thames
Silver badge
Stop

It doesn't take much imagination to see how this could go horribly wrong.

Now waiting for a nation state to infiltrate the over the air update system and deliver a patch which bricks every vehicle in the country simultaneously, causing transport, the economy, and society in general to collapse with no practical means of recovery.

Meanwhile the government will defend their plans on the grounds that they just make policy and law, but it's someone else's role to be held responsible for the consequences of it when the government's plans invariably go wrong.

3
0

Pentagon in uproar: 'China's lasers' make US pilots shake in Djibouti

thames
Silver badge

Laser cannon and sonic death rays.

As the story notes, loads of these incidents happen in the US all the time, caused by scrotes with laser pointers. I don't see why Djibouti would be any different, and I suspect that imports of laser pointers there see a lot less regulation in terms of power and frequency.

This smacks of the American story about the Cuban sonic death ray supposedly being directed at their embassy personnel in Havana. That would be the Cuban sonic death ray that no other country finds credible. Canada has investigated it and come to the conclusion that the sonic weapons theory isn't plausible. The Americans none the less persist in blaming Cuba.

43
19

If you're a Fedora fanboi, this latest release might break your heart a little

thames
Silver badge

I have an AMD APU in the PC I am typing this on (CPU and GPU in one chip package). Before that I had always used NVidia graphics cards.

For my next PC I would definitely choose an AMD APU again. I have had zero problems with it in several years of use. It's fast, glitch-free, and reliable, as are the open-source drivers used with it by default (I'm using Ubuntu).

In contrast I always had some problems with NVidia graphics cards used with multiple Linux distros, especially when using the proprietary drivers.

Considering the AMD APU comes with CPU and GPU in the same chip package for considerably less money than I would have paid for a comparable CPU plus separate graphics card, it is pretty difficult to justify anything else for typical desktop applications.

I don't play video games so I can't speak to that field of use. I use mine for software development, full screen video playback, and web browsing. I have no complaints whatsoever about AMD APUs in my applications.

8
0

Leave it to Beaver: Unity is long gone and you're on your GNOME

thames
Silver badge

Re: New Linux poweruser here ...

I started trying to write a short summary of all the different issues, but realised there's really no way to cover even a fraction of them.

The short answer is that the basic idea is good, as it is more or less a clone of the init systems used by Solaris and Apple.

However, the implementation was sadly lacking, mainly because of what the Systemd developers were like. The Systemd developers didn't know how much they didn't know about all the obscure edge cases which exist in server applications, and wouldn't listen to the people who did know. When they made mistakes, they blamed other projects for having "bad" software, because well, Systemd is perfect so obviously the problem couldn't be there.

They also insisted that everyone else had to rewrite their own software to work "properly" with Systemd (mainly to do with network socket initiation on start up). The fact that this then made server applications incompatible with BSD and others without a lot of if-defs didn't go over well with the maintainers who were affected or with BSD people (the Systemd developers had no interest in working with the latter on these issues).

Debian had to ditch their project for a BSD based Debian distro version because they didn't have the resources to support two init systems (and all the resulting Debian specific init scripts) and the Systemd developers as mentioned above had no interest in working with the BSD people on this.

And since we are talking about Ubuntu in this story, I should also mention that the Systemd developers screamed much abuse at Ubuntu for not volunteering to be the guinea pig for the first commercial distro release of Systemd (no other commercial distro was using it at the time either). Ubuntu was bad, bad, bad, they insisted. The fact that Red Hat wasn't shipping it either at the time seemed to go right over the heads of the Systemd developers, the leaders of whom just happened to be Red Hat employees.

As for why Systemd got adopted by most distros is simple. It was backed by Red Hat and they have enough muscle in the Linux market to push through things they want. The same is true for Gnome 3 by the way.

If you are using a desktop distro that uses Systemd, or you are using bog standard server applications (e.g. LAMP, mail, SSH, standard database, etc.) then all of this probably doesn't make much difference. Your distro will have figured out the problems and fixed them. The distro that I'm using on my desktop to write this adopted Systemd a few years ago, and I didn't really notice any difference other than boot up taking longer (Systemd has the slowest boot times of any init system that I've measured).

If you are administering a complex server system, especially if you are using proprietary software that isn't packaged properly, then you have to deal with all the Systemd issues yourself instead of just hacking on an init script or installing a third party logging system. A lot of the complaints about it come from people who have to deal with this aspect of it.

8
0
thames
Silver badge

Re: Upgrade, but not right now?

@Notas Badoff: This is not a new policy; they did this with the 14.04 to 16.04 upgrade as well. Existing LTS users don't get upgraded until the first point release comes out (18.04.1). The point releases bundle up accumulated security and bug fixes so that new installations don't have to download them all again.

Normally by that time bug and security fixes related to a new release seeing first widespread use should be down to a trickle. This in turn means that LTS users will see fewer update notifications. If you are an LTS user, you probably care more about not having as many updates than you do about having to wait a few months before getting the next LTS. Non-LTS users on the other hand probably do want the latest stuff ASAP.

When the release does go out to existing LTS users, it won't go out to all of them at once. Instead, it will be trickled out to smaller numbers of users at a time over the course of a week or so. Thus even after the LTS upgrade cycle begins, some of those users will be waiting for a while.

If you are an LTS user but really can't wait, then you can force an upgrade now if you know what you are doing (there is a package you need to install which automates the Debian command line process to make it easier).
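For the impatient, the forced upgrade mentioned above looks roughly like this. A sketch, assuming an Ubuntu LTS install with `Prompt=lts` set in /etc/update-manager/release-upgrades:

```shell
# The package that wraps and automates the Debian command line
# dist-upgrade process
sudo apt install update-manager-core

# Upgrade to the next LTS now, rather than waiting for the .1 point
# release to trigger the upgrade notification
sudo do-release-upgrade
```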

3
0
thames
Silver badge

Re: Ooops they violated GDPR

Canonical are a UK company. I suspect they have heard of GDPR and know what data is personally identifiable and what isn't amongst the data they actually intend to store.

1
0
thames
Silver badge

Re: On the face of it

@I ain't Spartacus - "It's funny as a non-Penguiny person. I've not read as much about Linux of late, so was amused to see a review talking about people being sad to see the back of Unity."

The sort of person who is motivated enough to write a comment on an IT oriented web forum is generally not the typical user. There are loads of Unity users out there who are just using their PCs to get work done. Fans of the less commonly used desktops or distros seem to feel they need to slag off the major ones rather than promote what is actually good about their own. KDE versus Gnome flame wars for example go back to near the beginning of modern Linux desktop distros.

I ain't Spartacus said: "So when do I expect the article mourning the loss of systemd?"

Based on how these things tend to go, I expect we'll see that in about 10 years.

7
2
thames
Silver badge

Re: On the face of it

@K - Even the version numbers on your middle two examples are indistinguishable.

The reason that Ubuntu bailed out on Gnome 3 in the early days is that it had a very unstable UI that was not ready for prime time and the Gnome developers were no longer supporting Gnome 2. Quite a few people in those days thought that the Gnome project had committed collective suicide and would soon be an ex-parrot.

From that came Unity. It addressed the major usability problems with Gnome 2 (dock moved to the left and reduced use of vertical window space to work with modern display proportions, bigger dock icons, integrate the dock with workspaces, etc.) while keeping the keyboard short cuts and underlying assumptions as similar to Gnome 2 as possible.

After that the user facing stuff remained more or less the same with changes mostly just polishing what they had. The latter though did include a good deal of major work on the underlying bits and pieces to account for major changes in common PC hardware and driver support. The biggest example of the latter is the work they did for compositing desktops when the third parties Ubuntu had been depending on dropped work on their own support for older hardware.

And all that suited most Ubuntu users quite nicely. The Unity desktop worked and was based on sound ideas so why change it? Ubuntu started out as just a much more polished and more up to date version of Debian Gnome 2 and was very popular as that.

Several other currently popular desktops got their start in a similar way. Now however that the Gnome 3 developers have cut back on the crack smoking and have stopped changing how their desktop works every other release and have quite frankly copied some of the better parts of Unity, the reasons for continuing with Unity have to a large extent gone away and Ubuntu can go back to its roots of being a better (and with commercial support available) version of Debian.

Some of the major criticisms that I have of Gnome 3 at this time are that support for keyboard shortcuts is not as good as with Unity (this is the biggest complaint I have), the dock is not as well integrated with workspaces or application indicators, and the non-traditional workspace concepts (such as a variable number of workspaces and only linear navigation between them). I made very little use of Unity's HUD, so its loss doesn't bother me much.

Most of the complaints about "Ubuntu" on forums such as this one seem to come from people who are using third party derivatives with non-Unity desktops (I'll avoid mentioning any in particular to avoid flame wars). These non-Unity desktops are put out by community members rather than Canonical, and simply don't have the resources to put the same degree of polish into them that full time distro maintainers do. I've tried some of them and salute the volunteers who work on them for their effort, but I'm more interested in using my PC than in experimenting with desktops. As a result I will be using Gnome 3 after the upgrade notification comes in.

Existing users of Ubuntu will get the upgrade notification in July when Ubuntu 18.04.1 comes out rather than on release day. This is the same policy as was used with 16.04.

9
5
thames
Silver badge

They had one non-LTS version, 17.10, which used Wayland. Other than that, every official mainstream version of Ubuntu right from the beginning used X.

3
0

Russians poised to fire intercontinental ballistic missile... into space with Sentinel-3 sat on board

thames
Silver badge

What goes up, must come down (in pieces).

And meanwhile in Canada today's news headline is that the %@!#$%# Europeans are dropping another one of their left over missiles on us again, leftover toxic fuel and all. Savikataaq, the Nunavut Territory's environment minister, said: "It is a concern for us. No country wants to be a dumping ground for another country's spent rockets."

2
2

US sanctions on Turkey for Russia purchases could ground Brit F-35s

thames
Silver badge

Re: Garbage in, garbage out

The main value of Turkey to NATO these days is its position in the Middle East. American bases in Turkey are ideally situated to strike east into Iran or south into Iraq or Syria and Lebanon, and generally complement the US bases in Bahrain and Qatar.

The US bases in Turkey saw extensive use in the first and second Iraq wars, and in the war against ISIS in Iraq. Their key role in providing bases for aerial refuelling means that even aircraft based elsewhere depend upon them.

So long as the Middle East has oil, Turkey will be important to NATO.

8
1
thames
Silver badge

Re: "nd what's the problem with an ally (*) buying a potential adversary's kit?"

The S-400 system is not a specific missile and radar combination. It is an air defence system with a family of missiles and radars. What the Russians export is not necessarily the most advanced versions of what they used themselves.

As for why the Turks are buying them, they put out an RFP for an air defence system. Part of the requirement for any major Turkish defence contract these days is a degree of technology transfer to Turkish defence firms. The Turks are trying to build up their own defence industry. This by the way is why they are making parts of the F-35 as well as doing the engine overhauls. Turkey makes a major section of the fuselage, landing gear components, parts of the engine, electronics, sensors, and a whole range of other items. They are sole source suppliers of a number of pieces, so every F-35 built today is partly Turkish.

As for missiles, the Americans submitted a bid for the Patriot missile system, while the Russians submitted a bid for the S-400. However, the American bid did not include technology transfer, while the Russian bid did. Hence, the Russians won the contract. Toys were quickly ejected from the Americans' pram - they wanted the contract, but not on terms the Turks were willing to grant it on. The only thing that will satisfy the Americans on this one is for the Turks to buy Patriot missiles on terms the Americans dictate.

As for stealth fighters in general, the Turks are designing their own, with British and other foreign help. The UK has its own sovereign stealth aircraft technology which is as good as anything the US has, which is why the UK was invited to be the only Tier One foreign supplier for the F-35 (which caused the UK's own stealth fighter project to be cancelled). BAE is supplying extensive unspecified technology, and Rolls-Royce are supplying the engine technology licenses. The UK involvement has support from the highest political levels in the UK government. The Turkish fighter is scheduled to replace their F-16s and will supposedly first fly in 2023. The Turkish fighter will do the air-to-air fighting while their F-35 fleet will act as bombers/air support.

5
0

Kaspersky Lab loses the privilege of giving Twitter ad money

thames
Silver badge

Re: @Martin

On its own it might be remotely plausible as a "security" action. In the wider context, though, it fits in with American trade protectionism. Canadian steel and aluminum companies have also been labelled "national security risks" by the Americans. Bombardier is "bad" until they promise to assemble planes in the US, and then the trade complaint gets magically thrown out at the next hurdle.

I think the head of Huawei said something along the lines that being blocked from the US market feels much more relaxing, now that they know they don't have to worry about keeping the Americans happy any more.

44
1

Aw, all grown up: Mozilla moves WebAssembly into sparsely furnished Studio apartment

thames
Silver badge

Re: Hypervisor?

@Charles 9 said: "Because Google's strongest platform, Android, runs on ARM, as does Apple's iOS the #2 mobile platform."

Google's response to that was PNaCl, which was supposed to be a portable form of NaCl based on LLVM intermediate code. That wasn't any more successful because LLVM intermediate code isn't really suited to that.

By that point everyone had decided that ASM.js was a much better solution from a technical and practical perspective so Google threw in their (P)NaCl cards.

3
0
thames
Silver badge

Re: Hypervisor?

WebAssembly isn't a binary executable. It's a language which the browser runs through its normal JIT compiler before executing it. It is fundamentally no different from how all major web browsers currently run Javascript, except the browser doesn't have to do as much parsing before being able to use it.

To put it in simple terms, it's a development of Mozilla's ASM.js. ASM.js is a subset of Javascript which browsers can more easily analyse in order to execute it efficiently. It does this largely by jettisoning the dynamic features of Javascript in favour of using only those features which can be subjected to static analysis (everything can be resolved at compile time, nothing about how to run it has to be figured out at run time). As a result, C and C++ programs can be compiled to this subset of Javascript, which is then sent to the web browser to go through its normal parse, interpret, optimise, JIT compile phases.

WebAssembly is simply a more low level representation of the same sort of compiler output code that is ASM.js. It's not native executable code, but it does cut out a number of steps in the parse and JIT compile process. That means the web browser has to do less work before running the resulting code. Every web browser already had something analogous to WebAssembly in its Javascript compiler subsystem. However each browser had a different one with a different representation which wasn't compatible with how every other browser did it. WebAssembly provides a standard intermediate representation which is implemented in a compatible manner by every vendor.

Now instead of sending a browser normal Javascript source code (which may itself be the output of a compiler), they send WebAssembly and can cut out some of the intermediate steps. How the web browser handles it from there is up to each vendor. It could be interpreted, it could be JIT compiled immediately, or whatever. This should be chip architecture independent by the way.

Sending native x86 binaries over the web to execute in a sandbox on the other hand is what Google Chrome did with NaCl. That went over with developers like a lead balloon, and Google pulled the life support on it last year in favour of joining Mozilla in using WebAssembly.

12
0

Application publishing gets the WebAssembly treatment

thames
Silver badge

El Reg said: "The technology is a W3C standard, emerged from Apple and promises a secure sandbox running inside a browser."

That will come as a surprise to the people who actually developed WebAssembly. Here's one of the original announcements: https://brendaneich.com/2015/06/from-asm-js-to-webassembly/

Who: A W3C Community Group, the WebAssembly CG, open to all. As you can see from the github logs, WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks. I’m sorry the work was done via a private github account at first, but that was a temporary measure to help the several big companies reach consensus and buy into the long-term cooperative game that must be played to pull this off.

So far as I know, WebAssembly actually came out of primarily Mozilla's success with ASM.js, plus some of Google's work with the less successful PNaCl.

5
0

Linux Beep bug joke backfires as branded fix falls short

thames
Silver badge

Re: Of course it's not an important security issue

@Anonymous Coward said: "It's not on Windows."

Oh look, someone trolling anonymously. What a surprise.

Well guess what, it's not normally installed on Linux either, as you would know if you had actually read the story. It's a third party program that an administrator can install if he or she wants to, but very, very few actually do.

3
0
thames
Silver badge

Almost nobody even has beep installed.

According to Debian, only 1.88% of users have beep installed. Only 0.31% use it regularly. Apparently "beep" doesn't even work on most hardware. I suspect that the few people who do have it installed used it in a bash script somewhere years ago and forgot it. I checked my PC (Ubuntu), and it is not installed.

The best solution is probably to check whether you have it installed, and if you are one of the few who do, to simply uninstall it. If you are worried about some obscure script failing because it got an error when it tried to call beep, then perhaps symlink it to some fairly innocuous "do nothing" command, or possibly even a script which will write to a log somewhere to tell you when it was called.
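A rough sketch of that "do nothing" stub approach, shown against a temp directory so it can be tried safely (on a real system the target would be wherever your distro puts beep, e.g. /usr/bin/beep, and you would need sudo):

```shell
# Create a stub that makes no noise and just records each call in a log.
stubdir=$(mktemp -d)
printf '#!/bin/sh\necho "beep called" >> "%s/beep.log"\n' "$stubdir" > "$stubdir/beep"
chmod +x "$stubdir/beep"

# Any old script that calls it now fails silently instead of erroring out,
# and the log tells you who still depends on it.
"$stubdir/beep"
cat "$stubdir/beep.log"
```

On Debian or Ubuntu, `dpkg -l beep` will tell you whether it is installed at all, and `apt-get remove beep` gets rid of it.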

If I need to have my speakers on my desktop make any noise I use "espeak", which is a text to speech utility. There are other noise making utilities as well which unlike beep actually work on modern hardware.

20
1


The Register - Independent news and views for the tech community. Part of Situation Publishing