* Posts by thames

1125 publicly visible posts • joined 4 Sep 2014

This malicious PyPI package mixed source and compiled code to dodge detection

thames

Re: Why have pyc files in a package anyway?

The Python "interpreter" automatically compiles each source file as it is imported and then caches the compiled form as a ".pyc" file. This means that the second time it is executed it can skip the compile step and import the byte code binary directly. This can speed up start up time significantly. While the Python compiler is very fast, on large programs it can make a perceptible difference.

Because of this you don't actually need the ".py" file on imported modules if the ".pyc" file is already present.

This isn't something unique to Python, as many other languages have used a similar strategy.

Some people use this as a very weak form of copy protection so they can distribute Python programs to customers without giving them source code. That isn't what it was originally intended for, it's just a side effect of having a faster start up.

However, this does mean that there is a use case for having ".pyc" files rather than source in a package. This in turn means that having the standard installation tools exclude ".pyc" files would break at least some existing software out there.

The solution is to simply have the code analysis tools disassemble the ".pyc" files and analyze those (the output is like a form of assembly language). A disassembler comes as part of the Python standard library.
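
As a rough sketch of what such a tool could do, the standard library's `dis` module will produce a readable opcode listing from a code object (the source string and names below are invented for illustration; a real analysis tool would unmarshal the code object straight out of the ".pyc" file rather than compiling source itself):

```python
import dis
import io

# Illustrative source; in practice the code object would come from a
# ".pyc" file via the marshal module, with no source available.
src = "def greet(name):\n    return 'hello ' + name\n"
code = compile(src, "example.py", "exec")

# dis.dis recursively disassembles nested code objects as well.
buf = io.StringIO()
dis.dis(code, file=buf)
listing = buf.getvalue()

# The listing exposes opcodes such as LOAD_CONST even with no source.
print("LOAD_CONST" in listing)  # → True
```

The same listing can then be scanned for suspicious patterns (dynamic imports, exec calls, and so on) just as source code would be.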

Python Package Index had one person on-call to hold back weekend malware rush

thames

Re: Difference between PyPi and NPM?

It's apparently a typosquatting attack. The malicious author creates a new package whose name is very similar to that of a commonly used legitimate package.

When a developer who wants to install a legitimate package misspells the package name he may get the malicious one instead. It then gets installed and used in a project which the developer is working on.

The target seems to be web developers. When the victim tests his project with his web browser, the malicious package injects some JavaScript which looks for bitcoin addresses in his browser's clipboard. It then changes the address to the malicious author's own so that any bitcoin transactions go to the attacker's own wallet.

The attackers are using automated scripts to create hundreds of new package names which are very similar to legitimate ones, counting on human error for one to occasionally get picked due to a typo.

This is a problem inherent to any repository which is not manually curated, regardless of the language.

The most straightforward solution is to only install packages from your distro's repos instead of directly from PyPI.
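
The lookalike-name trick can be sketched with the standard library's `difflib`; the package names below are invented examples, not real malicious packages:

```python
import difflib

# Hypothetical well-known package names.
popular = ["requests", "numpy", "flask"]

# A developer fat-fingers "requests". A similarity check shows how close
# the typo sits to a legitimate name -- exactly the confusion attackers
# count on when they register lookalike packages in bulk.
typo = "reqeusts"
matches = difflib.get_close_matches(typo, popular, n=1, cutoff=0.8)
print(matches)  # → ['requests']
```

An installer-side guard could use the same check in reverse: warn when a requested name is suspiciously close to, but not equal to, a popular package.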

Double BSD birthday bash beckons – or triple, if you count MidnightBSD 3.0

thames

OpenBSD 7.3 won't install in VirtualBox

I installed both FreeBSD and OpenBSD in VMs on Tuesday. FreeBSD installed without any problems.

OpenBSD 7.3 however crashes during the installation process. I eventually ended up installing 7.2 and then did an upgrade to 7.3, which went fine.

I just repeated the attempt with a new download a few minutes ago before making this post. It's not clear what the problem is. The VM is VirtualBox 6.1.38 on Ubuntu 22.04.

I install all the major Linux distros and BSDs in VMs for software testing, and this is the only one which is giving me this sort of problem. Aside from that, it worked fine once it was installed and running.

One thing that I did notice is that in OpenBSD the Python version has been upgraded to 3.10 from 3.9, while with FreeBSD it remained unchanged at 3.9. This isn't a problem (I welcome it in fact), but it's worth noting that a change like this has been made in a point release.

TikTok: Is this really a national security scare or is something else going on?

thames

Re: TikTok is a smokescreen

I've just skimmed over the actual proposed legislation, and TikTok isn't even mentioned. What it is is a law to allow the US president to arbitrarily ban pretty much anything involved in communications if he doesn't happen to like it. The VPN industry are apparently particularly worried.

Covered are:

  • Any software, hardware, or other product which connects to any LAN, WAN, or any other network.
  • Internet hosting services, cloud-based services, managed services, content delivery networks.
  • The following is a direct quote: "machine learning, predictive analytics, and data science products and services, including those involving the provision of services to assist a party utilize, manage, or maintain open-source software;"
  • Modems, home networking kit, Internet or network enabled sensors, web cams, etc.
  • Drones of any sort.
  • Desktop applications, mobile applications, games, payment systems, "web-based applications" (whatever those are interpreted to mean).
  • Anything related to AI, quantum cryptography or computing, "biotechnology", "autonomous systems", "e-commerce technology" (including on-line retail, internet enabled logistics, etc.).

In other words, it covers pretty much everything in the "tech" business. Singling out TikTok in particular is nothing but a red herring meant to divert attention from what is actually going on.

The bit covering "open source software" is particularly troublesome. People running open source projects may have to seriously think about moving their projects outside of US influence.

thames

Re: "Nations are one by one banning it from government-owned devices"

If there are genuine security concerns then the only apps which should be present on government owned devices are those which have gone through an official security review and been approved as valid and necessary for the device user to perform his or her job.

And for the 95 per cent of the world who aren't the US, Facebook, Twitter, and the like are equally as problematic as TikTok and for the same reasons.

Putting out a blacklist is pointless, as anyone with any knowledge of the subject would know. Lots of apps rely on third party libraries which have data collection features built into them, it's part of their business model. This data is then sold to data brokers around the world with few or no controls over what is done with the data or who it is sold on to. There are so many apps in existence that blacklisting is an exercise in futility.

Don't allow anything on government devices which has not gone through a security review or which hosts its data outside of one's own country. The same rules should be applied to any business which handles matters which have security implications.

I don't know of any business which allows individual users to install whatever they want on company-owned PCs, so why should government phones be any different?

The same thinking should be applied to the OEM and carrier crapware that gets pre-loaded onto phones as well. There should be no pre-loaded apps beyond those which have been approved.

Chinese web giant Baidu backs RISC-V for the datacenter

thames

I suspect that Baidu have a list of things they are interested in using RISC-V for, but are not making solid commitments to any in particular on any specific time line until they've done some research and testing.

Overall though, I expect them to shift to RISC-V over a period of years. Anything else is too much of a risk given the current international environment.

We can probably expect to see India going down the same road for the same reasons, just a few years behind though.

Critical infrastructure gear is full of flaws, but hey, at least it's certified

thames
Boffin

Let's look at a few CVEs

I had a look at the actual CVEs for kit that I'm familiar with, and I'm not too worried by what I saw.

I'll take Omron as an example because their kit is the most mainstream of that listed. They had three CVEs listed against them. One CVE was for passwords saved in memory in plain text. However, in decades of doing PLC programming I've never seen the password feature used even once on any PLC. It's something which a few customers want, but it just isn't used by most. Not all PLCs even have a password feature. In practical terms there's nothing to stop someone from simply resetting the PLC to wipe the memory and loading their own program anyway.

The other two CVEs basically amount to the fact that user programs are not cryptographically signed. That's no different from the fact that I can load programs on pretty much any PC without those being cryptographically signed either. I can write a bash script and run it without it being cryptographically signed. Accountants can write spreadsheets and run them without them being cryptographically signed. Saying that PLC programs should be cryptographically signed is really stretching things a bit.
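
To be fair to the argument, what those CVEs are asking for isn't technically hard; a signed-program check can be sketched in a few lines. The key, tag scheme, and "program" bytes below are all made up for illustration, not anything a real PLC uses:

```python
import hashlib
import hmac

# Hypothetical shared provisioning key and program image.
key = b"plant-provisioning-key"
program = b"LD X0 / OUT Y0"  # stand-in for a PLC program image

# Signing: compute an authentication tag over the program bytes.
tag = hmac.new(key, program, hashlib.sha256).hexdigest()

def verify(prog, tag):
    # A controller enforcing signing would refuse to run any program
    # whose tag does not verify against the shared key.
    expected = hmac.new(key, prog, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(program, tag))      # → True
print(verify(b"tampered", tag))  # → False
```

The hard part isn't the cryptography, it's key management across decades-long equipment lifetimes, which rather supports the point about OEM support periods.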

The main real world security vulnerabilities in industrial control systems have been bog standard Windows viruses, root kits, and the like in PC systems running SCADA software.

My opinion of security for industrial equipment is that the OEMs are not security specialists and won't get it right, so they're better off not trying. Also, the in-service lifetimes of their kit can be measured in decades, which is far beyond any reasonable support period.

More realistically, isolate industrial kit from any remote connections. If you need a remote connection, then use IT grade kit and software as a front end to tunnel communications over and get the IT department to support it as they should know what they're doing, unlike say the electricians and technicians who are often the ones programming PLCs.

Expecting non-security specialists to get security right 100 per cent of the time is a policy that can only end in tears.

Havana Syndrome definitely (maybe) not caused by brain-scrambling energy weapons

thames

Already solved in a Canadian investigation made at the time in question

Canada investigated the problem and fairly quickly found it was due to excessive use of pesticides during the zika virus outbreak in the Caribbean at the time. A US contractor from Florida was used to fumigate the embassies and staff housing with organo-phosphate pesticides.

Due to the panic over zika at the time (the virus was causing serious birth defects), the fumigation was carried out much more frequently than normally recommended. The result of this was that people were exposed to toxic levels of pesticides.

Blood samples of Canadian diplomats showed above-normal levels of organo-phosphate pesticides. The symptoms associated with this are consistent with those associated with so-called "Havana Syndrome", including hearing sounds that aren't there and the rest. Examination of the patients also found nervous system damage consistent with pesticide poisoning. Organo-phosphates affect the nervous system (some types of organo-phosphates are used as military nerve gas), so effects on the brain are entirely to be expected.

The US government were aware of the Canadian medical investigation but chose to ignore it. Hypothetically this may have been due to concerns about the finger of blame (and lawsuits) coming back on the US officials who approved the use of excessive amounts of pesticides.

By order of Canonical: Official Ubuntu flavors must stop including Flatpak by default

thames

Re: Who cares?

As I mentioned in a post above, the issue is with respect to the terms and conditions for using the "Ubuntu" trademark and financial support in the form of free services. If you want to use the trademark and get official help, then you need to follow the guidelines.

thames

Re: Software Freedom

Derivatives are free to do what they want with the software. If they want to use the "Ubuntu" trademark or get free infrastructure and other benefits though, then they have to stick to Ubuntu policies with regards to official "flavours".

Other distros do take the Ubuntu software and don't follow Ubuntu guidelines, but they don't get to call themselves "Ubuntu".

Try starting your own distro and calling it "Debian" without Debian's permission and you may find that Debian the organization has a sense of humour failure. The same goes for Red Hat or Suse.

Ubuntu are probably the least restrictive of the major distros when it comes to derivatives.

thames

Re: future of apt on Ubuntu?

Aside from Firefox, Snap seems to mainly have replaced PPAs. It used to be that if you wanted the newer version of some obscure package you needed to find some potentially dodgy PPA and install that.

Now that's done through Snap, there's an official Snap store, and Snap packages are confined so they have limited access to resources.

It's useful for certain things, such as allowing one Firefox package to run on all Ubuntu versions instead of rebuilding it for each version. It's also good for obscure packages that need to be kept up to date.

On the other hand, it's probably not going to replace the bulk of Deb packages, as there's no reason to do so. I have a handful of Snap packages installed (e.g. the Raspberry Pi imager), and I normally look for a Deb first before falling back to a Snap. In some cases without the Snap I would probably have to build from source.

The main targets for Snaps are actually applications from proprietary vendors who traditionally had horrifically bad packages, and games, as game studios don't want to update their packages for each new release. You could also think of Snap as being an alternative to something like Docker when it comes to server applications.

Flatpak on the other hand seems to be pretty badly thought out. It only does GUI apps, doesn't handle server cases, and package management is pretty poor (as described in the article). Snap is what Flatpak should have been, and the only real reason why Red Hat and friends persist with Flatpak seems to be NIH. There's nothing to stop each distro from setting up their own Snap store the same way they do their Deb/RPM repos.

Could RISC-V become a force in high performance computing?

thames

Re: A mixed blessing?

Massive incompatible fragmentation outside of the core instruction set pretty much describes x86 vector support, and that doesn't seem to have hurt it any.

The big question is whether CPUs will be available on low cost hardware equivalent to a Raspberry Pi so that people can test and benchmark their code, or if you have to book time on the HPC system just to compile and test your code. That is what will make the difference.

If you are doing serious vector work you need to use the compiler built-ins / extensions, which are more or less half way between assembly language and C. Good vector algorithms are not necessarily the same as non-vector algorithms, which means you need actual hardware on your desktop for development. This is the real advantage that x86 and (more recently) ARM have, and which RISC-V will need to duplicate.

US and EU looking to create 'critical minerals club' to ensure their own supplies

thames

Re: What about Canada and Mexico?

It's about the electric car market. The US introduced highly protectionist subsidy legislation for the US domestic car market. The goal was to establish US manufacturing sites in the market before other countries got their foot in the door. This was in direct violation of NAFTA rules, and Canada and Mexico threatened retaliation.

Canada then offered the US a face saving formula of "access to critical minerals" in return for watering down the protectionist measures. What exactly that means is anybody's guess, as nobody was seriously talking about export bans to the US (or anyone else) anyway. However, Canada has been using the "critical minerals" phrase (very vaguely defined) in trade talks with a variety of countries (particularly in the Far East). Canada is also very good at finding and exploiting American political weak points, and found a formula that played to American fears about China. Mexico was brought into the plan and the two presented a united front that got the Americans to water down their protectionist measures.

The EU were already unhappy about the new American auto market protectionism and were also talking about retaliation in the form of action through the WTO. This is something which the US would be guaranteed to lose, but would take years to get a final decision on (assuming the US didn't simply ignore it). Once they saw the US reverse course in the face of threatened Canadian and Mexican retaliation, the EU demanded a similar deal. The same "critical minerals" red herring was brought into the discussion for the same reasons.

Biden is as protectionist as Trump ever was, even more so in some ways. This latest venture in the form of the "Inflation Reduction Act" is an absolute disaster for free trade and a major step towards state control of industry. International auto trade would be a major casualty of it if it isn't de-fanged.

Any talk about a "critical minerals club" controlled by the US (or the US and EU) is simply talk. None of the big exporters have any incentive to sell to anyone other than the highest bidder and really aren't interested in being used as cannon fodder by the great powers.

Oh, WoW: Chinese gamers to be cut off from Blizzard games next week

thames

Re: Cause and effect...

Blizzard wouldn't have left the contract renewal negotiations until the last minute, so they will likely have been negotiating for at least six months. The statement from NetEase in November would seem to indicate that talks were going nowhere by then.

I would imagine that it's all about money. NetEase have probably been getting bent over a barrel by Blizzard and want more money. Blizzard probably want to pay them less.

Given that there's been no announcement yet about who is going to replace NetEase, it sounds like Blizzard has had even less luck finding a replacement willing to accept the terms on offer. If it was NetEase who were the problem, then Blizzard would have found someone else by now.

Should open source sniff the geopolitical wind and ban itself in China and Russia?

thames

Re: Absurd

The article is based on the assumption that the US government will dictate terms to the rest of the world and everyone will fall in line. The world isn't like that. It's worth remembering by the way that the RISC-V organization moved out of the US specifically to get away from this sort of thing.

Once politically based license restrictions start there would be no end to them. Loads of people will start putting in license terms that say things like "cannot be used by anyone who does business with the US military". That's why this sort of thing would backfire on the US.

If you think that's a bit over the top, if you follow the links associated with the "advocate" promoting this idea, you will find that the licenses being promoted as being "ethical" do exactly that.

The Tornado Cash example was a complete red herring, as it's the Tornado Cash company who were being blacklisted for money laundering, not the source code. The Github repo is still on line, but the company's access to it has been frozen. There's nothing to stop someone else from forking the code and continuing on with it.

There's a good reason why proper Free Software has no restrictions on fields of endeavour. Once you bring politics into software licenses the entire field would Balkanise into incompatible licenses and the entire field of software would fall into the hands of a few big proprietary vendors who would base their business on having huge teams of lawyers who can navigate the licensing issues in each country.

Tech supply chains brace for impact as China shifts from zero-COVID to rampant COVID

thames

Not sure the evidence is there

El Reg referenced two studies. The link to one appears to be broken, but the Singapore study titled "Comparative effectiveness of 3 or 4 doses of mRNA and inactivated whole-virus vaccines against COVID-19 infection, hospitalization and severe outcomes among elderly in Singapore" had this to say:

"As BNT162b2 and mRNA-1273 were recommended over CoronaVac and BBIBP-CorV in Singapore, numbers of severe disease among individuals who received four doses of inactivated whole-virus vaccines or mixed vaccine type were too small for meaningful analysis." So, we may not want to draw too many conclusions from that study.

Another comparative study which is widely referenced is one in Brazil titled "Effectiveness of CoronaVac, ChAdOx1 nCoV-19, BNT162b2, and Ad26.COV2.S among individuals with previous SARS-CoV-2 infection in Brazil: a test-negative, case-control study"

June 01, 2022

Here's the link to the article in The Lancet (a major UK medical journal).

https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(22)00140-2/fulltext

It had this to say:

"Effectiveness against hospitalisation or death 14 or more days from vaccine series completion was 81·3% (75·3–85·8) for CoronaVac, 89·9% (83·5–93·8) for ChAdOx1 nCoV-19, 57·7% (−2·6 to 82·5) for Ad26.COV2.S, and 89·7% (54·3–97·7) for BNT162b2."

The vaccines referenced are CoronaVac, Oxford AstraZeneca, Johnson & Johnson, and BioNTech/Pfizer respectively.

In other words, the Johnson & Johnson vaccine is not very effective, but CoronaVac seems to be only marginally less effective than Oxford-AstraZeneca or BioNTech-Pfizer in terms of hospitalisation or death.

Efficacy against infection is less impressive, but the same is true for the rest of the vaccines as well.

There are two main components in your immune system which vaccines stimulate. Antibodies prevent infection, but they are short acting (weeks or months at most) and sensitive to changes in variants. T-cells prevent severe hospitalisation or death and are both much longer lasting and far less sensitive to changes in variants.

The big problem in China isn't that their vaccines don't work. The problem is that the older people are the ones who are least likely to have gone out and gotten their jabs and the younger people are the most likely, while in many Western countries it's the other way around.

What this suggests is that there will be plenty of symptomatic infections among the general population, but hospitalisation and severe disease are likely to be mainly in people who are unvaccinated.

Since in China the unvaccinated are mainly the elderly who are much less likely to be part of the working population, the economic effects are at best unclear.

Among the working population there may be plenty of short term work absences due to mild illness, but we saw the same in Western countries a year ago when the omicron wave swept through and that didn't shut down the economies there.

I'm making no predictions here, just pointing out that it's a bit soon to be predicting pandemic induced chaos in the Chinese economy as the evidence isn't there yet.

That doesn't mean to say that there won't be plenty of unfortunate deaths, but the kit you've got on order from China may arrive safely after all.

Bill Gates' nuclear power plant stalled by Russian fuel holdup

thames

Re: Low enrichment?

HALEU (high assay low enriched uranium) is the name the US uses for uranium that has an effective enrichment of 5 to 20 percent.

Just straight low enriched uranium is 5 percent or below. US reactor designs use this.

Natural uranium is less than 1 percent. About 10 percent of the world's reactors are built to use this.

Their present plans to make it involve taking some of their existing stockpile of highly enriched uranium from weapons reactors, reprocessing the fuel, and blending it with low enriched uranium.

There is a finite amount of this available, only enough to do some demonstration projects.

For commercial supply they will have to build more uranium centrifuges, which are apparently years away from coming into service.
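
As a rough mass-balance sketch of the downblending described above (the enrichment figures here are illustrative assumptions, not numbers from the article):

```python
# Hypothetical enrichment fractions for a downblending mass balance.
heu = 0.90       # weapons-stockpile highly enriched uranium
leu = 0.0495     # low enriched uranium blendstock (just under 5%)
target = 0.1975  # upper end of the HALEU range (just under 20%)

# Fraction x of HEU in the blend satisfies: x*heu + (1-x)*leu = target
x = (target - leu) / (heu - leu)
print(round(x, 3))  # → 0.174
```

In other words only about a sixth of the blend is HEU, which is why a finite weapons stockpile still stretches to a few demonstration loads but not to commercial supply.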

thames

Re: Poor choice of fissile material?

The modifications can be as minor as a new fuel bundle design. There are many different fuel compositions incorporating thorium, all with various pros and cons. If you're satisfied with using some thorium in existing heavy water reactors you don't need much change to them. If you want a completely self-sufficient fuel cycle, then the reactor design itself has to be tweaked to optimize it for thorium. There are many other solutions that fall somewhere in between.

As I've said before though, there's currently no economic case for thorium fuel. If uranium gets expensive enough, then that will change.

Canada has done tests with thorium starting in the 1950s. Making use of thorium however involves re-processing spent fuel, otherwise it's a waste of time. At present it's cheaper to just use a once-through uranium fuel cycle and store the spent fuel until fuel prices rise enough to make recycling worthwhile.

The reason the US is so fixated on thorium fuel at this time is because of their concerns about international nuclear proliferation with enriched uranium.

This has never been a concern for countries that don't use enriched uranium in their power reactors. They have looked at thorium purely from an economic standpoint, which at present doesn't justify its use.

Current US reactor designs and their derivatives (e.g. in Japan and elsewhere) are descended from naval power plants (nuclear submarines) where reactors had very tight space constraints.

CANDU and derivatives are descended from the joint Canada-UK nuclear weapons program in WWII. The current CANDUs and derivatives are direct descendants of the first Canada-UK weapons program reactor built during WWII north of Ottawa. This resulted in a completely different development path from nuclear power in the US right from the first criticality experiments onwards.

A lot of the assumptions that people have about nuclear reactors based on US experience simply don't apply in this case. The whole subject is very complicated and a lot of the real problems with thorium revolve around fuel composition, fabrication, and reprocessing. Many of the reactor designs you see promoted by start-ups are based on very complex proprietary fuel designs whose main purpose seems to be to create some potentially extremely lucrative vendor lock-in for refuelling and licensing.

thames

Re: Poor choice of fissile material?

Thorium will work as a fuel in modified CANDU style heavy water reactors, which is what Canada uses (as well as a number of other countries).

However, uranium is currently cheap enough that it's not worthwhile using thorium. India are working on it because they have lots of thorium but not much uranium and want nuclear fuel self sufficiency. Canada has done experiments on it to prove the technology, but the economics don't justify using it as a production fuel.

Thorium can be used as a fuel without any new or exotic technology, there's just been no reason to bother until uranium gets expensive enough. Despite all the hype, thorium is not a magic solution to any problems we actually have at this time.

GCC 13 to support Modula-2: Follow-up to Pascal lives on in FOSS form

thames

Opaque Types

The story missed one of the big features of the language, which was "opaque types". This was a different approach to solving the same problem that object oriented programming was trying to address at about the same time.

The name of an opaque type would be declared in the module interface, but any details of what it actually was would be hidden in the implementation, and so not visible to anything outside of the implementation (interface and implementation were defined parts of each module, and in separate files).

The interface would also declare functions which would take the opaque type as a parameter, and so allow you to manipulate it. Anything outside of the module however couldn't do anything with the type directly.

Overall it was equivalent to an object's attributes and methods, but defined at the module level. It differed from objects however in that there was no internal state to the module, you needed to pass around the opaque types explicitly.
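
A loose Python analogue of the pattern might look like this; unlike Modula 2, Python can't actually enforce the hiding, so treat it purely as a sketch of the idea (the counter type and function names are invented):

```python
# "Interface": a constructor plus functions that take the opaque value.
# Callers are never supposed to touch the representation directly.

def new_counter():
    # The representation (a tagged list) is an implementation detail
    # that only this module's functions know about.
    return ["counter", 0]

def increment(counter):
    counter[1] += 1

def value(counter):
    return counter[1]

# Caller's view: a handle passed explicitly to interface functions,
# matching Modula 2's stateless modules plus opaque types.
c = new_counter()
increment(c)
increment(c)
print(value(c))  # → 2
```

The explicit passing of the handle is the key difference from objects: the module holds no state of its own.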

I did several projects using TopSpeed Modula 2. It was probably by far the best compiler, IDE, and debugger available for MS-DOS PCs in that era. I had their C compiler as well.

If I had any criticism about Modula 2 it was that screen I/O was a lot of work due to the strong typing of the I/O. Every data type had its own output function, and even newline was a separate function. C's printf may be complex and sloppy, but it's also a lot less work to use.

I liked Modula 2 a lot at the time, but I don't think it's going to make a comeback. The industry went firmly down the object oriented route and Modula 2's opaque types are an interesting but forgotten side diversion.

CERN, Fermilab particle boffins bet on AlmaLinux for big science

thames

Been using it for almost a year.

I've been using AlmaLinux as a replacement for CentOS for automated software testing for almost a year now, and I've had no problems with it that I can recall. I can recommend it for that purpose without hesitation.

I was motivated to use it by CentOS dropping support of their then latest release. I had to either pick something new or drop Red Hat as a supported distro for my software project. AlmaLinux was a straight drop in replacement for CentOS.

I want to congratulate Lord Raglan and Marshal Saint-Arnaud for their success at Alma, hard fought it no doubt was. With that out of the way Sebastopol should be within their grasp before long. Hurrah!

US ends case against Huawei CFO who holed up in Canada for three years

thames

The view from Canada was a bit different.

As I pointed out in the comments to the previous story (which El Reg references in this one), the whole story was very badly reported on by the international press, who mainly just reprinted US official spin. The Canadian press attended the actual trials and their reports gave a very different picture.

To summarize a few points, the reason the US charges were constructed as "fraud" was to get around Canadian extradition law ("dual criminality"). Sanctions charges would have been tossed out by the court in Vancouver immediately, as Canada was still part of the European deal with Iran so no sanctions laws were violated in Canada. The US therefore constructed a very convoluted argument that "fraud" was committed because fraud is illegal in Canada, and they knew the judge in Vancouver was not able to take into consideration whether there was a genuine fraud case against Meng, just that fraud was also a crime in Canada and the US was charging her with that.

Meng's lawyers were able to obtain copies of documents directly from HSBC which contradicted versions provided by the US as evidence (they needed to show that they had some sort of evidence). The US versions of the documents turned out to have been edited by the US to remove significant exculpatory evidence. However, the extradition judge didn't consider these documents, as they were evidence of Meng's innocence and Meng's guilt or innocence could play no part in an extradition hearing which was only concerned with whether the right paperwork had been filled out and whether Canadian officials had behaved legally.

Canada is in the process of revising its extradition laws due to systematic abuse of the extradition processes by allies. This was already on the agenda before the Meng case and so had nothing to do with that. France in particular were notorious for extradition cases which turned out to lack substance upon actual trial. The extradition review is now back on the table after having been sidelined by the pandemic.

The Meng extradition case seemed to be on the point of collapse on abuse of process grounds when the US suddenly reversed course and decided they wanted a "deal" instead. Meng's strongest arguments against extradition had always been on abuse of process grounds, and the hearings on that were about to start when the deal to drop the extradition was reached. The evidence before the court showed that Canadian police and immigration officials had been doing illegal favours for their US counterparts, police were suddenly reversing their testimony and contradicting their written notes, and one of the key senior police witnesses had left the country (to go to Macau!) and had hired a lawyer to try to fight having to testify. There were a lot of people who may have been found to have been involved in a lot of unsavoury activities if the hearings had proceeded. This sort of thing probably goes on all the time, the police just weren't used to dealing with someone who had the money to hire the lawyers to fight it.

A series of senior retired Canadian diplomats and cabinet ministers had advised Ottawa that they had sufficient grounds to toss the case on final review (this would have followed a judge's decision). Trudeau however was desperate to avoid getting dragged into the issue because he had just taken a major kicking at the polls over the SNC legal scandal and wanted to avoid anything which might remind people of that, regardless of whether it was justified or not.

Canada had been pressing the US to make some sort of "deal". Trump however was not willing to accommodate him, although Trump's offer to China to release Meng in return for a good trade deal would likely by itself have provided grounds for Canada to refuse extradition at a later point in the process.

However, Trump was gone, and Biden was apparently more willing to be accommodating to Canada and so a deal was finally done. It was widely suspected that this was a quid pro quo in return for Canada not making a fuss over the cancellation of the Keystone XL pipeline.

Alibaba, Tencent enlisted to help sanction-weary China build RISC-V chips

thames

Re: RISC-V fragmentation

When you look at what RISC-V is currently mainly used for, which is embedded devices, and when you compare those to the ARM equivalents, you see that ARM is just as fragmented. It doesn't matter though, as they are single purpose devices.

When it could be an issue is when RISC-V starts being used in phones, servers, PCs, etc., where the user acquires software separately and wants to run it. There, ARM has only recently started sorting itself out after a good deal of effort put in by various Linux trade groups.

What they can do with RISC-V is learn from the mistakes that ARM made and design chips that adopt similar solutions to ARM for things like device discovery in servers.

I hope by the way that nobody is under the impression that there isn't fragmentation in the x86 or ARM markets because both of those require a programmer to jump through a lot of hoops and do a lot of testing if you want to use anything other than the lowest common denominator features.

AI giant Baidu shrugs off US chip export restrictions as having 'little impact'

thames

Re: No Surprise

Anyone in China who wants to get into the chip fab equipment business now has a guaranteed market protected from outside competition.

Give it five or ten years and the Americans will be filing trade complaints about unfair Chinese restrictions on imports of chip fab equipment.

Meta links US military to fake social media influence campaigns

thames

It's the same sort of reports that we get when talking about alleged Russian, Chinese, or other operations. In this case it just says "US" instead of "Russia". If it's good enough for the one set of reports it's good enough for the other.

We know the US engage in this sort of thing because they've admitted it in the past. The UK do the same by the way, using 77 Brigade (one of their main specialties being social media "influence operations").

As for what "associated with" means, without access to the relevant HR records Facebook can't exactly tell us whether the people involved are salaried civil servants, military personnel, military reservists on duty, or outside contractors.

I too would like to see more detail on this sort of thing, but among other things I imagine that managers at Facebook don't want to have their collars felt by the US authorities for revealing too many operational details.

Qualcomm faces fresh competition in world of Arm-based Windows PCs

thames

Windows on x86 is the new mainframe.

Apple has shown that it's possible to make an ARM CPU for PCs that has reasonable performance.

The issue with aiming for the Windows market is that the amount of third party software, especially business critical software, is far, far larger for Windows than for Apple. And that x86 software is embedded in running systems that are not easily replaced even if it is technically possible to port to ARM. Microsoft makes most of their Windows money from licensing and support contracts with businesses who are tied to x86 by all of this software.

Microsoft Windows on x86 is the new IBM mainframe. It may not be the future, but it's not going away any time soon, and it will continue to provide a steady cash stream to Microsoft.

LockBit suspect cuffed after ransomware forces emergency services to use pen and paper

thames

Re: At least they caught one

The gang seems to have been based in Ukraine and at least some of the others were arrested last year. Vasiliev seems to have evaded the initial rounds of arrests because he lived in Canada.

He's an idiot. He would have known that the authorities were after the gang when the others were arrested in Kiev, so he should have dropped all connections with it then and worked to cover his tracks. His second warning should have been when Canadian police raided him during the summer looking for evidence. He just kept at it though and there was loads of evidence lying about the place.

India's Home Ministry cracks down on predatory lending apps following suicides

thames

Is this related to the recent problems with organized IT crime elsewhere in Asia?

It would be interesting to know if these are the notorious Chinese crime gangs that have been operating out of Myanmar (Burma) and Cambodia which have also been in the news recently.

These crime gangs have been running rampant across China, Hong Kong, and much of the rest of east and southeast Asia with telephone scams, cryptocurrency scams, and pretty much everything else that can be done remotely. They get workers from all across Asia (including India) by promising them well paid jobs in legitimate businesses and then holding them prisoner until they make enough money from scams to pay back a specified "debt".

Most of them seem to operate from free trade zones in Myanmar, with Cambodia being another major location, but they also have offices in places such as Dubai.

The Chinese government have been trying to get to grips with them by telling expats in certain countries to return home and provide evidence that they are engaged in legitimate business. If they refuse then their property in China will be seized and life will be made difficult for their families (e.g. cut off from government benefits). They seem to congregate in countries which don't extradite to China (either because of no treaty or because the legal system doesn't work).

The main problem seems to be that in Myanmar at least the crime gangs seem to operate under the protection of the Myanmar military who get a cut of the action. The main operating location is in a free trade zone across the border from China. I think the free trade zones (which also have legitimate businesses) get Internet connectivity via China and so have access to Chinese IT hosting services and the like through various cut-outs and fronts.

I don't know if the connection is there, but it occurs to me that it may not be a coincidence that the problems mentioned in India are occurring at the same time as the other problems across east and south Asia. I suspect that countries across Asia may need to cooperate to squash this problem or else it will just move around from legal haven to legal haven if countries act individually to try to squash individual manifestations of it.

Westinghouse sale signals arrival of a new nuclear age

thames

Cameco have MOUs with several modular reactor design companies to supply fuel to them. There are fairly firm plans by governments in Canada to evaluate these designs and build one as a trial, followed by more if successful.

I suspect that Cameco may be looking to have some sort of organization which can take the results of these trials to other markets around the world and support these customers.

How Citrix dropped the ball on Xen ... according to Citrix

thames

Re: "We didn't mean to cause any harm. We wanted to be good citizens."

Microsoft HyperV is apparently a fork of Xen. However, Xen originated as a project at Cambridge University which was partially financed by Microsoft, so it's not as if they came along late in the game after Xen was already in the commercial market.

Some Hyper-V source code from Microsoft later ended up merged back into Xen. I can recall seeing it and the comments and variable names made its origin pretty clear (the "HyperV" name was all through it). This had its 5 minutes of fame more than a decade ago.

Cambridge's goal in the 1990s was to create what we would today call "cloud computing", and what we call Xen today was the hypervisor part of it.

Uncle Sam to unmask anonymous writers using AI

thames

Using AI to attribute anonymous texts to known authors is something that academics have been working on for years for historic texts. This in turn is an improvement on the older manual methods. A typical application would be for example to try to attribute an anonymous poem to a known author (e.g. Shakespeare, or one of the Greek classics).

If they are doing anything new in the project being described in the story, I suspect it will be in terms of trying to do this in a way which was more suitable for mass surveillance, or to try to do it in cases where the author was trying to conceal his writing style.

This last point, uncovering authors who were trying to conceal their style, may be why they are doing a joint anonymize/de-anonymize project. By pitting the two against each other they can get ahead and stay ahead in the AI arms race when it comes to de-anonymizing people.

thames

Yes, this is what it will mainly be used for. Someone is posting stuff somewhere on the Internet that some government wants to stop. They'll probably have a list of suspects but can't pin it down to a particular individual.

By running these posts through AI analysis and comparing them to existing samples from their suspects they'll be able to figure out who it is with a high enough degree of probability to make it worthwhile picking him up for interrogation, or just "disappearing" him.

To make this work you need a body of existing samples to compare to. Fortuitously for those seeking to apply this technology, the sorts of people most likely to be targets are also people who tend to write for a living - journalists, academics, etc.

AI as a means of enabling pervasive surveillance is just getting started, and we can expect to see a lot more of this sort of thing in future.

Kylin: The multiple semi-official Chinese versions of Ubuntu

thames

Re: "All the leading Western distros tend to focus on GNOME"

Distrowatch themselves say their rankings have nothing to do with usage or market share. Here's what they say about their list "they correlate neither to usage nor to quality and should not be used to measure the market share of distributions".

Of course if the Distrowatch list did correspond to how popular a distro was, it would put Ubuntu Kylin well ahead of SUSE for the past month, and between half and a third of the user base of Red Hat. Red Hat itself lags behind ReactOS.

The Distrowatch list is just a count of how many people clicked on the page for that distro. It's more of an indicator of interest by the sort of hobbyists who collect or play with Linux distros the same way that some people collect stamps.

There are also people who obsessively click on a distro's Distrowatch page to try to bump the ranking up, as also stated by Distrowatch "continuous abuse of the counters by a handful of undisciplined individuals who had confused DistroWatch with a poll station."

So, don't take the Distrowatch rankings too seriously.

thames

Excellent Review

That is a very detailed review and the author evidently put a lot of effort into it.

I suspect that the Kylin desktop UI won't be much more than another niche alternative outside of China unless someone makes a serious effort at increased support of other languages.

On the other hand, it's possible that their focus on Chinese language support will find them a niche in places that use either Chinese or languages which use a similar writing system.

The differences between the Latin alphabet and the Chinese writing system are quite profound, and as you noted they result in different UI design decisions. It might make for an interesting article if someone knowledgeable in the area were to write about how significant this is and what the implications are.

Boffins rate npm and PyPI package security and it's not good

thames

Not Impressed.

I can't comment on NPM, because I haven't used it, but I do have projects on PyPI. I had a look at the paper, and it's pretty clear that the "problems" listed are not in PyPI, but rather in Github.

To start with, they don't actually look at PyPI except to get a list of projects which they then look for on Github. There is no link between PyPI and Github. You can have packages in PyPI without having a Github account or any code in Github. They are two completely independent things.

Their scorecard is entirely based on the assumption that you do everything through Github and use all of its workflow features. If you use Github just as a place to publish code for the public, then you will get a low score. If you use all the Github bells and whistles and use them the right way, then you get a high score.

In other words, part of the score is based on results, and part of it is based on "process". And by "process" they only mean whether your process is conducted in Github rather than somewhere else.

A good example is "maintained". If a project doesn't get at least one commit per week to Github, then it is marked down. There's no reason why that should be a valid criterion. The project may not actually be unmaintained. It may simply be stable and isn't getting updates because there isn't anything wrong which needs fixing. Or you could be working away on new features, but Github is just where you publish the source code as opposed to the place where you actually work from.

This is why there are so many projects which score highly in terms of not having anything wrong, but most seem to have low scores in terms of making use of Github's automated work processes.

I have a Github account and I have packages in PyPI. Part of my work process is to push code to Github for source publishing and to upload packages to PyPI for users. I have my own testing and QA processes which I run on my own hardware as I have no intention of locking myself into Github. It's just a convenient place to host the source code for anyone who wants it. I have been planning to also push source to another Git repo aside from Github to reduce my dependency on them for some time, but I simply haven't got around to it yet.

Overall, I'm not impressed with the report.

P.S. "Standard" security mode (the most relaxed standard setting) in Firefox seems to give The Register fits and result in a page not found error. I can only post on this site by fiddling with the security settings and manually turning off tracking protection. I've no problems anywhere else. El Reg should get a "fail" on the testing and maintaining score card.

India signs local server-maker to build nodes for home-grown supercomputers

thames

From what little I've been able to find out about it, the Centre for Development of Advanced Computing (CDAC) are the designers and VVDN are just the contract manufacturer. VVDN were selected as they have the necessary manufacturing facilities.

The Rudra supercomputer will use dual socket Intel CPU servers with GPU acceleration, and an interconnect system called "Trinetra".

The individual servers are also intended to serve as the basis for conventional stand-alone commercial servers. The intention seems to be more that of using the HPC project to help advance India's commercial server manufacturing capacity rather than trying to be at the bleeding edge of supercomputers.

In other words, it's part of their economic development program, and should be evaluated in light of that.

SoftBank reportedly moves London IPO out of Arm's reach

thames

According to US financial sources, if ARM list in the US as their only listing they will need to move their headquarters to the US in order to qualify to be part of US indexes.

So, not listing in London probably guarantees an eventual move of the headquarters, and probably sooner rather than later.

It will also put ARM more firmly under US legal jurisdiction, which in turn will cause many of their non-US licensees to elevate their RISC-V contingency plans to priority one for anything not locked into Google or Apple.

In other words, it would be the start of a long, slow death of ARM in terms of any British connection.

Botnet malware disguises itself as password cracker for industrial controllers

thames

Has been a problem for decades.

This sort of thing has been very common with industrial control software for at least 20 years that I can recall. Downloads of password crackers and cracked versions of (otherwise very expensive) copy protected programming software have been widely known to generally come full of all sorts of malware.

That anybody would fall for this shows if anything the naivety of the targets.

The main reasons for needing password crackers by the way are:

  • Someone left the company on bad terms and put a password on some of the PLCs as a parting gift.
  • The project engineering department has a "toss the project over the transom" relationship with the maintenance department, and any drawings, passwords, and backups the latter received were not "as built".
  • The company bought some used machinery, and anybody who may have known what the password was is long out of the picture.

The above doesn't cover every reason, but it probably covers 99 per cent of cases.

Fortunately, passwords are only very rarely used on PLCs, as there's seldom any point to them. Out of many hundreds of PLCs that I've worked with, I can't recall seeing a password on any of them.

Any access control is usually handled by the fact that you typically need physical access to the PLC, a copy of the programming software, and a knowledge of how to use all of this in order to do anything with it. Some programming software uses access control passwords as part of the software rather than in the PLC itself.

Someone who was really determined to change the program in a PLC and had the physical access to it could just wipe the memory and reload a new copy of the program reconstructed from printouts.

Tech world may face huge fines if it doesn't scrub CSAM from encrypted chats

thames

Re: We have your children

Microsoft keep a database of hashes of illegal images which are used by various police forces to automatically scan computers belonging to suspects. Automatic scanning is the only practical way of searching a PC hard drive given the enormous size of drives these days.

Microsoft also provide a program to do the scanning. So far as I know, this is just more or less a re-implementation of the open source "findimagedupes" which can be found in many Linux repos. The original is written in Perl and has been around for decades.

Essentially the algorithm breaks the image up into blocks, converts each to B/W, and does various other things so that just altering shade or colour balance or cropping it slightly doesn't throw the algorithm off. It then converts the whole thing to a hash.

The default is to calculate all the hashes on the fly, but findimagedupes has an option to use a file of stored hashes in order to reduce the amount of work that has to be done on repeated comparisons.

I've used findimagedupes on large data sets of legal images of various sorts. On a small data set it is remarkably effective. As the size of the data set increases however, so does the number of false matches. I haven't done a statistical analysis on it, but as the number of images to be compared increases the false match rate also seems to increase exponentially.

The algorithmic match factors can be tweaked as parameters, with the default being a 95 per cent match. That seems to be the optimum between too many false matches and too many close misses.
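Just to illustrate the general idea, here's a toy average-hash sketch in Python. This is not the actual findimagedupes algorithm; the 16x16 block grid and the test images are arbitrary choices, and a real implementation also normalizes for cropping and colour shifts, which is where the harder engineering is.

```python
# Toy perceptual hash: downscale to a grid of block averages, then
# threshold each block against the overall mean to get one bit per block.
# Input is a grayscale image as a list of rows of 0-255 integer pixels.

def average_hash(pixels, size=16):
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            ys = range(by * h // size, (by + 1) * h // size)
            xs = range(bx * w // size, (bx + 1) * w // size)
            vals = [pixels[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return [1 if b >= mean else 0 for b in blocks]

def similarity(hash_a, hash_b):
    # Fraction of matching bits between two hashes.
    same = sum(a == b for a, b in zip(hash_a, hash_b))
    return same / len(hash_a)

# A test pattern and a uniformly brightened copy still match at the
# 95 per cent threshold, because the bits encode relative brightness.
img = [[(x * y) % 240 for x in range(64)] for y in range(64)]
bright = [[p + 10 for p in row] for row in img]
print(similarity(average_hash(img), average_hash(bright)) >= 0.95)  # -> True
```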

Some false matches are quite easily understood. Suppose for example you took a photo of yourself standing in front of a blank wall, and then turned around and took a second photo. The difference between your face and the back of your head is too small to really matter to the algorithm to avoid a match, regardless of how obvious it seems to you.

However, some false matches are completely inexplicable. There may be no points of similarity at all but it still comes across as a match. I suspect these may simply be actual mathematical hash collisions.

Overall findimagedupes is good if finding some matches is good enough for your purposes. Suppose for example you have a large collection of categorized WWII military photos and want to compare it to other similar but uncategorized collections so you can do a fast first pass sorting of these other collections based on near duplicates which may have been resized, slightly cropped, or otherwise altered. Some errors are acceptable in this sort of application as you are just looking for a starting point.

The big problem with the current proposal is that the application is so different from what the police are currently doing (all they need is one true match to kick off a manual investigation of the rest of the images) that I don't see how the idea will work.

To be useful for pre-scanning all messages, the false positive rate must be negligible or else the support system would be flooded with customer complaints. There must be hundreds of millions if not billions of images sent as messages each day. Even a tiny false positive percentage is a big number in absolute terms.
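As a back-of-the-envelope illustration of the scale problem (both numbers below are assumptions picked for illustration, not measured figures):

```python
# If a billion images a day are scanned and just 1 in 10,000 comparisons
# is a false match, the flood of wrongly flagged images is still huge.
images_per_day = 1_000_000_000   # assumed daily volume of image messages
false_match_odds = 10_000        # assumed: one false match per 10,000 images

false_positives = images_per_day // false_match_odds
print(false_positives)  # -> 100000 wrongly flagged images per day
```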

If someone has developed some sort of magic technology to get around all of these problems, I haven't heard of it yet.

Meanwhile for people who want to avoid the filters, all they would likely need to do is to put the images into an encrypted archive file before attaching it to the message, or even possibly just change the file extension from "jpg" to something else.

What to do about inherent security flaws in critical infrastructure?

thames

INCIBE-CERT Error

The INCIBE-CERT article referenced in the story has a rather glaring error in its description of Modbus/TCP.

"Change from master/slave to client/server. According to the new specification, Modbus devices will now no longer be the master and the slave, rather it will change to a more computer language and therefore they are called client and server. The client would correspond to the traditional master and the server would be the final device, previously called slave."

The Modbus/TCP spec already uses client/server rather than master/slave and has done so from the beginning. I just checked my copy of the spec (2006 edition) and it says client/server.

The reason that client/server was adopted as the terminology was to make it compliant with the terminology used by TCP, which in turn came from the analogy of a restaurant waiter and diners (server and clients) used in the IT industry to reflect the network configuration used there, where one server would have many clients. In an industrial network one client would have many servers (hence the master/slave analogy used).

Since the intention was to simply adopt office IT technologies wholesale instead of re-inventing them, the client/server terminology came with it.

The Modbus organization themselves have published a security specification which is referenced in the INCIBE-CERT article. INCIBE-CERT seem to understand normal computer security, but they're clearly not that familiar with Modbus itself.

thames

Re: it's not all bad

Relay logic and digital logic modules however were very limited in functionality and were very, very difficult to debug when it came to finding problems in the machine. There's also a lot of things they simply couldn't do. There's a good reason they fell out of favour even for very simple applications.

A lot of the reason you can have a plethora of inexpensive high quality goods in abundance these days is because of advanced industrial controls.

thames

Re: The past is the past

The AMD2900 series were bit slice processors, not micro-controllers. Each was a 4 bit vertical "slice" through a processor. You would create an actual CPU by combining multiple bit slice processors together with some glue chips.

Other PLC makers used them as well, including Allen-Bradley in their early PLC/2 series (which also had magnetic core memory).

Early PLCs used bit slice processors because the microprocessors of the day were too slow. As microprocessors became faster they displaced the bit slice processors in that market.

Before PLCs existed, mini computers were extensively used in some areas of industrial control. What the PLC added was an easy to use application specific programming language (ladder logic in most cases) and packaging in a format which allowed the CPU to be located with the machine and all its associated wiring instead of in a separate control room. This made it possible to use programmable logic in a wide range of industrial applications.

thames

Modbus "has no security" in the same sense that "JSON has no security". Both are at their heart just data formats. Transport security is something provided by the network layer.

Modbus/RTU started out as an RS-232/422 point to point serial protocol. There was no security because none was required. Transmission over leased line phone communications depended on the phone lines being secured by the provider.

Modbus/TCP was created by just taking Modbus/RTU and tunnelling it over TCP. Since Modbus is just a data format, it doesn't really matter how the message gets there.

The logical step to add security to Modbus/TCP is to just tunnel it over SSL or something similar. The Modbus messages won't care about that any more than the web page you are reading cares about whether it arrived by HTTP or HTTPS.

All that's really needed is for industry to agree upon what port to use and how to handle certificates. Modbus itself requires no changes.
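To make the "just a data format" point concrete, here's a sketch in Python of building a Modbus/TCP "read holding registers" request frame (the transaction id, unit id, and register addresses are arbitrary examples). The resulting bytes don't change whether they are then written to a plain TCP socket or to a TLS-wrapped one:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code 0x03 (read holding registers), start address, quantity.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0), byte count of
    # everything that follows (unit id plus PDU), then the unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x11, 0x006B, 3)
print(frame.hex())  # -> 0001000000061103006b0003
```

Nothing in the frame identifies the transport, which is why wrapping the connection in TLS (via something like Python's ssl module) needs no change to Modbus itself, only agreement on the port and certificate handling.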

Of course this will do little in practical terms for security, since most of the real world security exploits seem to revolve around finding a Windows PC somewhere on the industrial network and exploiting that through bog standard Windows vulnerabilities.

China rallies support for Kylin Linux in war on Windows

thames

Re: Kernel kicks

Every major distro does this. Stuff takes time to get accepted into the mainline kernel, years in some cases, so the distros pick up the required patches from the sources and include them in their distro kernel.

The optimizations for the latest Intel, AMD, and RISC-V models probably come directly from the relevant CPU makers. Eventually these will get mainlined and the distro won't have to do this.

What is being announced here is the formation of SIGs (Special Interest Groups) to allow companies who want to sell kit for use with OpenKylin to feed kernel patches and feedback to the Kylin organization. The announcement gives a list of companies who sell things such as laptops or RISC-V chip sets who will be participating. Many of these companies are not Chinese but sell to the Chinese market.

When distro representatives talk endlessly about "community engagement", it's the above sort of thing they are talking about.

One of the first RISC-V laptops may ship in September, has an NFT hook

thames

I suspect that the main market for the first RISC-V laptop is going to be software developers who want a native platform on which to test and debug the software needed to help create the second RISC-V laptop.

Why Wi-Fi 6 and 6E will connect factories of the future

thames

I tried to have a look at the actual report. The publicly available information is so vague as to be totally meaningless. Actual details are only available to members.

The one example they did make at least vague reference to sounded like an oil tank farm. This is such a niche and atypical industrial application that it provides no guide to whether their concept has wider application in industry.

There are existing wireless industrial applications, but they tend to be either things like remote access to installations that are scattered over a wide area (e.g. tank farms, water or petroleum well sites, etc.), or mobile applications (lift trucks in places such as warehouses, mine vehicles, etc.).

Moving from what are often bespoke and proprietary wireless systems to some new industry standard may have some advantages, but it's far from a game changer in industry overall.

CISA and friends raise alarm on critical flaws in industrial equipment, infrastructure

thames

Re: Hmm

You need to find out what they mean by "isolate", whether it's an actual air gap (probably not) or just separate networks (more likely).

When you say "the PLCs need to send a sizeable stream of data to the SQL servers over in the office network", is it the PLCs which are talking directly to the database servers, or is there one or more PCs in between which runs software which polls the PLCs and then writes to the database servers? I suspect it's the latter.

Alternatively, it may be the PCs which are the thing which needs to be isolated from the office network, especially if they are running some sort of SCADA or HMI software which is collecting the data as well as doing its main job.

In either case you may need some sort of firewall and proxy server between the industrial network and the office network. Think of it as being a simplified version of what is used to isolate the office network from the Internet (I'm assuming you are doing this). That way an attacker who gets into the office network doesn't have direct access to the industrial network (the thing which is connected to the part of the company that actually makes money).

If you google for "scada firewall proxy server" (or something like that) you should be able to find plenty of examples.

Ubuntu releases Core 22: Its IoT and edge distro

thames

Re: Snap....

Most Linux users have probably never built a package from source in their entire life, and many wouldn't know how. Most use pre-built packages from their distro's repositories.

There are source based distros, but the user base is small. If you like building packages as a hobby that's fine, but not many people do.

Snaps are well suited for the market that Ubuntu Core is addressing, which is dedicated "appliances" which do one thing and need regular completely automated atomic updates with no user interaction.

Ukraine's secret cyber-defense that blunts Russian attacks: Excellent backups

thames

Re: Insightful

Perhaps the production environment needs to be designed from the ground up with recovery in mind rather than it being tacked on as an afterthought. And perhaps recovery needs to be tested regularly, with objective measures of how well it worked.

Like you said, eventually an attack will get through. The real test of your preparedness comes in terms of how fast you can recover from it.

Sick of Windows but can't afford a Mac? Consult our cynic's guide to desktop Linux

thames

Installation Time

Just as a data point, I decided to just now time how long it takes to install Linux in a VirtualBox VM. I picked Ubuntu 20.04 as that was what I had on hand. I turned off the VM's network connection so that the test didn't include time to download updates (which would depend on connection speed). I used the default installation options. The VM was running on a PC with an SSD.

Installation mainly consisted of giving it a user name and password, selecting time zone from a map, and clicking on "continue" to accept the defaults for everything else.

From the time I clicked on "start" to boot the VM until it was fully installed, including the default set of applications, and rebooted (one reboot), was 10 minutes and 8 seconds.