* Posts by thames

1124 publicly visible posts • joined 4 Sep 2014

Sick of Windows but can't afford a Mac? Consult our cynic's guide to desktop Linux

thames

Re: Humorously Scare People Away

I've been using Ubuntu on my desktop ever since Mandriva started going down the drain (new management at the company). I can't recall it ever "breaking".

As for ChromeOS, that's more or less Linux but with Google being root instead of you. You can do the same thing by installing Debian and not giving the user the root password.

thames

Re: Control Your Own Upgrades

@ThatOne said: "200+ distros, one would assume they all have some unique characteristics, but unless you spend months reading through scattered, unverified and mostly outdated internet resources, how on earth am you supposed to know what those are ... "

Perhaps someone ought to write an El Reg article on which Linux distro to pick ...

thames

Re: Not be happy ... to reinstall my OS from scratch every year or two

If your hobby is re-installing operating systems, that's fine, I'm sure there are worse hobbies to have.

If it's what you like to do, then set up a series of VMs on your PC and install into them to your heart's content. You can then install the most obscure hobbyist distros, BSD, and various other operating systems without having to keep separate hardware around for them.

I keep a variety of VMs and hardware for testing software and so have a fair bit of practice at this, even though it's not my idea of entertainment. Any of the major Linux distros installs with pretty minimal effort if you accept the defaults.

Broadcom's stated strategy ignores most VMware customers

thames

Re: So, farewell then, VMware Fusion for macOS. Oh, you're still here …

I don't know what you've been doing with VMware, but if you are using it on an Apple PC as opposed to a server then you may wish to have a look at VirtualBox. I've been using it for a number of years on a Linux PC to run a variety of operating systems for testing purposes with few problems.

I use the GUI to install and configure the VMs. I then use the VBoxManage command line utility to roll back, start up, monitor, and shut down VMs automatically controlled by bash scripts.

You can control pretty much everything about the VM via VBoxManage. As a very simple example, if all you want to do is make launching the VM clickable with a mouse, you should be able to do this easily, provided you can make a shell script clickable via an icon (which is easy with Linux).
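I drive mine from bash, but the same idea translated to Python (which makes the command construction easy to test) looks something like the sketch below. The VM name "testvm" is made up; substitute your own:

```python
# Sketch of driving VirtualBox VMs from a script via VBoxManage.
# The VM name "testvm" is a hypothetical placeholder.
import subprocess

def vbox_cmd(action, vm="testvm"):
    """Build a VBoxManage command line for a few common actions."""
    commands = {
        "start":    ["VBoxManage", "startvm", vm, "--type", "headless"],
        "poweroff": ["VBoxManage", "controlvm", vm, "acpipowerbutton"],
        "restore":  ["VBoxManage", "snapshot", vm, "restorecurrent"],
        "status":   ["VBoxManage", "showvminfo", vm, "--machinereadable"],
    }
    return commands[action]

def run(action, vm="testvm"):
    """Execute one of the above actions and capture its output."""
    return subprocess.run(vbox_cmd(action, vm), capture_output=True, text=True)

# Typical automated test cycle: run("restore"); run("start"); ... run("poweroff")
```

The same subcommands (startvm, controlvm, snapshot, showvminfo) are what you would type at a shell prompt; a script just strings them together.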

Minimal, systemd-free Alpine Linux releases version 3.16

thames

Re: Thank you for a nice article

I installed 3.16 shortly after it came out, replacing an existing 3.15 installation. I use the "standard" ISO.

I tried an in-place upgrade from 3.15 to 3.16 which mostly worked but would for some reason not install a working pip (python).

I gave up on that and tried a fresh install. That kept getting stuck because when I tried to create a user (following the setup-alpine script), it would refuse, saying the user already existed (I was creating the same user as was in the previous install).

I finally just re-formatted the disk using the disk setup script, which I believe is setup-disk.

I then ran setup-alpine again and was able to create the user. However, it didn't create a user directory for that user. I had to create that manually and set the appropriate permission bits manually.

I have gone through the installation process probably two dozen times now in order to get working systems for both 3.15 and 3.16.

I am using it on an old x86 32 bit system with a hard drive. I normally use it as a headless system accessing it by ssh. I plug in a monitor and keyboard for setup. The CPU does not support later x86 instructions, so for example 32 bit Debian will not run on it (the installer complains about a missing CMOV instruction). Alpine does run and this is the reason that I persisted with it.

The installation script has good and bad points. The good point is that it limits the number of options.

The bad points are that it's often not clear what is being asked and there isn't any way of going back and changing an option once you have made a selection.

The selection of mirrors is a complete train wreck. Most of the list scrolls off the top of the screen before you have a chance to see it. I always have to just pick "fastest" and then go back and edit the file manually after installation if I'm not happy with the choice it made.

I was never able to get through an installation in one go. The networking configuration would always seem to fail somehow and the process would stop due to a lack of network connection. I would then figure out how to get the network running and run through configuration again to get through the final steps.

I'm satisfied with the Debian text based installer. You might want to have a look at that for a guide. The Ubuntu server installer is also a good guide.

What might be particularly useful is some way of doing a headless install which inserts a configuration file into the ISO. You would just create and edit the file, run a script to insert it into the ISO (or build a new ISO), boot from that, and the configuration script would read the file and do what it needs to do based on that. The Raspberry Pi imaging program does this for its images (which are not ISOs). This would be useful for people setting up headless systems who don't want to have to drag out an extra monitor for setup.

I believe the setup system has some sort of way of creating a "replay" file, but that assumes you already have a working system that you want to duplicate. It doesn't help if you don't have that.

thames

It's good for embedded applications. The documentation goes through how to do installations in applications where you don't have a conventional disk to boot from.

It's also compiled so it will run on older generations of x86 chip, including ones on which most 32 bit distros won't run because those distros require certain features or instructions which older x86 chips don't have.

New audio server Pipewire coming to next version of Ubuntu

thames

According to the web site 0.3 means that the part that handles audio is ready to replace Pulseaudio and Jack.

They have a list of other features to add, many of which are not directly related to audio (they also intend to handle video), but those milestones come later.

thames

Re: Another Sound Server

Some things are not "fixable", and I doubt that you could "fix" the problems in PulseAudio without replacing it, especially as Pipewire handles video as well as audio.

Ubuntu 22.10 is a non-LTS release so most users won't see Pipewire until a decision is made to put it into an actual LTS release and that hasn't happened yet.

The same thing happened with Wayland (replacement for X). It was tried out in non-LTS releases but most users didn't see it until it was judged good enough for an LTS release.

This is why there are LTS and non-LTS releases. If you want bleeding edge, then use non-LTS. If you just want to use your computer for daily work, then use LTS. Simples.

China-linked Twisted Panda caught spying on Russian defense R&D

thames

How?

How exactly does Check Point get access to sensitive Russian systems and communications in order to conduct this sort of analysis? I find "Chinese haxors pownd the Russians ha! ha!. Don't ask how we got copies of everything" to be vaguely unsatisfying.

Canada bans Huawei and ZTE from 5G networks, citing national security risks

thames

Re: Bit late to the party, aren't they?

Canada does independent security reviews on all telecommunications kit from all suppliers, not just Huawei, and has done so for many years.

No problems were ever found with Huawei kit of a nature which didn't also exist in that of their competitors such as Nokia or Ericsson.

The complaints from intelligence officials in Canada relating to Huawei had to do with how it affected diplomatic relations with their counterparts in the US intelligence apparatus and US threats to cut off security cooperation via "Five Eyes" if Canada didn't fall into line on the Huawei issue.

thames

Re: An apology

It's quite possible that this was part of the deal Ottawa negotiated with the Americans for agreeing to drop the extradition request for Meng.

A decent interval has now gone by so that Ottawa can claim there was no connection, and they drop the news right before one of the biggest holiday weekends of the year (Victoria Day) when people are too busy heading off to the cottage or elsewhere to pay much attention to the news. By the time people are back from holiday the news cycle will have moved on and this will be forgotten.

Export bans prompt Russia to use Chinese x86 CPU replacement

thames

Re: Military implications?

The Russians modernized a lot of their military kit starting before 2010 and carried on doing so up until today. They have reasonably modern kit for their "contract" (full time professional) army, but they also have big reserves of older and simpler hardware which they can pull out and use at need, including with their reservists.

Their main problem is that they will have limited manufacturing capacity to replace modern kit lost in combat, and modern warfare eats equipment at an alarming rate. This is simply a result of the finite size of their economy.

thames

Re: Yablochka redux?

I've been using my Raspberry Pi 4 (running 64 bit Ubuntu) for converting a large number of videos from MPEG 2 to MPEG 4 and it works quite well for that. It's not as fast as my PC, but it can run the conversion job quietly on its own while I use my PC for other things.

A Pi 3 running 32 bit Raspbian is many, many times slower however, and I wouldn't recommend it for this. I don't know if that's mainly a CPU difference or a 32 versus 64 bit (and associated SIMD instructions) difference, but it is very noticeable.

Also, you want to have a case with a cooling fan for doing anything like this. Ffmpeg will use all cores to the max and it needs cooling.
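A rough sketch of the sort of batch job I mean, in Python. The codec settings and file layout here are illustrative assumptions, not my exact setup:

```python
# Batch conversion sketch: MPEG-2 sources -> H.264/AAC in an MP4 container.
# Paths, glob pattern, and codec settings are illustrative assumptions.
import subprocess
from pathlib import Path

def convert_cmd(src, dst):
    # ffmpeg uses all available cores by default, hence the cooling advice.
    return ["ffmpeg", "-i", str(src), "-c:v", "libx264",
            "-preset", "medium", "-c:a", "aac", str(dst)]

def convert_all(src_dir, dst_dir="converted"):
    """Convert every .mpg file in src_dir, writing .mp4 files to dst_dir."""
    Path(dst_dir).mkdir(exist_ok=True)
    for src in sorted(Path(src_dir).glob("*.mpg")):
        dst = Path(dst_dir) / (src.stem + ".mp4")
        subprocess.run(convert_cmd(src, dst), check=True)
```

Kick it off with `convert_all("videos")` and let the Pi grind through the queue on its own.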

thames

Re: Yablochka redux?

My Pi 4 worked with full screen video on Youtube, but just barely and it dropped frames now and again. I would call that not quite adequate, but possibly acceptable if that's all you've got. I don't know how well it would handle a Zoom conference or the equivalent, as I didn't test that. I suspect that with a better GPU that problem would go away entirely.

The Pi 4 isn't designed to be a desktop replacement but it's entirely acceptable as a temporary backup for one if that's what you have.

I didn't buy my Pi 4 specifically for use as a desktop replacement, but I got the 8 gig version so that I could use it as one in an emergency. It came in handy mid-pandemic (and pre-vaccine) when I was able to keep on doing what I needed to do while waiting for a part to replace the one in my broken PC (and even used it to order the part on line). I had an SD card prepared and ready to go for that eventuality.

thames

Re: Military implications?

I don't know what is in the Armata specifically, but MCST's Elbrus processors are used extensively in Russian military kit. It's probably their biggest market. They're made in Russia in a fab in or near Moscow.

thames

Re: Yablochka redux?

Well, you're in luck. The same site that The Register chose to cite for CPU benchmarks also benchmarks the CPU used in the Raspberry Pi 4, the BCM2711.

For the KX-6640MA the "Average CPU Mark" is 1,566. For the BCM2711 it's 834. So the KX-6640MA is roughly twice as fast.

While I take synthetic benchmarks with a very large grain of salt, if we accept the above numbers as valid then it should be just fine for doing office work and the like provided the GPU is reasonable. You may not want to use it for playing games however.

I used a Raspberry Pi 4 as a desktop PC for a day when my regular desktop was broken. It was adequate for everything except full screen video, and I suspect the latter was due to the GPU limitations.

The most likely reason to want to use the KX-6640MA under these circumstances is to run Windows desktops. Several years ago Russia announced a plan to move all government operations off of American IT kit onto RISC-V by 2025, I assume using Linux. This means the KX-6640MA may be seen as just an interim solution.

The KX-6640MA by the way is one of Zhaoxin's slower low end chips, and they have faster ones. This one may have been announced because Dannie already had a motherboard for it.

thames

Re: Russian? CPUs?

The Elbrus series of CPUs mentioned in the story are made by MCST. It dates back to the 1970s I believe, but is actually the trade name for a series of architectures rather than one specific architecture.

After several generations of CPU they settled on the current incarnation, which is a VLIW architecture. It apparently has a binary translator which translates x86 instructions to Elbrus VLIW instructions dynamically.

Performance of the binary translation process is not great, and currently they are working on getting developers to produce native ports rather than relying on the translator. I suspect this will improve performance quite significantly.

MCST also produce a line of Sparc chips, although I don't know how much of their market that accounts for.

The advantages of switching to RISC-V are obvious in that they will be able to use open-source software as is instead of creating, testing, and debugging their own ports to Elbrus VLIW.

Python is getting faster: Major performance tweaks on horizon

thames

Re: The McDonald's of programming languages

Most of the people who were primarily using Visual Basic either kept on using it or switched to C#. Their language choice tended to be determined by a combination of vendor platform support (Microsoft and their various products), IDE familiarity (Visual Studio), familiarity with third party libraries, etc., rather than features of the language itself. They were mainly interested in developing software for desktop PCs running MS Windows.

Python users originally mainly came from a Linux/unix/BSD background and were interested in server side development for web servers, numerical processing on super computers, biology, machine learning, large scale system administration tools, etc. A large community and set of third party libraries grew up around this. The main competitors to Python were Perl and Ruby.

With the rise of the Internet and the smartphone / tablet, the focus of the overall software market shifted away from the PC to server side, and Python was well positioned to take advantage of it.

An increasing number of new people coming into software development learned Python because it was already well established in those up and coming areas as the language for rapid development of applications. As the market shifted away from the PC, people working in that field were drawn to Python for the same reason. Educators noticed the trend as well and started using Python as a teaching language, but most were very late to change curricula and did so only years after the trend was well established.

So the trend to Python came about due to a number of large scale industry trends coming together over the past couple of decades. The features of the language may have determined why people picked Python over Perl or Ruby, but they would have picked something along those lines regardless.

thames

Re: The McDonald's of programming languages

Anonymous Coward said: "Sounds very much like you're describing NumPy arrays, which are not part of the language itself."

No, I'm talking about arrays, which are part of the standard library and have been for as long as I can remember. The library is called "array" and it provides array objects as well as a number of methods for manipulating them. You referred to arrays in your initial post so I assumed you meant the arrays which come with Python.

Numpy is a third party open source library which provides its own array object types along with a very comprehensive set of methods for doing a wide variety of mathematical manipulations on the contents.

It sounds like you could stand to spend some time reading the documentation on the Python standard library, as you are evidently not very familiar with it. The documentation is online and is very good. Time spent reading the Python standard library documentation is usually time well spent, as it is very extensive and many problems that you may need to solve often have a solution in the standard library.

You mention that you happen to like NodeJS. The original creator of NodeJS said in a podcast interview that he took his inspiration from Python Twisted. He wanted something like Twisted but with a JIT, and at the time Pypy did not exist. He therefore took another event loop implementation and added JavaScript V8 to it. He picked JavaScript because that's what was available to him at the time "off the shelf" as it were. Twisted is another long standing third party open source Python library which is worth getting to know.

thames

Re: The McDonald's of programming languages

If you find yourself confused by the difference between Python lists and arrays then I would suggest reading some basic computer science books on data structures.

Python arrays are simply a library wrapper around a standard C array. If you write a C extension you can use the Python array as a C array. Python arrays have all the same numerical data types as the underlying C compiler, and indeed the definitions of those data types are based on whatever the C compiler decides they are.

Lists are simply dynamic arrays of pointers to objects, with those objects being of any arbitrary type. They are conceptually similar to C++ vectors or Java arraylists. If you're not familiar with either of those or similar languages then the concept may appear new to you but it's not something unique to Python although some of the details may be.
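To make the distinction concrete, a quick interpreter session's worth of code:

```python
# array: a typed wrapper over a C array, holding raw values of one C type.
# list: a dynamic array of references to arbitrary Python objects.
from array import array

a = array("d", [1.0, 2.0, 3.0])   # typecode "d" = C double
print(a.itemsize)                  # bytes per element; 8 on common platforms
a.append(4.0)                      # fine: another double
try:
    a.append("x")                  # rejected: arrays hold exactly one C type
except TypeError:
    pass

l = [1.0, "two", [3]]              # a list holds references to any objects
l.append(object())
print(len(l))                      # 4
```

The array stores the doubles contiguously as machine values, which is why a C extension can use it directly; the list stores four pointers to four unrelated objects.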

As for your apparent dislike of how import paths are handled, the standard system is very simple and easy to understand and suits what most people need. If for some reason you need to customize your import paths you can do that through the provided means. Read up on the relevant documentation to see what suits your situation.

As for "significant white space", white space is significant to how people reading the code understand it so the compiler should read and understand it the same way as a human does. Having humans apply one method in their minds and the computer using a completely different method of comprehending code is simply a recipe for entire classes of avoidable bugs. There have been enough Register articles on major security problems caused by this. Most professional environments have coding standards and in this one the preferred standard is enforced as part of the language syntax.

In my opinion there is no such thing as a one size fits all language that is best for all applications. Different languages are best used for different applications, which is why it's a good idea to learn several different languages of different types.

In my opinion Python is particularly good for applications where minimizing the implementation time and effort is critical, such as in custom business applications, scientific experiments, and the like. You can generally write a Python program in a fraction of the number of lines of code than most other languages require, and when time is money, that matters.

RISC-V needs more than an open architecture to compete

thames

Re: Plenty of reasons to choose RISC-V

India is also very interested in RISC-V, for all the reasons you state plus one more. The additional reason is that it allows Indian companies to participate in the market on an equal footing with those of other countries. This will allow the Indian technology sector to grow without being subsidiary to the global plans or priorities made in other countries.

China's Kylin Linux targets second RISC-V platform

thames

But it may be good enough.

El Reg said: "... but at the time of writing also cannot match the performance of Intel's or AMD's finest – and will struggle to do so within China's two-year PC replacement deadline."

RISC-V doesn't need to match the performance of "Intel's or AMD's finest" in order to replace PCs on the desks of bureaucrats. Said bureaucrats don't get PCs with top end CPUs to begin with. RISC-V only needs to have acceptable performance at an acceptable price.

I used a Raspberry Pi 4 as a desktop PC for a day while my regular PC was broken. Performance was perfectly acceptable for all tasks except for watching Youtube video full screen, where performance was barely adequate. That was probably more down to the GPU than the CPU however.

Take a RISC-V CPU that is the equivalent of the ARM CPU used in a Raspberry Pi 4, add a better GPU, and the end result is probably more than good enough for routine desktop office use.

RAD Basic – the Visual Basic 7 that never was – releases third alpha

thames

Re: Beginners'

Some sort of BASIC was pretty much the "standard" way of adding a programming language to hardware back in the days when Visual Basic was relevant. Lots of industrial and laboratory equipment was programmed using an embedded BASIC interpreter. Every vendor had their own slightly different dialect of it.

There was a whole class of hardware known as "single board computers" which were 8 bit micros intended for embedded applications. Many came with BASIC burned into ROM. This was old style BASIC complete with line numbers and GOTO, rather than VB of course.

Intel had a version of their 8031 microcontroller called the 8052AH Basic which had a BASIC interpreter burned into ROM on the chip and it was a big seller in embedded applications. At least one other major company also made the chip under license.

I used a lot of BASIC in embedded industrial applications. It's what was available and would run on very limited hardware. It also allowed for rapid iterative development on systems (manufacturing cells) which were often unique and expensive. You would connect to the hardware using an RS-232 cable and program it by typing directly into it. If you were clever of course you would actually do your editing in a text editor on your PC and set up your terminal emulator to "type" the text file into the embedded system.

If you drove a car in those days, almost certainly some parts of it were made by equipment controlled by an embedded BASIC.

GitHub to require two-factor authentication for code contributors by late 2023

thames

Re: I wonder

I did some digging yesterday when I first saw the news of this. From what I can see TOTP is one of the main options. Github are pushing their phone app as the preferred TOTP solution, but I don't see any reason why you would have to use it.

I installed "oathtool" (sudo apt install oathtool) to see how that worked, although I haven't tried it with Github yet. Oathtool generates counter-based HOTP codes by default, but has a TOTP option.

Apparently once you have registered your key with Github, you give oathtool your key and it generates a series of one-time, time-limited passwords. You then use one of those to log into Github. Your PC's clock obviously has to be reasonably correct for this to work, as the one-time password uses the current time as one of its inputs.

The intent of this is to prevent replay attacks where someone gets a copy of your password and reuses it later. With TOTP your secret key stays on your PC and you just send a short-lived code derived from it instead.

Here's an example:

oathtool --totp 0123456789abcdef

197691

In the above "0123456789abcdef" is your secret key (in hex) which you register with Github and "197691" is an example of the one time password you use to log in with. You will get a different output for each 30-second time step.
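For anyone curious what oathtool is doing under the hood, the TOTP calculation is simple enough to sketch in pure Python with only the standard library (HOTP truncation per RFC 4226, time step per RFC 6238). The key below is the published RFC 6238 test key, not one you should actually use:

```python
# Minimal TOTP (RFC 6238) sketch of what "oathtool --totp <hexkey>" computes.
import hmac, hashlib, struct, time

def totp(hex_key, for_time=None, step=30, digits=6):
    key = bytes.fromhex(hex_key)
    now = time.time() if for_time is None else for_time
    counter = int(now // step)                      # 30-second time step
    msg = struct.pack(">Q", counter)                # 64-bit big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", time = 59 seconds
print(totp("3132333435363738393031323334353637383930", for_time=59))  # -> 287082
```

The secret never leaves your machine; only the six-digit derived code is sent, and it expires with the time step.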

I haven't tried this yet (I'm in no rush to be one of the guinea pigs) but from what I can see it should work. I will probably end up using zenity to turn the command line program into a simple GUI app which I would launch with a click from the desktop.

The whole thing could probably use a Register article showing how the best options are done as I found the Github explanation to be convoluted and difficult to understand. Github's explanation is very "Microsoftian" in that it only really makes sense after you've spent time figuring it out based on other sources.

Almost two-thirds of SMIC's Shanghai employees are living at work

thames

Re: WTF???

There isn't one big lockdown across the whole city. The city is divided into districts and different levels of control are applied to different areas depending upon the local infection rate.

So in some areas people are not allowed to leave their homes while in other areas they are, but are limited in the size of gatherings. Movement between districts is controlled.

OpenBSD 7.1 is out, including Apple M1 support

thames

Re: OpenBSD is Faaast!

I run a large set of benchmarks for my software in VMs on my PC, testing a wide variety of Linux and BSD OS distros. FreeBSD and OpenBSD were always measurably slower than any of the Linux distros, and roughly comparable to Windows. I suspected that this was mainly a reflection of the respective compilers.

A big gap opened up however after Meltdown/Spectre mitigations were put into place, with OpenBSD and FreeBSD slowing down very significantly, OpenBSD much more so than FreeBSD. I suspect this was a result of different ideas of what a suitable compromise was in terms of security versus performance.

Whether any of this makes a difference will depend on your application. If your application is I/O bound, then the CPU performance difference probably won't matter. So, OpenBSD might be a good choice for a mailserver or firewall or the like, but not a good choice for a supercomputer. Benchmark your application and keep the security / performance trade offs in mind.

thames

"Poweroff" works for FreeBSD, but not for OpenBSD which uses "shutdown" in the default install (FreeBSD has both). However, the OpenBSD "shutdown" has slightly different semantics than the Linux version.

Here's the Linux version on Ubuntu (this is just some of the options).

-H, --halt      Halt the machine.

-P, --poweroff  Power-off the machine (the default).

-r, --reboot    Reboot the machine.

-h              Equivalent to --poweroff, unless --halt is specified.

Here's the OpenBSD version (again, not all the options).

-h  The system is halted at the specified time when shutdown execs halt(8).

-p  The system is powered down at the specified time. The -p flag is passed on to halt(8), causing machines which support automatic power down to do so after halting.

-r  shutdown execs reboot(8) at the specified time.

Note the difference between lower and upper case in the above options.

thames

Thanks, that works. It turns out the actual problem is rather subtle.

If I type "shutdown now -h" in a Linux system, that works fine.

However, OpenBSD "shutdown now -p" will ignore the "-p" if it comes after "now". If typed as "shutdown -p now" however, it works fine. I had used the first format because that is what I was used to with Linux. I've made a note of this for future reference.

Thanks again.

thames

With regards to not being able to install Firefox due to the splitting up into partitions, I suspect that was a consequence of having a small VM disk and multiple partitions. If it was installed directly on hardware with a much larger disk this wouldn't have been a problem.

I use OpenBSD as one of my testing VMs and installed the new version yesterday and started using it with no problems. I've been using it for testing for years and didn't find it any more difficult to get used to than say a new Linux distro.

The one complaint that I have about it is that when run in VirtualBox shutdown won't actually shut it down. It announces the shutdown process but then doesn't actually do it. I have to issue an ACPI shutdown via the VM control to shut it down. I don't know if this is an issue on actual hardware or in other VM systems.

Skipping CentOS Stream? AlmaLinux 9 Beta is here

thames

Re: Why not CentOS?

I write software and run automated tests on a dozen different platforms using both VMs and direct hardware targets. When I do a release I list all of the platforms on which I ran tests, including the versions so that users can have confidence that it has been well tested and should work.

I use AlmaLinux in a VM to test software which my users may choose to run on RHEL. There is no point in me using CentOS Stream because it doesn't act as an equivalent to any particular current RHEL release. What's the point of testing on CentOS Stream when my users are unlikely to deploy on it?

There are no doubt other people for whom CentOS Stream serves a purpose, but I'm not one of them.

thames

Re: So far so good

I've been using AlmaLinux to replace CentOS in a VM for software testing since the beginning of the year and I've had zero problems with it.

I just use it to test software that I write via an automated test system, so Red Hat have not lost a paying customer. However, I do publish a list of what platforms I support with each release, and it now says AlmaLinux where it used to say CentOS, so Red Hat are losing a tiny bit of mind share in that respect.

For my use case AlmaLinux has been a straightforward drop-in replacement for CentOS that I started using when the then last normal CentOS release went out of support. I don't see any reason for anyone who has similar requirements not to use it.

Oracle already wins 'crypto bug of the year' with Java digital signature bypass

thames

Re: So what other sanity checks did they leave out on the rewrite?

The whole thing sounds like a lack of serious testing. I have written a fair bit of numerical code over the past few years and one of the things that I learned was that there is lots of room for all sorts of non-obvious errors. The only viable solution seemed to be testing. I just ran a check over one project and have roughly 5 times as much testing code as there is code which is being tested.

Since it isn't possible to test every possible numerical combination (although test values can be generated algorithmically), a good sense of what sorts of numbers can be problematic is necessary.

Testing for (0, 0) is one of those really obvious problematic value pairs if you are doing anything involving any numerical operations. There are so many errors associated with 0 that if you are going to test anything at all, it should be 0.
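As a sketch of what I mean by generating test values algorithmically rather than picking numbers at random (the divide function here is just a stand-in, nothing to do with the Java code in question):

```python
# Enumerate boundary values and exercise every pair, instead of sampling
# inputs at random. checked_div is a hypothetical stand-in function.
import itertools

def edge_values(bits=32):
    """Classic problem values for a signed integer of the given width."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return [0, 1, -1, lo, lo + 1, hi - 1, hi]

def checked_div(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a // b

# Exercise every (a, b) pair; the only acceptable failure is a zero divisor.
for a, b in itertools.product(edge_values(), repeat=2):
    try:
        checked_div(a, b)
    except ZeroDivisionError:
        assert b == 0
```

Note that (0, 0) falls out of this automatically: any systematic enumeration of boundary values hits the zero cases first.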

This really raises the question for me: if they didn't test (0, 0), do they have any systematic testing at all? Or did they just pick a few numbers at random, check that they got the same answer as the C++ implementation, and call it a day?

This is not exactly confidence inspiring if that's the case.

Under pressure, SAP shuts down Russian operations

thames

SAP's days in Russia were numbered anyway due to Russian legislation.

It's not like SAP had a long term future in Russia anyway. El Reg previously posted a story on how Russia was already banning foreign-owned software in critical industries starting in 2025. The industries covered probably accounted for most or all of SAP's customer base there. I suspect this may have been a major factor in SAP's decision.

Russia bans foreign software purchases for critical infrastructure

https://www.theregister.com/2022/04/01/russia_bans_foreign_software/

In the mean time, the same legislation authorizes import and use of software without the copyright holder's permission if the company selling it has pulled out of the Russian market.

I imagine that someone somewhere will launder updates and support through a company located outside of Russia to cover the interim. This sort of thing already happens in the US for purely commercial reasons unrelated to Russia and has been the subject of lawsuits there by Oracle, if not others as well.

I suspect that SAP and other major Western companies realize they really have no future in the Russian market so they may as well plan an exit strategy now which still gives them potential for revenue from remaining customers in the interim.

Russia cobbles together supercomputing platform to wean off foreign suppliers

thames

Re: RISC-V maybe ?

RISC-V is probably the future for Russia and quite a few other countries some day, but if you want to buy off the shelf CPUs today with the necessary speed and memory capacity and which are available in large numbers from distributors who aren't going to ask too many questions about what you are going to do with them, then you're talking about x86_64.

Industrial cybersecurity group gathers lobbying force

thames

Not exactly confidence inspiring

Based on who is involved I wouldn't expect much of consequence from this group. I suspect their main focus will be on repackaging basic IT industry practice and using it as an excuse to fend off any trend towards mandatory regulation.

Aside from ABB (who are at best second tier), there isn't the buy-in from the major vendors whose cooperation is actually needed to make a difference.

Raspberry Pi OS update beefs up security

thames

Re: TITSUP

You can create a user named "pi", the installer just tells you that it's not a good idea.

thames

I was a bad boy and set my user name to "pi" because I have a complex set of scripts which drive a test system remotely, and I couldn't be bothered to change them at this time.

When I get the time I'll probably reinstall with a different user name to match the dozen other systems (VMs and another Pi running Ubuntu) so they're all consistent. I don't do anything important with them, just test software via an automated system and then shut them down again.

Meanwhile however, everything worked fine and it all went without a hitch. I can't see any reason for anyone to not upgrade.

Russia bans foreign software purchases for critical infrastructure

thames

Re: Software with western components

I suspect it's mainly about who does commercial support. To pick a simple example, if a customer has a server with an American Linux distro, it can be replaced by an equivalent Russian distro. The customer then deals with a Russian company for support rather than an American company.

For things like Oracle databases, convert to using Postgres (or whatever) with a Russian company providing commercial support.

Obviously for some situations this is a bigger job than others. However, there is probably a lot of low hanging fruit to be taken which will cut American and western European companies out of the picture, greatly reducing the effects of Western trade boycotts.

It's the obvious response. It's not like the Russians are just going to sit there and do nothing.

C: Everyone's favourite programming language isn't a programming language

thames

Re: Other languages....

I read Beingessner's original post. There is no core argument and Beingessner doesn't really have a point. It's just a long meandering rant about some of the problems which are inherent in writing a compiler and some of the possible ways of getting around them.

From the context I gather the author of the original post was working on a number of issues relating to trying to auto-generate Rust bindings for C libraries, how to handle differences in language calling conventions, problems with different underlying hardware having different native integer sizes, and problems with less than ideal user programs having hardware related assumptions hard coded into them.

It would be nice if all of these were simple to handle, but they inherently aren't. There is and likely never will be either a one-size-fits-all programming language or an ultimate programming language that solves all problems for all time.

The original post author's attempts to redefine the English language do not make the argument somehow more profound or more correct. They instead imply that there really isn't a coherent argument to be made.

China's tech hub relaxes COVID restrictions to restart industrial production

thames

Re: Whodathunkit

There were a couple of reports published recently on "excess deaths", which is a long standing system for measuring current deaths against historical trends, taking into account changing factors such as population age profiles, etc. Health experts have been saying since the start of when the pandemic got serious that the only accurate measure of how many people die during the pandemic will be the excess deaths stats, but those will be an after the fact measure rather than a real time daily count.

Well, I just read one of the excess deaths studies covering 2020 and 2021 and the UK stands out as having their official COVID-19 deaths count nearly bang on to the excess deaths estimate (within 3 per cent). Western Europe as a whole underestimated their deaths by roughly 50 per cent on average, with some being off by a factor of 2 or more.

On a per capita basis, excess deaths in the UK were below the averages for any of western, southern, central, or eastern Europe and almost identical to France or Germany.

Excess deaths estimates for the third world were often an order of magnitude or more above the official count.

We'll find out how many people omicron killed when the excess deaths estimates come in months or in some cases years after it's over. Loads of people are dying from omicron, it's only "mild" if you're vaccinated or survived previous infection and so already have immunity to serious disease. Omicron is roughly the same degree of seriousness as the original strain or alpha, which shouldn't be too surprising as that is what it is descended from, instead of delta.

FIDO Alliance says it has finally killed the password

thames

Trust Who?

El Reg said: "That may be the case, but a key question remains: will businesses be OK with trusting their security to an OEM?"

More pertinently, will businesses be OK with trusting their security to an OEM located in another country outside the reach of their legal authorities? They would have to be severely negligent to do so for anything other than trivial applications.

Ford to sell unfinished Explorers as chip shortage bites

thames

Re: Thank god for small favors

I live in Canada. I've never had heated seats or steering wheels and I don't see any need for them. They're gimmicks with no real practical value in a modern car.

I remember long ago when vinyl seats were common and they were rock hard and cold when you first sat on them in the morning in the winter. However, it's been decades since I've seen a vinyl seat, and cloth just doesn't have the same problems and modern foams don't get as hard in the cold. Heated seats solve a problem that ceased to exist long ago.

As for heated steering wheels, if it's actually cold then you will be wearing gloves anyway. Just leave them on until the interior warms up enough at which point you don't need a heated steering wheel to take your gloves off. Again it's a solution looking for a problem.

I have a car for the purposes of transportation, not as an expensive toy with pointless widgets to fiddle with.

Fujitsu confirms end date for mainframe and Unix systems

thames

There are emulators for IBM mainframes that run on Linux. I don't see why the same couldn't be done for Fujitsu systems.

The problem will be the operating system and other associated software. The last I heard, customers attempting to run the above-mentioned IBM emulators ran into problems because IBM won't license its OS for use on emulators. I don't know if Fujitsu will take a different position.

The first paying software development job that I had was reverse engineering an IBM mainframe application to run on a PC. That is, here's the inputs, here's the outputs we desire, write a program which does this on a PC running MS-DOS. I did it for a small fraction of the cost of the proposed charge to be added as a customer on the existing mainframe application, which was running in an EDS data centre (the original "cloud"). I know someone else for whom one of his first paying software development jobs was also to move a mainframe application to a PC. In his case the mainframe was nearing end of life and moving to another mainframe was seen as not really providing a long term solution.

These were obviously small, specialized applications, but loads of them were around at the time, as big companies had IT departments that preferred to do things on mainframes which smaller companies did on PCs. Most mainframe applications didn't involve anything that couldn't be done with a fast enough PC with a big enough hard drive, provided you knew what you were doing and weren't just trying to duplicate how the mainframe application approached the problem.

There's probably no single answer to this sort of question. Each company will have to look at their own individual situation and figure out how to deal with it. The biggest problems for most companies probably aren't going to be the technical ones, but rather the ones involving their own internal bureaucracies, which will attempt to throw barriers in the way of any sort of change.

Intel energizes decades-old real-time Linux kernel project

thames

With Linux you need to use a different audio system for that. The standard one (PulseAudio) is made for ease of use, which is fine for what nearly everyone needs, but there is a different audio system (JACK) you can install for serious low-latency audio work.

There is a project (PipeWire) to replace both with a single system that does both jobs, and it will probably become the default some time in the near future.

Linux kernel edges closer to dropping ReiserFS

thames

Re: Puzzled

The issue has nothing to do with who the original developer was. It has to do with that it had few users to begin with (mainly Suse), and hardly anyone has used it in years. It was a niche file system to begin with and it's been an obsolete file system for years.

Nearly everybody uses Ext4, which does everything that they need. Other alternatives include XFS, ZFS, and BTRFS. ReiserFS was always an also-ran a long way behind the leaders.

As I understand it, Hans Reiser ran a small company which licensed out a non-open source version of his file system to proprietary SAN vendors. The open source Linux version of ReiserFS gave his small company a larger user base, which gave the file system more testing than his small company could have economically done themselves. It always had very limited use, however, compared to the more mainstream Linux file systems.

If anything what this shows is the overall stability of the underpinnings of Linux, that ReiserFS should have continued on for so long with so few Linux users.

20 years of .NET: Reflecting on Microsoft's not-Java

thames

Mono

The section on Mono skipped over a lot of history which is important if you want to understand why it flopped on Linux. That's understandable because you could dedicate the whole article to the history of Mono and just scratch the surface. I'll just satisfy myself with adding in a few more details.

Novell had bought SuSE and was looking for life beyond NetWare as an enterprise IT vendor, buying up various other assets along the way. One of them was de Icaza's company (the name of which escapes me), which they bought for its management software (which didn't last long).

De Icaza had been working on the idea of an open source equivalent of Java (which was still proprietary). He sold the idea to Novell that they couldn't be a real enterprise vendor if they didn't have their own complete "stack" like Sun or Microsoft. This role was to be filled by Mono, with Novell writing the cheques to finance development.

Reception among developers however was cool. A few were openly hostile for a combination of reasons. One was concern about the legality of the licensing, as doing anything actually useful with Mono required going well beyond the specifications that Microsoft made open and required copying stuff that definitely was proprietary to Microsoft.

Another reason was that de Icaza was going around trying to hijack other projects to get them to convert to using Mono in order to get some sort of a user base. He got a lot of push back from that. He also got his employees to write desktop applications for Gnome and used his personal connections with Gnome project managers to push them into Gnome as defaults. Once those people moved on to other things however, those applications were rapidly ejected from the defaults and replaced with stuff that actually worked reliably and without being resource hogs.

The main reason Mono didn't succeed in its target though was the supreme indifference which most potential developers or users had towards it. Dot Net developers weren't going to use Mono because they used MS Windows and that had the original Dot Net already, which was covered under Microsoft support contracts. Why pay more money to Novell just for Mono support? Java developers had no interest in switching from Java. And finally all the other Linux users simply didn't see Mono solving any problems that they actually had. They had other languages and tools which did what they wanted with much less effort, and if they had wanted a Java clone they would have been using Java (which most weren't).

Novell's strategy for reinventing themselves was faltering, and one of the casualties was their Mono subsidiary, which was spun off with de Icaza still in charge of it.

De Icaza finally found a user base for Mono, one that he had never expected. This was as a run time for game engines in competition with Dot Net. Game engine companies felt they were being bent over a barrel by Microsoft and were looking for someone who would offer a compatible product with more reasonable licensing terms.

Finally, here was a user base for Mono, and one that was willing to pay money to use it. De Icaza binned any ideas of being an enterprise full stack and focused on the game and mobile markets. The core of Mono was still open source, but the profitable bits around it were taken proprietary.

Eventually Microsoft bought them, although more for the game and mobile assets than Mono itself.

I looked at Mono a few times to see if it had potential use for a project, but always came away with a poor impression of it. It was slow and buggy, and documentation was pretty much non-existent once you tried to dig below the facade. Stuff might work if there was direct paying customer interest in it working, while for the rest it was haphazard as to whether it worked at all.

Mono pretty much vanished without a trace from the Linux news years ago. Any interest now seems to be from people trying to move their legacy Windows Dot Net server applications from Windows to Linux cloud VMs to cut licensing costs.

Out of beta and ready for data: 64-bit Raspberry Pi OS is here

thames

Re: A silly idea from a silly person...

Standard Ubuntu Desktop has a screen reader built in, but I haven't tried it on a Raspberry Pi. You just turn it on or off in "settings", or enable through a keyboard combination.

I just played with it on my PC (running Ubuntu 20.04) and it works, although I don't know how to make use of it effectively. The voice doesn't appeal to me, but then I don't use screen readers, I'm not used to it, and I don't know if there is a way to change it.

I will emphasize that I am talking about standard Ubuntu Desktop, I don't know if it exists or works in any of the third party flavours.

thames

Re: @thames

I was in the middle of writing a reply saying that I hadn't kept the benchmarks when I recalled a place that I hadn't looked, and found some. With the same Raspberry Pi 3, just swapping the SD card between 32-bit Raspbian and 64-bit Ubuntu and running the same benchmarks, I get an average performance increase of 25 per cent with 64 bit.

These are with numerically intensive benchmarks running in tight loops, with two sets written in both Python and C. The C version is of course much faster than the Python version, but both show a similar speed up with respect to their 32 and 64 bit versions.
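For anyone wanting to run a similar comparison on their own hardware, here's a hedged sketch of a tight-loop benchmark in the same spirit. The workload (a polynomial summed over a range) is illustrative only, not the actual benchmark I used:

```python
# Illustrative tight-loop benchmark of the kind used to compare
# 32-bit and 64-bit builds: run the same script on each OS image
# and compare the reported times. The workload itself is arbitrary.

import timeit

def workload(n=100_000):
    """Numerically intensive tight loop with no I/O."""
    total = 0.0
    for i in range(n):
        total += i * i * 0.5 + i * 0.25
    return total

if __name__ == "__main__":
    # Take the best of several repeats, since scheduling jitter
    # makes a single run unreliable.
    best = min(timeit.repeat(workload, number=10, repeat=5))
    print(f"best of 5 runs: {best:.4f} s")
```

The same loop can be transcribed into C almost line for line, which is how you get the paired Python and C figures for each architecture.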

thames

Re: A silly idea from a silly person...

I used my RPi4 with 8GB of RAM as a regular desktop PC for a couple of days about a year ago when my PC died and I needed something to fill in as a backup while waiting for the computer parts store to open after the weekend during lockdown so I could fix it.

Fortunately I had made an SD card specifically for this eventuality just a couple of weeks before when the Pi arrived, and I had 64 bit Ubuntu ready to use.

My impression of it was that it was fine for regular web browsing, email, and the like. With full screen Youtube video in Firefox on a large monitor it was marginally adequate.

Using a GUI other than Gnome / Ubuntu will do nothing of significance in terms of addressing the performance issues; it's almost entirely a matter of the GPU and video driver speed when running full screen, and that isn't a function of the desktop GUI.

A faster GPU or better video driver would address pretty much everything of concern in this regards. Something that I didn't try was downloading the video and playing it using a desktop player (e.g. MPV Media Player) rather than watching it in Firefox.

The case I am using with my RPi4 has a small CPU fan, and that probably makes a difference to the performance as well as the RPI4 will throttle back on heavy load if you don't have a fan.

I was using a conventional hard drive with my PC at the time and one thing that I noticed was how significantly faster the RPi4 with Ubuntu booted as compared to the PC with Ubuntu from a conventional hard drive. I now have an SSD in my PC, so that's less of an issue.

thames

I benchmarked the RPi3 and RPi4 in both 32 and 64 bit modes, and just switching to 64 bit made each significantly faster. How much faster depends on what features your application exercises though. You do get a significant speed up just by going to 64 bits however.