* Posts by Henry Wertz 1

3137 publicly visible posts • joined 12 Jun 2009

Nvidia says its SmartNICs sizzled to world record storage schlepping status

Henry Wertz 1 Gold badge

Re: Snake oil?

I think in principle it's exactly the kind of thing Rivet Networks did, just at a different scale. I remember the Killer NIC; it was marketed to gamers. It would bypass Windows' TCP/IP stack in favor of handling that on the card, plus it had like FTP and bittorrent apps that could run on the card. But it was pricey, and while the NIC was gigabit, if you had any normal internet connection (this was 15 years ago), how much CPU time does it take to handle like 10mbps anyway?

Now, there are two factors to this current stuff. 1) These hyperscalers are going for the "mainframe model", leaving the CPUs free to compute, with I/O processors handling disk I/O, network I/O, etc. as much as possible. If you're running 100gbps the traditional way, that'll keep several CPUs very busy just handling interrupts and whatever. 2) By running some traditional router functionality on there, they may be able to buy very expensive NICs in lieu of even more expensive switches and routers.
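To put some rough numbers on point 1 (these are just my own back-of-envelope figures, nothing from the article): even with full-size frames, 100gbps means millions of packets a second hitting the host, and small packets are far worse.

// Back-of-envelope packet rates at 100Gbps for a couple of frame sizes --
// illustrative numbers only.
fn main() {
    let link_bps: f64 = 100e9; // 100Gbps line rate
    for &frame_bytes in &[1500.0_f64, 64.0] {
        let pps = link_bps / (frame_bytes * 8.0);
        println!("{}-byte frames: ~{:.1} million packets/sec to handle",
                 frame_bytes, pps / 1e6);
    }
}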

Developer creates ‘Quite OK Image Format’ – but it performs better than just OK

Henry Wertz 1 Gold badge

ATSC

ATSC here is different -- the digital multiplex is not owned by one party that then sells the bandwidth on to the channels. Here, the broadcaster owns the whole multiplex: they run (usually) 1-2 of the old networks (CBS, NBC, ABC, Fox, CW) -- one of these would be what the channel was entirely when it was analog -- plus however many "digital networks" (ones that usually didn't exist when OTA television was analog), usually 4-6 total channels. They run it how they see fit -- one channel here still uses fixed bitrate, so the main channel looks great and the rest look like crap -- but the same digital networks from another transmitter only look bad for a second or two per hour; they run VBR (variable bitrate), so instead of the subchannels being shorted on bits all the time, it only happens if there's action on too many channels simultaneously. One channel runs a variant I like: instead of looking all blocky in action scenes, it seems to "draw" the screen until it runs out of bits, so instead of picture quality going to hell in action scenes, you get good picture quality except for the bottom 10% or so of the picture not updating for a moment or two. Looks odd, especially on scrolling credits, but much better than having the screen go to giant blocks.

Then there's PBS (Public Broadcasting Service). Not much to say, except that instead of having unrelated channels they have a main channel and subchannels that are all PBS channels. Here in Iowa, for example, Iowa PBS (which was recently renamed from Iowa Public Television, since having the initials IPTV was confusing as hell given the prevalence of internet protocol TV...) has a main channel, an arts/crafts channel, a kids show channel, I think a travel channel, and an audio channel that reads books for the blind.

After deadly 737 Max crashes, damning whistleblower report reveals sidelined engineers, scarcity of expertise, more

Henry Wertz 1 Gold badge

Procedural changes

It almost sounds to me like there could be two procedural changes that'd help.

1) It really sounds like it'd be better if there was no liaison at all between the FAA and the engineers; engineering concerns would be filed like bug reports, and the FAA could then be immediately aware of whether the engineers had concerns and how they were addressed.

2) It should be made clear to Boeing et al.'s non-engineers that any given design is going to see millions of cumulative operating hours. I mean, the saying for something being unlikely is "Wow, that's a million to one chance" -- well, in this case a million to one chance per hour means it's reasonably likely to actually occur sooner or later. You're still dealing with statistics (even a triple-redundant sensor or computer, with 3 separate designs, still has some chance of a triple fault -- it's just statistically very low.) It would probably be good to make sure the non-managers responsible for anything safety-related have a good handle on statistics.
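To put rough numbers on the "million to one per hour" point (my own illustrative figures, not anything from the report):

// Chance of at least one occurrence of a "million to one per hour" event
// over a fleet's cumulative operating hours. Illustrative numbers only.
fn main() {
    let p_per_hour: f64 = 1e-6;
    for &hours in &[10_000.0_f64, 100_000.0, 1_000_000.0] {
        let p_at_least_once = 1.0 - (1.0 - p_per_hour).powf(hours);
        println!("{} fleet hours: ~{:.1}% chance it happens at least once",
                 hours, p_at_least_once * 100.0);
    }
}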

Midwest tornado destroys Amazon warehouse, killing six after worker 'told not to leave'

Henry Wertz 1 Gold badge

From vehicles

I thought the same at first: "they make them work up to the last second?" But they were being led to shelter... from their delivery vehicles. I worked at P&G here, in an area with a high rate of tornadoes (although not usually in December.) We'd go to shelter as soon as a tornado warning was issued, which is a minimum of 10 minutes ahead of time, usually more like 30. (A warning includes a storm favorable for forming a tornado, funnel clouds, etc., not just a tornado on the ground.) I would be surprised if Amazon did any different.

I could be wrong, but from what local media indicated (we are about 200 miles from this facility), they were being led to the shelters *from their delivery vehicles*. Staying in a van in a tornado is in no way safe, and trying to outrun or outmaneuver a tornado in a vehicle won't work (storm chasers do this, but they have live radar feeds, maps at the ready, etc., and professional knowledge of how the storm and tornado are likely to track.) So it was just the bad luck of getting there a minute or two too late to be in shelter when the tornado hit.

Netgear router flaws exploitable with authentication ... like the default creds on Netgear's website

Henry Wertz 1 Gold badge

What bombastic bob said

What bombastic bob said... my older devices had a default username and password, posted on the website. My newer stock devices have what appears to be a randomly generated password, printed on the device so you can still find it after a factory reset. dd-wrt (after some version) used user: admin (or maybe root), password: admin, but requires you to set the password the first time you go into the web interface (...which you go into to set up the wifi network name etc., so it's not a step people are going to skip unless they really want a network named dd-wrt with no encryption.)

Microsoft 365 admins 'flooded' with bulk and bogus notifications for over an hour

Henry Wertz 1 Gold badge

Do they even have a rollout process?

Do they even have a rollout process? Honestly, it seems inconceivable to me that nobody within Microsoft tested this first, got spammed by notifications, and stopped it from rolling out further. Oh well 8-)

Why your external monitor looks awful on Arm-based Macs, the open source fix – and the guy who wrote it

Henry Wertz 1 Gold badge

Port EDID code

Sounds like a matter of porting the EDID handling code -- I would guess it's probably written in C (...or maybe Objective-C or Swift?), and they could just port over the x86 code to replace the current iOS-derived code.

NixOS and the changing face of Linux operating systems

Henry Wertz 1 Gold badge

Nix

I've used Nix; a software project I was taking a look at used Nix for reproducible builds (ultimately building firmware images to load onto a device.) I think some packages (edit: excuse me, closures*) had prebuilt binaries (with the option to build from source, but the binaries had checksums etc. to ensure they were identical to what the build would produce anyway.) For source builds, it has some tools that strip out whatever variable information (dates & times, I suppose) the compiler or linker throw into the binaries, so it can directly check that the result of the build is identical to what it should be.

I found it rather difficult to use. It puts everything in /nix/; there are subdirectories under there with the inscrutable hashes alluded to by TFA. Each app has the ldconfig, path, etc. set so it's not expecting anything in the traditional /bin, /sbin, /usr etc. (your home directory does stay in /home, at least it does running Nix on a normal system.) It has a vaguely FreeBSD Ports or Gentoo portage style set of directories with the available "packages" (I don't know if that's what Nix calls them) in there.
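The rough idea behind those hash directories, as I understand it, is "store path = hash over the name, version, and every input", so two different dependency sets can never land in the same directory. A toy sketch of that idea (NOT Nix's real hashing scheme):

// Toy content-addressed store: derive the output path from a hash over the
// package name, version, and declared inputs. Not Nix's actual algorithm.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn store_path(name: &str, version: &str, inputs: &[&str]) -> String {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    version.hash(&mut h);
    for input in inputs {
        input.hash(&mut h);
    }
    format!("/nix/store/{:016x}-{}-{}", h.finish(), name, version)
}

fn main() {
    // Same package, different glibc input -> different store path.
    println!("{}", store_path("gimp", "2.10", &["glibc-2.33", "gtk-2.24"]));
    println!("{}", store_path("gimp", "2.10", &["glibc-2.34", "gtk-2.24"]));
}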

But, if you have multiple programs that are built on top of the same set of libraries, you do in fact have only 1 copy of those libs taking up space. I don't know how this works out in practice, though; if you wanted to "update", say, gtk or libc, you either have to update a large number of packages to the updated dependency (and compile them, either on their end or yours...), or alternatively the packages "drift" in what versions of libs they want and you do end up with multiple library versions building up.

*Re: closures. That was one issue with Nix; I found the terminology confusing as hell. Nix has done a mathematical proof that the builds are complete and reproducible -- among my comp sci classes, I took an algorithms class that was heavy on the mathematics of proving whether a bit of code was O(n), O(n^2), etc., and I still found the terminology rather difficult just for the section on how you're supposed to use the darn thing, let alone the proof.

Academics tell Brit MPs to check the software used when considering reproducibility in science and tech research

Henry Wertz 1 Gold badge

IRIX software

Yup, I helped out at a local lab once; they had some old data and a published paper, and they wanted to run a followup (process some new data using the same techniques for a followup paper). Running the current software with the old data did not produce the same results as in the paper (not a big change that'd alter the conclusions, but still...). They had a copy of the binaries, but they were for IRIX, and the SGI they ran on had been retired. I did end up getting them to run -- Linux for MIPS intentionally picked the same system call numbers that IRIX did, and it was a simple console-based app (text prompts asking to set various parameters and what file to read the data from), so it actually ran on x86 Linux using qemu-user-mips. The overhead of emulating MIPS was more than offset by the x86 system being years newer and like 10 or 20x the clock speed; it ran whatever it was doing in a few seconds (apparently it took about a minute on the original system.) It ran the old data with results identical to the paper (i.e. qemu was emulating properly), so they ran their new data with it and all was good.

Microsoft Defender for Endpoint laid low. Not by malware, but by another buggy Windows patch

Henry Wertz 1 Gold badge

"Server Core"

I think the root problem here is that "Server Core", "Windows 10 IoT", etc. were intended to be really stripped-down versions of Windows, but Windows itself was never designed to be stripped down that far. Not that they can't ultimately get it to work; but I've read about Microsoft running into all kinds of odd problems developing both of these: removing some services was easy, others were harder to remove, and removing stuff like the GUI (for IoT versions) was apparently surprisingly difficult (there were some oddly non-GUI-related things that the Win32 API traditionally required having a window handle to do).

In contrast, the Linux kernel and the components running on top of it are developed independently, and usually (other than maybe systemd... bleh) consciously avoid being dependent on anything else to the maximum extent possible. So the 'cloud' versions of various Linux distros are really just a matter of pulling out most server packages and all desktop-related packages from the server version of the distro (if it has one) and streamlining the startup script(s), since you're not mounting any disks, loading kernel modules or drivers, etc. Of course, for an "IoT" variant (to run on a Raspberry Pi etc.) you would want to mount disks, load modules, etc., so you remove excess packages but leave the startup more intact.

Bad news for Tencent: Chinese companies steer employees away from Weixin or WeChat

Henry Wertz 1 Gold badge

Trouble brewing

Seems like trouble brewing for Tencent. I just can't see an app company doing that well in the long term when they're banned from updating their apps.

Reviving a classic: ThinkPad modder rattles tin to fund new motherboard for 2008's T60 and T61 series of laptops

Henry Wertz 1 Gold badge

IBM Thinkpad

I never had the good fortune to own an IBM ThinkPad -- but I used some... the keyboards were great, the build quality was solid, and there was really not another system like them. And once Lenovo took them over and revamped them, there was no real replacement with similar physical qualities. I could absolutely see wanting to gut one and put modern hardware inside.

Sweden asks EU to ban Bitcoin mining because while hydroelectric power is cheap, they need it for other stuff

Henry Wertz 1 Gold badge

How?

How?

I agree with the end goal -- but how is this implemented?

I could see perhaps, rather than an actual ban, setting residential electricity prices so that a reasonable amount is billed at the lower rates and excess is billed at a much higher rate. (And for commercial use, power usage out of line with the type of business you're supposedly running is also billed at a high rate.)

MidAmerican Energy (here in the middle of the US) actually charges *lower* rates once your household use is over 1000kWh/month -- I just looked it up: in summer it's like 10.5c/kWh, but in winter it's like 8.5c/kWh for the first 1000kWh and under 4.5c/kWh after that... actually I didn't realize the bulk rate was that cheap. That's so cheap I should get some mining gear (not!).

But I could see instead charging the first X kWh at the lower rate and jacking it up to like 20c/kWh+ for excess usage -- that'd HEAVILY discourage cryptomining by simple market forces.
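Rough numbers on how much difference that would make to, say, a mining rig pushing a household up to 3000kWh/month (the rates below are just my hypothetical examples, loosely based on the figures above):

// Compare a declining-block tariff (cheaper above 1000kWh, like the winter
// rate described above) with an inverted-block tariff (much pricier above
// 1000kWh). All rates are made-up illustrative numbers.
fn declining_block(kwh: f64) -> f64 {
    let first = kwh.min(1000.0) * 0.085;        // 8.5c/kWh for the first 1000kWh
    let rest = (kwh - 1000.0).max(0.0) * 0.045; // 4.5c/kWh above that
    first + rest
}

fn inverted_block(kwh: f64) -> f64 {
    let first = kwh.min(1000.0) * 0.085;        // same base rate
    let rest = (kwh - 1000.0).max(0.0) * 0.20;  // 20c/kWh above 1000kWh
    first + rest
}

fn main() {
    let kwh = 3000.0; // hypothetical household plus mining rig
    println!("declining block: ${:.2}/month", declining_block(kwh));
    println!("inverted block:  ${:.2}/month", inverted_block(kwh));
}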

The rocky road to better Linux software installation: Containers, containers, containers

Henry Wertz 1 Gold badge

Ugh

Snap. Ugh. 1) It starts pulling down huge updates whenever it feels like it, even if you are on a low-speed connection -- forced updates, with no way to hold one back. 2) The bigger problem: the few I tried were broken. The attempt to sandbox one kept it from being able to access my home directory... then after I persuaded it to do that, it could literally open files from my home directory but nowhere else (I want to load a file from a USB stick or something? Too bad!) Another was supposed to use OpenGL but was unable to successfully do so. And this was Snaps (made by Canonical) on Ubuntu (made by Canonical), so forget using it on some other distro if it's that screwy on the primary distro.

FlatPak? Sounds like, as a practical matter, it's essentially running a parallel package manager -- the FlatPak depends on other FlatPaks to provide base libs. I saw it pointed out that you may already have Gnome, but you install one copy of Gimp and it installs another copy of Gnome using Fedora's FlatPaks; another package will install ANOTHER copy of Gnome using freedesktop.org's FlatPaks. I mean, if it works maybe that's OK, but it sounds like it could get a bit out of control pretty easily.

I ran an app or two as an AppImage, and that actually did work OK. They don't get cutesy with sandboxing, just blob up the libs and junk into a .AppImage file. To be honest the main one I've used is rpcs3 (PS3 emulator), but it acts just like a "portable" Windows app... you download it, you run it, and it works. If you want a newer version, download the newer .AppImage and run that instead. But I've seen plenty of complaints about this format too, since just like FlatPak and Snap it's including libs that are already on your system. The primary complaint is that apps that should match your desktop theme may not, since the AppImage will be looking for whatever theme it has within the AppImage file itself.

I've also heard these are all a bit of a PITA to package, to the point that some people "drank the Koolaid" and thought they should definitely ship an AppImage, FlatPak, or Snap, but could not sort out how to get their package to do so. Apparently this is not a terribly easy process.

Seaberry carrier board turns a Raspberry Pi into a desktop PC with 11 PCIe slots

Henry Wertz 1 Gold badge

ARM desktop

I could see doing an ARM desktop though. I ran an Acer Chromebook with an ARM chip (Tegra K1), and Chrubuntu ran sweet on it. Chrubuntu is regular Ubuntu with some tweaks for the Chromebook (power management stuff to handle the big/little CPU setup -- it had a low-power core, which it used very successfully (22 hour battery life); more or less, if it was going to run 1 core under about 800-900MHz, it'd kick over to the low-power core). OpenGL, OpenGL ES, and CUDA all worked, which was sweet. All the typical Ubuntu packages are available natively; I had qemu-user for x86 and x86-64 installed, could set the package system to allow x86/x86-64 packages to be installed, and it'd successfully run x86 and x86-64 binaries (...as long as they weren't multithreaded.) I ran a Samsung color printer "binary blob" x86 printer driver under it; it ran at about 1/4 native speed but could still spit out pages and send them to the printer over wifi faster than it could print them.

So the software is there even for an ARM desktop. This one's too pricey for me though 8-).

Henry Wertz 1 Gold badge

Just a tad bandwidth restricted

Plus, you have 11 PCIe slots, but it's going to be "just a tad" bandwidth restricted -- the Pi CM4 PCIe slot is a 1x Gen 2 slot, about 500MB/sec. That's a respectable amount of bandwidth, but not really if you're expecting to actually give all 11 cards a workout -- that's roughly 45MB/sec per slot with no overhead (I would not be surprised to find it's closer to 35-40MB/sec; PCIe is usually point-to-point, so divvying up a PCIe lane 11 ways probably does incur some overhead.)
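Quick arithmetic on that sharing (the overhead figure is purely my guess):

// Divide one PCIe Gen 2 x1 lane's usable bandwidth across 11 slots.
// The 20% switching/protocol overhead is just a guess for illustration.
fn main() {
    let lane_mb_s: f64 = 500.0; // PCIe Gen 2 x1, roughly 500MB/sec usable
    let slots = 11.0;
    let ideal = lane_mb_s / slots;
    let with_overhead = ideal * 0.8;
    println!("ideal share:        {:.0} MB/sec per slot", ideal);
    println!("with ~20% overhead: {:.0} MB/sec per slot", with_overhead);
}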

Do not try this at home: Man spends $5,000 on a 48TB Raspberry Pi storage server

Henry Wertz 1 Gold badge

PCIe speed

I wondered just how oversaturated that bus was. Well, the Pi CM4 has a single 1x PCIe Gen 2 lane... that is 500MB/sec. That's pretty speedy, but yeah, having a whole set of 2GB/sec+ SSDs loaded in there is definitely going to make the bus the limitation 8-)

Another brick in the (kitchen) wall: Users report frozen 1st generation Google Home Hubs

Henry Wertz 1 Gold badge

Regression testing?

Seems like it'd be good to regression test -- have a virtual device in qemu with each old firmware, to make sure (at a minimum) that after installing the latest firmware the device can boot far enough to get working firmware back onto it (obviously it'd be better if it actually booted to normal operation.)

When the world ends, all that will be left are cockroaches and new Rowhammer attacks: RAM defenses broken again

Henry Wertz 1 Gold badge

Sort of. There were a few game exploits and such where one would "glitch" the game (I recall this on Nintendo... second-hand, I didn't own one): they would reset it rapidly or jiggle the cartridge at the right time or whatever, and it'd either crash (if the wrong bits flipped) or flip the right bits and they'd get whatever powerups or whatever else the glitch would gain them.

Also, the DirecTV and Dish Network hackers 10 or 15 years ago (edit: probably more like 20 years or more) would do similar glitching when trying to read out keys on access cards or whatever they were doing.

Henry Wertz 1 Gold badge

Re: @msobkow

"When it comes to the PC on reception, the tradeoff is how much time will be lost if the PC falls over or applications fail because of bit errors."

But if that 1 o'clock appointment becomes 3 o'clock because bit 2 flips, that's a problem. I had one DIMM go bad (like 20 years ago): no crashes, but I wondered why Firefox's "File" menu had a typo in it (yep, the flipped bit hit menu text rather than code, by dumb luck.) It ALSO at times used the bad RAM for the disk cache, so (after I finally realized the RAM was bad and replaced it...) I was glad to see Ubuntu has a procedure for reinstalling *every* package on the system, since it became apparent several were corrupted at installation time.

Henry Wertz 1 Gold badge

Alarming

I find these attacks alarming. Not worrying about myself being exploited. But I would REALLY like to think I can access regular, non-overclocked RAM in any way I want without flipping bits on it.

Remember SoftRAM 95? Compression app claimed to double memory in Windows but actually did nothing at all

Henry Wertz 1 Gold badge

Outrage and Linux

I remember some magazine review -- don't know which -- that literally concluded "SoftRAM 95 does what it says" (so there was a bit of a kerfuffle when it came out that this software did nothing, since it also indicated this magazine was not actually testing utilities before giving them the thumbs up.)

As for Linux's stuff for this -- it's crap. zram is not a swap cache; it swaps into a compressed RAM disk (so some Android tweaks do turn this on, since it allows gaining "some" RAM without swapping, which should not be done to MMC or SD cards.) zswap? It's a compressed swap cache, but only up to 2:1 (with zbud) or 3:1 (with a modified zbud that at least allows 3:1), because that's how it stores compressed pages (so a page "can" compress 50:1 but will still take 1/3rd of a page.) Worse, there's no page eviction!!! (The least recently used pages in zswap are not swapped to disk when zswap is full.) So your first swapped pages get compressed and stay in the zswap cache, NEVER swapped out, so you then have a full zswap cache and any additional swap goes straight to disk (and disk swap is not compressed by this either.)
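A toy model of why the ratio caps out like that (a simplified fixed-slot picture of zbud-style storage, illustrative only, not the real allocator):

// Simplified picture of zbud-style storage: a compressed page lands in a
// fixed-size slot (half a page for zbud, a third for a 3-per-page variant),
// so however well it compresses, it still occupies that much pool space.
fn pool_bytes_used(compressed_bytes: usize, slots_per_page: usize) -> usize {
    const PAGE: usize = 4096;
    let slot = PAGE / slots_per_page;
    let slots_needed = (compressed_bytes + slot - 1) / slot; // round up
    slots_needed * slot
}

fn main() {
    for &size in &[100usize, 1300, 2000] { // compressed page sizes in bytes
        println!("{} bytes compressed -> {} bytes in a 2-slot pool, {} in a 3-slot pool",
                 size, pool_bytes_used(size, 2), pool_bytes_used(size, 3));
    }
}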

Zuck didn't invent the metaverse, but he's started a fight to control it

Henry Wertz 1 Gold badge

Re: It ain't got that swing

"When the technology is in place to mimic Snowcrash or Ready Player One, that's when the Metaverse can be made real."

That was what shocked me about this "Meta" stuff, the low quality of the graphics.

Second Life already has areas with Ready Player One-quality graphics (it varies of course; since everything is user-supplied, you have areas that were designed in 2005 or earlier, and it shows.) This has not really helped their user figures.

The technology to do that is here now, it's just a matter of finding any practical use to get people to want to use it, and I don't see what that use case is. For example, people already do video chats with multiple users in their little boxes, and if they want it to look whimsical they have trick backgrounds and filters; I have heard no desire from actual people (as opposed to Zuckerberg et al. trying to drum up interest) saying they'd rather be wearing VR goggles to pretend they're in a meeting room or some other shared space instead.

Henry Wertz 1 Gold badge

sadville

Just want to mention, again, that Sadville ("Second Life") has been around for something like 15 years. If one wants to see where an open-ended VR type system can end up, take a look there. FYI, their main income is porn & online gambling, but there's also anything else you'd expect... there are like thousands of square klicks of space in there, with a mix of still-active areas, abandoned areas that have been turned back into a natural-looking environment (forest, plains or prairie, snow prairie, desert), and other abandoned areas where the buildings and such were left up. There's a wide variety of stuff in there: a nice space museum, car racing, tours, skiing areas, some areas that are built just to be scenic; I saw a full-scale Death Star just floating above some areas, and on and on.

Originally, when subscribership declined, they began turning off mainland sims... but realized it'd look pretty screwy to have a "continent" with all these holes in it. Back when every sim was on a dedicated machine, shutting down abandoned sims was a direct cost saving; but by the time their population peaked and began to decline, they were running multiple sims on a single machine, and realized they could simply shut down sims with nobody in them... they fire up if anyone goes in or near the sim and, I suppose, shut down after some minutes with nobody in them.

HP's solution to running GPU-accelerated Linux apps on high-end Z workstations: Rely on Microsoft's WSL2

Henry Wertz 1 Gold badge

Time to not go for HP

Time to not go for HP! I would not pay for a Microsoft license just to use it as a hypervisor to run Linux. I don't want the support issues of maintaining a whole copy of Windows just to have it running as a hypervisor that I'm running Linux on. I don't want the slowdowns of having network and disk I/O go through 2 sets of drivers and 2 disk caches for such an unnecessary reason. I don't want the support headache of making sure a copy of Windows is fully patched, updated, and secured, of having it phone home, of having it decide on its own schedule that it's going to reboot or do background tasks, and on and on. Finally, it's pretty inefficient to have Windows using all that disk space, generating all that disk I/O (you know it won't resist indexing and whatever the hell else in the background), and using all those GBs of RAM (plus the inefficiency of having to statically allocate some fixed amount to the Linux VM) when you JUST want to run Linux on it.

And, really, the NVidia driver situation is pretty easy. It sounds bad when summed up like in the article, but it's not bad at all. If your distro has it, then install the nvidia package. If your distro doesn't have it (or you want to download it directly from NVidia instead of your distro's packaged version), then download the one from NVidia and run the installer. If it complains, quit running a bleeding-edge kernel. Most distros DO NOT run a bleeding-edge kernel unless you install one manually (right now that's 5.15.2); they run one that's had a few months for bugs to be worked out, and those few months give NVidia time to get their driver up and running too (for example, the "edge" kernel in Ubuntu is 5.13.0-21, 2 series behind the bleeding edge, and the "regular" kernel is a bit older, with bug fixes backported from newer series). It's true that older drivers for old cards do not support new kernels, but those drivers are for like GeForce 4 series and older; anything that supports CUDA will work on the current driver. As far as I know, nouveau is actually feature-complete on these old cards; the support missing from nouveau -- CUDA, Vulkan, and newer OpenGL features -- isn't supported by these older cards anyway.

Billion-dollar US broadband bonanza awaits Biden's blessing – what you need to know

Henry Wertz 1 Gold badge

Profitability

Thing is, broadband doesn't have to be fiber. 100mbps down and 25 up is pretty easy to maintain with modern wireless ISP equipment; the low population density that is a crippling disadvantage for fiber becomes not that big a deal if you get some wireless hardware with some range to it.

The problem in the past has been this cash being doled out to companies that then spent it to build out (at the time) their cable and DSL networks in areas they were going to build them out anyway, as opposed to the underserved areas it was meant for.

Henry Wertz 1 Gold badge

Hope it's spent well

I do hope it's spent well. The problem with past broadband investments: a large amount went to large telcos with some buildout requirements, but those were over 5 or 10 years, so 5 or 10 years later, when they didn't meet the requirements, nobody came and got the money back from them -- it just went into their coffers.

Swiss lab's rooftop demo shows sunlight and air can make fuel

Henry Wertz 1 Gold badge

Mad Max

Well I guess if I end up in some dystopian Mad Max style environment, at least I know it's possible to (eventually) produce some gasoline.

Keep calm and learn Rust: We'll be seeing a lot more of the language in Linux very soon

Henry Wertz 1 Gold badge

Rust's memory safety

A quick comment on how Rust guarantees memory safety -- the language is arranged so variables are immutable (created, then can't be changed) unless you specify they're mutable (can be changed later.) A variable is "owned" by a particular bit of code; if you call a function and pass the variable to it, that function owns the variable unless it's returned at the end. Things like this let memory safety (bounds checking, not using out-of-scope or freed variables, and so on) be checked AT COMPILE TIME using static analysis. It's very pedantic; any potential safety issue causes a compiler error. Since this analysis is done at compile time to show memory-safe variable and memory usage, there is no overhead at runtime. There is an "unsafe" keyword that can be put on functions and blocks: there's no sensible analysis to be done if some function is calling out to a C function or reading or writing some piece of hardware or whatever, so if you're doing something like that that'd make the compiler barf, you mark it unsafe.
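A minimal example of what that looks like (plain standard Rust; the commented-out lines are the ones the compiler rejects):

// Immutability by default, explicit `mut`, and ownership moves checked at
// compile time -- uncommenting the marked lines gives a compile error.
fn consume(v: Vec<i32>) -> usize {
    // `v` is moved into this function; the caller can't use it afterwards
    // unless we hand it back (or the caller passes a borrow instead).
    v.len()
}

fn main() {
    let x = 5;
    // x = 6;                  // error: x was not declared `mut`
    let mut y = 5;
    y += 1;                    // fine: y is mutable

    let data = vec![1, 2, 3];
    let n = consume(data);     // ownership of `data` moves into consume()
    // println!("{:?}", data); // error: `data` was moved above

    let kept = vec![4, 5, 6];
    let m = kept.len();        // len() only borrows, so `kept` is still usable
    println!("{} {} {} {} {:?}", x, y, n, m, kept);
}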

Nvidia open to third parties making custom silicon tuned for CUDA applications

Henry Wertz 1 Gold badge

Makes sense

Makes sense. NVidia was hoping in the past that CUDA would catch on as an industry standard -- effectively giving a nice big CUDA ecosystem in which NVidia cards would have a head start implementation-wise for quite a long time. This didn't really work out (ROCm for AMD stuff now has CUDA compatibility, but there's plenty of fragmentation between OpenCL, CUDA, ROCm, Vulkan Compute, and whatever macOS uses (Metal?).) If 3rd parties made CUDA-compatible compute systems, that'd of course give CUDA an advantage compared to the others.

What, Uber charges disabled people fees for taking a while to get into their ride? Doesn't seem fair, says Uncle Sam

Henry Wertz 1 Gold badge

Yeah that's not cool

Yeah, that's not cool. You would hope it would not have had to come to a lawsuit -- they should have distinguished from the start between people screwing around and not getting in the car (...or actively saying "Hey, could you wait 5 or 10 minutes?") and people having trouble getting into the car. I'd hope that if someone has so much trouble that it takes over 2 minutes to get in the car, the driver would offer to help, but I do realize maybe they won't.

FYI: If the latest Windows 11 really wants to use Edge, it will use Edge no matter what

Henry Wertz 1 Gold badge

Re: Windows 10 last version used

"I wonder if I can run the only few games I play on Debian Linux with Wine (32 bit since I am playing mostly old games."

Almost certainly yes. On Ubuntu at least (for the purposes of wine I don't think Ubuntu versus Debian makes any difference...), the "wine-staging" package installed "wine-staging-amd64" (64-bit support) and "wine-staging-i386" (32-bit)... which is a real trip: it installs like 100 32-bit packages to support it (32-bit libc and base libs, 32-bit graphics libs, 32-bit Mesa, 32-bit video codecs; Windows supports scanners, so it goes ahead and installs 32-bit SANE (Scanner Access Now Easy, Linux's scanner system), and on and on). But no matter -- I'm not sure all the 32-bit support libs even add up to 1GB of space.

My friend's running Tiger Woods '99; a few of his other late-1990s to early-2000s games ran without a hitch. One had some nasty version of SecuROM (it came out in 1999 and turned out to not even be compatible with Windows 2000, let alone XP, 7, etc., or needless to say the environment wine provides); naturally it ran once a "no CD" crack was applied. Two didn't run, and apparently don't run in newer Windows versions either; there were patches for both to make them run at least up through XP or 7 (I think they were written for like DirectX 3 and patched to be compatible with like DirectX 7 or so)... these patches made them run in Wine too.

Fun thing on games that need the CD put in? You can double-click the ISO, and Ubuntu (and probably most Linux distros) has a built-in ISO automounter (it does a "loopback" mount of the ISO, telling the kernel to treat it as a CD; Linux mounts the iso9660 or UDF (for DVD) filesystem, and this shows up in Wine just like a physical drive with a CD or DVD shoved into it). No fuss, no muss. I remember using Alcohol 120% back in the day for that.

It's pretty good at running new games too; Valve has very good game support in the Proton system on Steam. Proton is a patched-up version of wine, so these patches get upstreamed and filter back to wine over time.

Henry Wertz 1 Gold badge

I actually don't think it's a big deal

I actually don't think it's a big deal. I would oppose it for two reasons -- 1) It takes away user choice: if they're specifically running something like these apps to redirect these URLs, then that's what they want. 2) You now have an API with random exceptions thrown in, and that's not clean.

On the other hand -- 1) It reduces security risk (of some unauthorized app claiming to be the URL handler.) Of course this line of argument leads to the walled-garden iPhone/iPad type approach, and I'm not into that at all. 2) It is stuff built into Microsoft's menus and such (...I assume it's the included widgets using this Edge-only URL, not something forced on third-party widgets... if it's forced on third-party widgets, that's crap IMHO). In the distant past there was a .hlp/.chm viewer for help pages, and the weather and news app would probably have pulled headlines, articles, etc. and displayed them within the app. In other words, you would not have had control over the appearance of these elements anyway.

To be honest, I think I'd prefer my "real" browsing going on in Firefox (and if you prefer Chrome, Chrome) and have the Microsoft menu-based stuff come up in its own session anyway. I'm not a fan of taking away that choice, but I can see why they might want to do it.

That said... I run Ubuntu 20.04 with the GNOME Flashback desktop (I'm considering a switch to KDE; I messed around with it in a virtual machine a few months back and it looks pretty good. For those who don't know, you can log out of your desktop and a gear icon on the login screen lets you switch between installed desktop environments, even if you have it set to auto-login on bootup, so trying out a new desktop environment is pretty easy.) Maybe if I actually used Win10/11 I would find way too many "forced Edge" links and be annoyed.

The return of the turbo button: New Intel hotness causes an old friend to reappear

Henry Wertz 1 Gold badge

Wow...

" I would lucky b**tard to anyone who has a Scroll Lock key "

You're right! I have what I thought was a full keyboard, including a number pad, on my Inspiron 3505 -- but now that you mention it, it doesn't even look like scroll lock's available using "fn-some other key"!

Wow... games really check if you have a specific style of CPU and refuse to run? Gross. I would expect them to check for available instructions (if the game needs something that older CPUs don't have, so you don't get partway through game load and THEN have it crash), and AT THE MOST run a microbenchmark and print a warning if it says your system is probably too slow. I would NOT check based on (in this case) "Atom"... I mean, I couldn't foresee some CPU using Atom cores as low-power cores in a faster CPU, but I COULD foresee faster Atoms coming out in the future that would have been fast enough to run the game.

Reg reader returns Samsung TV after finding giant ads splattered everywhere

Henry Wertz 1 Gold badge

Parents lucked out

My parents lucked out (either the TV is just too old -- which I doubt, because it still gets software updates -- or they decided this would not fly in the US. Which seems odd, given the absurd amount of ads everywhere else on TV, but who knows?)

They do have a Samsung, a 4K smart TV, and it doesn't have ads plastered anywhere. Really lucked out I guess.

It also doesn't get to collect much info, since they just leave it on "HDMI 1" and have a Dish Network box running on it. (Of course, Dish is hoovering up all their private info instead of Samsung then.) (They bought it secondhand from a friend, so there's no irony-based problem of buying a smart TV and using 0 smart features on it.) To use that smartness a little bit, I did find the built-in video player plays stuff off Universal Media Server just fine (once I persuaded UMS to quit trying to transcode... which is trickier than it should be; several options that are supposed to do that DO NOT WORK). It seems to play any video I've thrown at it; it claims to play up through 4K 12-bit H.265 (which I have not tested, but it did play a 4K 10-bit H.265 with no issue), so that's nice.

Microsoft: Many workers are stuck on old computers and should probably upgrade

Henry Wertz 1 Gold badge

Uh-huh.

Microsoft releases an OS that only works on nearly-new devices, therefore everyone MUST upgrade. Right.

Two points here:

1) I have quite elderly hardware (a ~10 year old desktop at home -- i5-750 -- which I did upgrade with 16GB for a RAM-intensive project I was running a few years back) and it's fine, even running virtual machines and such. It has a GTX 650 in it, so I can even run my games on it. My parents have an even older Core 2 Quad (Q6600) and feel no need to upgrade. It is old enough to be "working harder" (you can see the CPU usage get higher on it compared to a newer system) but not actually maxing out and making you wait for it. But Ubuntu runs a treat on it; Chrome, Skype, Zoom, etc. are all just fine on it.

2) With the current chip shortage and vendors' zeal to cut costs to compete with Chromebooks (while paying the Windows tax), etc., some of the new hardware has dismal specs: I was shocked to see one system after another shipping with 4GB RAM, dual-core CPUs showing back up in more and more systems, and so on. I'd hate to buy a new system only to find the real specs are LOWER than what I already have (i.e. a newer design but less actual processing power, storage, and RAM.)

For me, the cutoffs for replacing a system would simply be: 1) If it's 32-bit, for a desktop at least, it's time to replace (Ubuntu 20.04 isn't even available for 32-bit Intel, Chrome's not built for it, and I don't think Zoom and some other stuff is either.) 2) Does it have enough RAM, and if not, can you add enough RAM? (Ubuntu with the "flashback" desktop will get by on 2GB, but I'd add RAM or replace the system if I had 2GB.) 3) Does it have enough processing power? I'd say more or less dual-core on up (of course, if you're video encoding or doing other CPU-intensive stuff where you start it and have to wait, you probably want to buy as fast a CPU with as many cores as your budget allows.)

Volkswagen to stop making its best-selling product for Wolfsburg workers: VW-branded sausages

Henry Wertz 1 Gold badge

Gross

Gross.

I'm all for having more veggies and less meat, but I am not a fan at all of taking vegetables and pretending they're meat. It just doesn't turn out well*. A vegan bratwurst sounds like about the worst thing I've ever heard of. If they want to cut meat consumption at the plants, I would consider offering a choice of the current currywurst or a somewhat smaller one with a side salad or plenty of toppings to pop on top (do they do that in Germany? We usually only put sauerkraut and possibly onions on a brat here, but I could see a variety of nice toppings to put on there.)

*I tried a sample of one "veggie burger" that was acceptable, but apparently it's got far more unhealthy stuff (fats and sodium and so on) than having a regular greasy hamburger.

New World: Grindy? Check. Repetitive? Check. Fun? We hate to say it... but check

Henry Wertz 1 Gold badge

Re: PC only

"@Binraider From the official game website FAQ "New World is only available on PC." So I assume that means Windows only. :( "

Don't know; I have VERY good luck running games under wine on my Linux boxes*. A high percentage of Windows games run under it just fine. With Steam on Ubuntu, I've had 1 game not run under Proton and several dozen successes; I would not be at all surprised to find out New World runs just fine under it. There's a checkbox that says something like "Use Proton for all games"; with that set, instead of only listing games Valve or whoever have specifically tested with Proton, it lists all of them. Both wine and Proton provide (up to the limits of your video card) DX 9, 10, 11, 12, OpenGL 4.x, and Vulkan support.

*Well, my old notebook had an Intel HD 4000, DX9-era, so that limited gaming quite a bit once the newer Unity engine lifted the minimum requirement to DX10.1 -- Linux & Wine cannot perform miracles. My desktop and new notebook support Vulkan though. My friend's Sandy Bridge supports DX11 feature level 10.1 under Wine and Proton... which is funny, since in Windows they only shipped DX10 drivers for Sandy Bridge... but at pretty questionable framerates and lower graphics settings, since it's not a fast GPU at all.

Henry Wertz 1 Gold badge

"

"Though the game starts off with NPC quest-givers fully voiced, this declines over time to the point that side quests just have some text."

"That seems like a pretty cynical design decision to lure players in with a bait and switch."

Oh yeah, I played Age of Conan and it was just like that. Funcom had fully voice-acted everything up through like level 20, and several cities with unique architecture; then they apparently ran low on cash (enough to finish and launch the game, but...), so the rest has little to no voice acting and loads of "cookie cutter" towns all about (the exact same buildings in the same order with the same NPCs, as opposed to at least randomizing the layout a bit even if they're the same buildings.)

"The blowing up graphics cards ended up being due to poor manufacturing I believe though."

Yup, 100% certain: running a CPU or GPU at 100% should never result in it burning out. It should be adequately cooled... but barring that, the thermal throttling should kick in and protect the hardware. However, with modern CPUs and GPUs there are usually several adjustments for temperature and voltage cutoffs (it'll throttle the CPU or GPU when these are hit). Gaming motherboards and GPUs love to tweak these thresholds up a bit. Do it with some care, and you end up with a CPU or GPU that runs a bit hotter before it starts throttling; it gets better benchmark scores but stays below critical temp. You may argue about reducing chip life, but it won't go up in smoke. Don't be careful with these adjustments, and you've effectively disabled the "critical temperature" cutoffs: demand exceeds cooling capacity for too long and the chip goes up in smoke (...or you keep the GPU or CPU itself below critical, but the nearby components get too hot and fail.)

Upcoming Intel GPU to be compatible with Arm

Henry Wertz 1 Gold badge

Re: Compatibility?

My guess -- and this is just a guess -- is that although current Intel GPUs present an interface to the programmer that appears to be PCI Express, they (since they are generally built into an Intel chipset, not onto an add-in card) are actually using whatever interface suits Intel. So Compute Express, rather than being some "standard" that Intel and only Intel uses (...no problem when it's an Intel-compatible board taking only Intel CPUs connecting to an Intel chipset), becomes one that other vendors use (...if it catches on at all.)

The standard on ARM (for on-board peripherals) is AMBA -- this was originally an ARM-only "standard", but it is now used by MIPS, PowerPC, RISC-V, etc. systems, simply due to it being royalty-free, and additionally due to ARM's "momentum" meaning there were far more AMBA-compatible peripherals available than for MIPS etc. (These peripherals are not usually purchased as physical chips; they are "resource blocks" that are placed on the main chip -- the ARM, MIPS, or RISC-V CPU and the peripherals will be laid out on a single chip.)

I wonder if this Compute Express is going to be like an AMBA extension (so it can fall back to AMBA speeds) or if the CPU must support Compute Express? I mean, either way it's fine, but of course Intel may sell more GPUs if one can decide to support full-speed Compute Express, or not bother and get whatever speeds they can over AMBA.

Real-time crowdsourced fact checking not really that effective, study says

Henry Wertz 1 Gold badge

I know this doesn't work

I know this doesn't work. I watched a game show where the person asked the audience why stuff weighs less on the moon than on earth: a) lower gravity, b) low air pressure. Something like 80 percent picked the wrong answer, so the contestant went with the "wisdom of the crowd" and lost. I mean, even if they didn't know the answer, low-pressure chambers are used for medical and other purposes on earth and the stuff in them does not start floating away!

Assange psychiatrist misled judge over parentage of his kids, US tells High Court

Henry Wertz 1 Gold badge

Wouldn't that increase the risk?

Judge says "having children to protect and raise reduces risk of suicide". (Edit: Never mind, it's some asshole from the US pretending they know what the judge would have said. Nevertheless... I'll leave this anyway:)

The judge is not a psychiatrist. They should ask an expert about that. To me, I would think having children that you know you will probably never see again would INCREASE the risk of suicide (never seeing them again because of being stuck in either a hellish US prison or an Australian prison (hellish or not; I don't know if AU follows the US's low prison standards.)) But I'm not an expert either. To me it seems inappropriate for the judge to have said they would have changed their decision based on their own opinion of how psychology works, rather than saying they would have thought this was possible and asked a medical expert further questions about it.

Zuckerberg wants to create a make-believe world in which you can hide from all the damage Facebook has done

Henry Wertz 1 Gold badge

Re: second life vibes

Exactly. You beat me to it. Second Life already exists and has been around for like 15 or 20 years. You can do EVERYTHING in there that he goes on about (it has a scripting language, so you can even script an in-SL object that interacts with the outside world.) And Second Life looks lots better than Zuckerberg's demo too (...in general; all graphics are user-supplied in SL, so there are some historical areas intentionally kept as they were in like 2005 or whenever.)

I had assumed (given the bubble in users on there years back, and the drop-off in users afterwards) that Linden Lab was still in business due to cost cutting (originally 1 sim -- 256 meters x 256 meters -- was on 1 server; they can probably combine 50 or more onto one now), but it turns out they are more profitable than ever -- due to porn and gambling. Make no mistake -- there's still a reasonably large number of users in there actually creating stuff, interacting (non-pornographically), and so on; they just don't bring in the revenue that people throwing $L (Lindens) into slot machines as fast as physically possible do.

Ironically, Second Life initially did have VR goggle support -- for the 1990s-era extremely expensive ones. They dropped it, and by the time the Oculus Rift etc. came out, they apparently found their code base was too antiquated (...it is still OpenGL 2 based) to add current VR goggle support to it.

If you're using this hijacked NPM library anywhere in your software stack, read this

Henry Wertz 1 Gold badge

Sinclair

Yup, my parents watch local news off KGAN (a Sinclair station); it is still autonomous enough that they didn't go off the air or have dead air instead of ads or whatever.

But it must have killed the teleprompters: the one day, they switched to another channel because the newscasters were just like "bear with us, we are having technical difficulties"; the next day (and still, I think...) they've been reading off of like 6 inches of printouts sitting in front of them.

Microsoft emits more Win 11 fixes for AMD speed issues and death by PowerShell bug

Henry Wertz 1 Gold badge

L3 cache?

I still don't understand how Windows is influencing the L3 cache at all, or why. I mean, Linux will measure cache sizes (it prints them during boot, and I assume uses this info to adjust some internals to fit in cache), and on some architectures, where the OS must take care of it, it enables the caches in early boot. AFAIK that's it; I don't think there's even a sysctl or an entry in /sys to turn caches on or off.
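For what it's worth, on Linux the cache sizes the kernel detected just sit there in sysfs -- something like this prints them (assuming the standard /sys layout; on typical x86 parts cpu0's cache indices cover L1d/L1i/L2/L3):

// Print the cache levels, types, and sizes Linux detected for cpu0,
// straight from sysfs. Assumes the standard /sys layout on a Linux box.
use std::fs;

fn main() {
    for index in 0..8 {
        let base = format!("/sys/devices/system/cpu/cpu0/cache/index{}", index);
        match (fs::read_to_string(format!("{}/level", base)),
               fs::read_to_string(format!("{}/type", base)),
               fs::read_to_string(format!("{}/size", base))) {
            (Ok(level), Ok(kind), Ok(size)) => {
                println!("L{} {} cache: {}", level.trim(), kind.trim(), size.trim());
            }
            _ => {} // no such index on this CPU; skip it
        }
    }
}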

Microsoft admits to yet more printing problems in Windows as back-at-the-office folks asked for admin credentials

Henry Wertz 1 Gold badge

Re: Easy Alternative

Yeah, for real. For the Linux HP driver there's HPLIP (a thing that notifies you if the printer is out of ink or paper, so it's optional really), plus the CUPS printer driver and the SANE (Scanner Access Now Easy) scanner driver; these drivers tend to be quite small, and HP's are no exception.

I think in Windows there are GUI components for both of these, so the actual print & scan driver packages are far larger: a photo management app, a scanner program, probably a banner and poster maker, some application to guide you through buying ink from them, and so on.

Henry Wertz 1 Gold badge

Re: Easy Alternative

Yes. The Pixma (they have an MG7720) is triple-supported -- CUPS itself has direct Pixma support. It's supported again by 2 driverless modes (aren't industry standards grand? There are *4* "driverless" printing "standards"; Apple uses 1 (AirPrint) and there are 3 others, and Ubuntu supports all 4.) Finally, Canon supplies a driver; it has a bit better color accuracy than the others, so I have their computer using that.

(This Canon driver annoyingly is an x86 binary blob -- I did actually install it on an ARM Chromebook I had Ubuntu on... I installed qemu-user-static, which adds some "binfmt" hooks so non-native binaries are *automatically* run under qemu. I added the x86 and x86_64 architectures in, I think, /etc/apt/sources.list, and when I installed the Canon driver it pulled in some x86 libs, and it actually ran fine. A bit CPU-intensive (emulating an x86 binary on ARM), but it stayed ahead of the printer, so no ill effects compared to native.)

------------

The advice to check for Mac support is spot on -- even in the prehistoric days, Mac support generally indicated a printer with multi-platform support in general and let you avoid "winprinters". More recently, OS X/macOS and Linux distros both use CUPS, so printer support is virtually identical between the two.

'Windows 11 has been successfully downloaded,' says update for Xbox version of Microsoft Flight Simulator

Henry Wertz 1 Gold badge

Wow that's dumb

Wow, that's dumb. Some advertising for Win11? Fine. Inaccurately listing an update to Flight Simulator as "Windows 11"? Dumb. I expect my updates to accurately list what is updated.

Apple's Safari browser runs the risk of becoming the new Internet Explorer – holding the web back for everyone

Henry Wertz 1 Gold badge

Fanboiism

Sounds like a bit of Apple fanboiism to me.

The 71 percent compliance is not about Apple failing to implement the web as determined by Google. It's a standards compliance test, not a "what Google does" test. Not implementing something like USB access is sensible, but most of this is simply Apple not keeping up with standards.

Intel teases 'software-defined silicon' with Linux kernel contribution – and won't say why

Henry Wertz 1 Gold badge

mainframe model?

This is something IBM has done with their mainframes -- they are sold over-provisioned (extra CPUs, extra RAM, etc. already installed but inaccessible until the user pays for it).

And more specifically related to CPU features... At some point (around 2000, I think), as they saw interest in running both the existing mainframe workloads and Linux, they found a few CPU instructions they could disable that Linux does not use* but mainframe OSes do, and from then on customers have been able to pay some amount to enable an additional CPU, or some lesser amount to enable it for Linux workloads only. In other words, paying extra to enable some additional instructions.

*I suppose gcc or clang don't generate these instructions, so Linux kernel + applications are not going to use them.