* Posts by Lee D

4231 publicly visible posts • joined 14 Feb 2013

Worst. Birthday. Ever. IPv6's party falls flat

Lee D Silver badge

Re: Lack of commitment

Living off fat consultancy cheques and retirement obtained through that exact scaremongering.

Lee D Silver badge

Re: Who in their right mind wants cloud based...

To be honest, I'd be quite happy to have something upstream that allows me to control incoming traffic.

You'll always need equipment on the premises doing the basics, but how nice would it be to be able to ensure that, even if someone gets into your network, hacks your router, etc., you have another line of defence in the form of, say, just a basic firewall in the cloud? It would help against DDoS, too. Tell the upstream system not to send that junk down to your own line before it can even affect you. Block SMTP ports at the border, or even "all outgoing SMTP that doesn't use the ISP server" or similar (so even if you're hacked, people can't send out email from your router). Block UPnP and recent attacks before they hit your home router, etc.

I think it's a service from an ISP that I would pay for... especially if it used a brand-name firewall (e.g. hosting a virtual instance of Smoothwall that lets me configure it as a front-end to my connection) and let you keep stats on everything, as such things do... so you can work out that most of your incoming stuff is for YouTube or whatever. I imagine some high-end places do this for you already, in fact.

If you're worried about your ISP snooping, then you should be treating every packet to them as untrusted anyway, so it wouldn't make any difference. But I remember PlusNet, back in their proper techy days, would block your web access if they detected open port 139 traffic, to warn you that you were filesharing to the world. They obviously had systems on their end, and then you just needed to sign in and tick the "Yes, I know this is stupid" page if you actually wanted that. Same idea, just customer-accessible. I can remember thinking at the time "if only I could use that myself to block all the junk that comes down the line before it actually does", back in the early ADSL days.

Hell, if it were a Smoothwall-like appliance, I'd quite like to set it up to warn me about usage during the day, particular alerts (IPS etc.), reverse proxying, etc. before they get to send packets down my home connection.

Lee D Silver badge

Then treat IPv6 as a separate network entirely. You don't have to touch, fix, fiddle or break your existing systems whatsoever. Just present an IPv6 interface and push that separately (NAT) as if it were coming from one of your own spare external IPv4s.

75% of mobile users coming in over IPv6 is significant. 20% of Google searches too, if you read their own stats. Because 4G etc. specify IPv6 support as a requirement.

ISPs will run out of addresses. Think: Are there going to be more people in the world tomorrow, or fewer? More online, or fewer? More devices they own online, or fewer? Eventually they will run out, and I assure you that moving your whole ISP to a NAT configuration while simultaneously trying to get new customers on board without enough address space is a world of hurt compared to making your IPv6 testing plans go live.

It's all excuses, from what I see. "Because we don't have to yet" just stinks of poor planning. Imagine if they said that about Apache upgrades, HTML5 support, SSL cipher upgrades, etc.

ICO smites Bible Society, well fines it £100k...

Lee D Silver badge

Even Jesus survived a reboot.

In defence of online ads: The 'net ain't free and you ain't paying

Lee D Silver badge

Re: Sorry, but no.

You can't be a parasite if people are giving away free food that you choose to eat.

If you don't want people to consume without paying, make it pay-for. Then watch how nobody cares enough to pay for it.

Literally, your "value" as this kind of site is greater to the advertiser than it is to the readers.

And your content isn't good enough for people to pay for.

This isn't an insult - the same is true of billions of sites. Including all my own, anything I've ever written, etc. It would be nice to be paid for the tutorials, software, in-depth researched articles, etc. that I've written on all kinds of subjects over the years. But if I relied on that, I would literally be permanently unemployed.

My father-in-law is a published author. 30-something books to his name, published over 30+ years, with big-name publishers; one of them was a "standard text" for over a decade. It's a game, when I go to a bookshop or library, to find one of his books - and it's (usually) possible, even in foreign languages.

You know how much he makes? A pittance. He makes more money from Amazon Kindle than any of his agent-published books, and he describes that as "enough to have a coffee in a cafe each day".

If he can't make a living on selling content, and that's after 30 years of effort, then for sure some blog-writer or article-submitter isn't going to, and neither is a website facilitating that for "free" to its users.

People won't pay to read an opinion.

Lee D Silver badge

Re: This generation has decided not to pay for anything

"As a software developer who makes Windows desktop applications..."

Oh come on. Up until then it was just trolling, but do you honestly think people pay to put software on a desktop nowadays, and that that's a viable income stream?

If it's not web/cloud, nobody's going to bother unless it's something niche and high-end (e.g. needs some serious processing / GPU usage, etc.).

Lee D Silver badge

I pay for content. I paid for shareware. Everything from WinZIP/PKZip to Z80 Spectrum emulator (Gerton Lunter).

I pay for my movies. I pay for my games. I pay for my applications. I pay all required and proper licensing. Hell, I paid for my AVATAR that I use on websites... literally the image I upload in the "user avatar" box on forums that support that. I paid for it. I literally have the email that proves I can use it for the purposes I use it for, from the original artists. I paid for every artist-created pixel I use in the games I write, every font, every MIDI note, every library that requires it (though I tend to use open-source ones).

And I don't object to advertising. But I object to this:

- Imagine the Internet doesn't exist, but "Internet ads" pervade other media that we used to be familiar with.

- The front page of your "free" newspaper has a huge headline and "continued on page 3". You can't turn to page 3 of your newspaper without inserting a card that you only get by subscribing. Newspaper goes in the bin.

- You turn on your TV but before it will let you change to ITV, you have to sit through 30 seconds of ADDITIONAL adverts. If you switch back to Channel 4, they throw adverts at you again. And then back to ITV and you have to wait another 30 seconds. Oh, and the volume control is disabled and you can't do anything else while the 30 seconds is happening. Oh, and the normal program adverts are still in there half-way through the program too.

- Additionally, 1/3rd of your TV screen is just constant flashing, blinking, repetitive adverts at all times, no matter what you're watching, usually adverts for bigger TVs.

- The post comes through. It has the usual junk mail. But when you try to dispose of it, it jumps back out of the bin and tries to jump in front of your eyes again. It does this several dozen times. Every time you read the advert, however, it automatically grabs you by the ears and tries to make you stare at other adverts in the area. Sometimes it tries to mug you for your credit card or phone.

I'm not sure we'd tolerate those.

Additionally, the simple fact of the matter is: I won't pay just to access a website. Especially not websites that do things I don't care about, and don't cover enough stuff I do care about. It would be like asking me to subscribe to a knitting magazine.

Honestly, I wouldn't pay for The Reg. It's a nice forum. There's good discussion. The occasional article is interesting. But I wouldn't pay £1 a month for it. I'm really struggling to think of ANY of my usual websites where I'd do that. I bought a subscription to Slashdot once. I bought a subscription to SoylentNews once. I paid for LWN.net subscription once. I reckon it would work out between 1p and 10p a year for those sites, overall, and that was because at the time I was on them multiple times every day and found them really engaging. But... I wouldn't pay 10p a year for any site, I wouldn't pay 10p a year for Slashdot any more, and I wouldn't pay 10p a year to a whole bunch of other things that I use a lot more often. I probably wouldn't pay 10p a year to access BBC News (ignore anything about TV Licence, etc.).

I might pay a few quid a year to access iPlayer, however. No different to Netflix, TVPlayer, Amazon Prime, etc. But I don't pay to access my Google Play at all. I've paid for titles, but not for access.

Not everything has a "right" to make money, no matter how much effort goes into it. When a multi-million-dollar blockbuster movie employing thousands of people has me umming and ahhing over a £5 DVD purchase (hell, I even nearly didn't use a £3 cinema ticket recently until I realised I could watch Deadpool 2 with it, and even then it was a close-run thing), what makes you think I'm going to pay fractions of that for literally half an hour of a journalist's time on a short, non-techy article that's often beaten by some funny guy pounding out a blog for free?

I'm glad The Reg exists, but if it didn't others would. They always did, long before the days of online advertising, and probably always will. It'll be amateurs who get popular and then it turns into a job and then ten years later they realise it's become unsustainable and they move on, and then someone else will pop up, and so on.

My brother used to make a Scouting site that got more hits than the official Scouting sites for basically all of the 2000s. He never charged a penny, it was a labour of love, and serious amounts of expertise and information went into creating a very popular and useful resource. Advertising got him a deal with Millets and Blacks and places like that for a couple of hundred pounds a year, and then Google ads knocked it out of the park overnight, but still it wasn't more than a few hundred a year. To be honest, I don't think if you added it up it would cover even the hosting costs over its lifetime, even subsidised as they were by a very charitable commercial hosting company.

Content just isn't in the realm of being worth paying for. People might pay for a newspaper - what? 50p? It's a long time since I had to buy one, as I only ever read The Metro if I'm stuck on a train and otherwise I use online resources. Let's say that 50p a day to one company is the absolute most people will pay for all their news content. It just means that there's too many players for anyone to ever make money on that basis.

Sorry, but if advertising works: Good for you. Keep it sensible and out of my way and some other mug at an advertising broker's somewhere will pay you to write a blog and call yourself a journalist. Cool, for us all.

If it doesn't work: Nobody else is going to pay for this stuff.

Now the exception MIGHT be - and I can't see it ever happening - if a bunch of popular sites/services all got together and said "if you pay £5 a year, you get access to all these sites..." and it included a bunch of things that I could see myself using... then it's possible I'd do it. I'd have to get as many hours of entertainment / information out of it as any other service, but it's more viable that way. And, crucially, it would have to involve multiple rival sites that are in direct competition with each other - the sites I go on to find the stuff that The Reg doesn't cover, or covers the other side of, etc.

So... don't complain about advertising. It buoys up a lot. But it's in no way necessary that I have to feel grateful or somehow change how I consume advertising in order to allow that. If advertising changed (and online advertising has been very nasty at times), or the content quality dips even a fraction, then I just stop consuming that content. It's not the end of the world at all.

And as one of the only people on the planet who bothered to pay for WinZIP or my online avatar, you can be sure I'm one of the people most likely to give money for things I find useful.

As time goes by, it's only going to get worse. Open communication means that the traditional media etc. industries are going to get more and more stretched. When something happens now, do I just read BBC News coverage and nothing else? Far from it. I will dig into everything from major news channels, to Twitter accounts of those involved, to official statements from those involved, to commentary from random people, etc. I take it all with a pinch of salt but it's the only way to get all sides.

When Spectre/Meltdown hits, do I just pop on the Reg to see what everyone else did? No. I'm all over multiple resources.

As children grow up in that environment, they are not going to pay to see things any more. It's already alien to pay for TV to most of them. Podcasts are basically "free radio". They skip adverts routinely. They share Netflix with all their mates. And so on. Nobody's going to pay for content no matter how much you beg.

When there are millions of content producers, convincing millions of people to pay for even one of them is extremely difficult to do, and not likely to net much. But when the advertising dries up, the only thing that will die is commercially-reliant stuff. All the big business. All the hobbyists/enthusiasts/interesting people will still be around.

The advertising world toyed with micro-transactions but I can't imagine a microtransaction that you could charge to pay for this article, for instance. Or my comment (which may well be longer than the article - and I did it for free!). You'd literally be into fractions of thousandths of pennies. Because otherwise my daily media bill would be through the roof.

(Ironically: People are still paying £100 a month for their Sky... I can't work that out at all).

Enjoy it while it lasts. But don't expect to guilt me into enjoying being subjected to it.

Oddly enough, when a Tesla accelerates at a barrier, someone dies: Autopilot report lands

Lee D Silver badge

Re: Fire Department

The problem is the cell compartmentalisation. It takes only one cell going into thermal runaway and you have flames. But cooling them involves pouring water on the middle of a battery that's almost entirely solid metal contained in a metal box. It's hardly accessible.

It's not going to go "boom" on contact with water, but it's going to expand, release gas, put pressure on nearby cells, all contained in a fixed size box with thousands of other cells that were - until then - untouched.

And as shown - even days later the tiniest sliver of stray/punctured/scored metal from the accident shorting out the cell starts a fire again.

I have seen a Macbook keyboard physically expand upwards and then ping keys off it, over the course of just minutes, because a battery got wet. The battery was literally 2-3 times its original size and punched/warped the aluminium casing in a matter of seconds. That's not helpful.

Lee D Silver badge

Re: Nothing is right first time

Gosh, if only you could trial them at slow speed on things that are lesser risk, in areas where it's safer to go wrong.

Everything from golf carts ("Drive me to hole 9") to milk floats, theme park transport to warehouses.

No, nobody did that. Nobody bothered. It was straight into "self-driving car, but it's not really self-driving, but everyone thinks it's self-driving, smack bang on public highways with all the other drivers at 80mph".

There's a little train for the kiddies, that's actually just a wheeled vehicle, that drives around my local shopping centre. Pay £1. Stick the kids in. You loop around the upper level and come back. There are low-speed pedestrians everywhere, the train makes a noise to let you know it's coming, it travels at about 5mph with some minimum-wage employee on the controls, past other shoppers, in a controlled environment, on a fixed route (that isn't railed or anything, just drives through the shoppers).

That would be INFINITELY safer to test on, especially as regards "What if someone just steps out in front of us". Worst case, you'd catch a pedestrian on the ankle. I see no evidence that the code from Tesla et al. has been tested significantly in such environments. Hell, a self-driving shopping cart! Genius! And a sub-product you can actually sell even if you can't scale it.

But these things are still making stupid mistakes and are entirely unpredictable.

Lee D Silver badge

Ignore ALL the auto-pilot stuff. Let's pretend that doesn't exist. Let's pretend it IS just cruise control.

Every modern car I know with cruise control has auto-braking on it. It can follow any solid object and stop if it detects an obstacle BEFORE it smashes into it and puts your brains through the steering column.

Whether or not the Tesla was "auto-piloting" by any definition, why the hell did the car not say "Hey, that's a solid object and I have less than a second before I die a burning death of carnage and flame" and at least attempt to brake?

The answer? Because it's SIMPLY NOT CAPABLE OF DETECTING OBJECTS. Simple as that. It has no idea if there's a solid impenetrable concrete barrier in front of it, approaching at speed. It simply will not brake.

Whether or not you think you can trust the thing in a lane on a highway, it can't even do the simplest, most basic thing with all its fancy sensors. Detect a solid object directly ahead within the path of its steering that approaches at a normal vehicular speed. Whether that's a lorry or a concrete divider.

Given that it can't even sort the basics out, using an array of sensors, what the hell are people doing entrusting lane-control, speed management, etc. to it via a camera?

Notice: IT DIDN'T EVEN TRY TO BRAKE. Not even at the last second. It just doesn't know that the nearest object is 10m away and closing at 10m/s, which gives it less than a second to DO SOMETHING about it, and it can't even be bothered to beep, brake, swerve, avoid, or display a fecking popup on the dash.
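The sum the car apparently never did is one line of arithmetic: time to collision is range divided by closing speed. A minimal sketch of that check, in C - the numbers and threshold here are invented for illustration, and a real AEB system fusing radar/camera data is obviously far more involved:

    #include <stdio.h>

    int main(void) {
        double range_m = 10.0;        /* distance to the obstacle */
        double closing_mps = 10.0;    /* closing speed: wheel speed vs a fixed barrier */

        double ttc_s = range_m / closing_mps;   /* time to collision, in seconds */

        if (ttc_s < 1.5)              /* invented threshold */
            printf("BRAKE NOW (time to collision: %.1fs)\n", ttc_s);
        return 0;
    }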

These things are deathtraps if they can't do the simplest things. Things that "non-intelligent", non-autopilot cars can do as a matter of course with their cruise control. Things that are increasingly STANDARD on modern cars, let alone some extra-bolt-on to some (let's face it) prototype boy's toy.

I'm not suggesting reliance on them. I'm not suggesting that the driver was attentive or shouldn't have been in control. But, ffs, come on. It just smashed into a solid object which happened to be approaching it at the same speed as its wheel-speed, and it didn't even notice, care, react or warn of anything. It just went bang. Lanes or not. Accelerating or not. Human or not. Just smack, straight into it.

Stop trusting your life to this junk.

Plans for half of Europeans to get 100Mbps by 2020 ain't gonna happen – report

Lee D Silver badge

I live in London (Greater London, but still a very, very urban area).

When I moved into a flat last year that was only built in the 80s/90s, the options available to me were:

- ADSL, guaranteed minimum of 1Mbps. Maximum 8 theoretical. "Likely": 3.

- VDSL, guaranteed minimum of 5Mbps. Maximum 75 theoretical. "Likely": 5

I didn't even bother, and just bought a 4G box instead (no guarantee at all but has never been less than 10, I've seen it go to 40, and usually it hovers around 30 down, 20 up with pings around 20-40ms).

Monday: Intel touts 28-core desktop CPU. Tuesday: AMD turns Threadripper up to 32

Lee D Silver badge

Re: Gimme speed

Few things are CPU-limited that couldn't work better with a bit of rejigging and some parallel processing (e.g. GPU processing).

But things plateaued really quickly because they hit physical boundaries.

Nothing stopping people making a core without a consistent clock across it. It's perfectly viable, theoretically. But it would mean architecture changes, most likely. Or its performance for synchronous tasks would just fall back to "waiting for everything" and you would see no speed gain.

Heat and chip size are limiting... you end up with a very tiny, very hot chip, which is really bad for cooling, where you just want everything to be spread out and cool. It's like putting a soldering iron bit on your motherboard, basically. Just because it's small doesn't mean you can stop it destroying itself / its surroundings by blowing a fan near it.

I think we'd see much bigger gains, anyway, from things like memory that's closer to the chip without relying on tiny local caches to keep the CPU fed (isn't that the problem with things like Rowhammer, etc. too?). If we could bring the RAM into the CPU, and things like persistent RAM, then you'll probably see greater performance increases as the 3GHz CPU will always be kept busy as opposed to a 5GHz CPU that's constantly waiting on the RAM for data.

To be honest, I'm at the point where - despite as a kid looking at a 4.77MHz chip and being unable to imagine the speed of 1GHz, and then achieving it in only a few years - I look at the top-of-the-line chip frequencies and don't see them changing anywhere near as much in the next decade or so.

With virtualisation, parallelisation, etc. however it won't matter much for almost any "ordinary" workload. And HPC is moving towards GPGPU, custom chips etc. anyway. We'll see a quantum computer before we'll see a 10GHz home machine.

I think I'd rather my servers had 100 cores idle at 3GHz than anything else anyway. VM running slow? Add another half-dozen cores and some more RAM into it. Pretty much normal stuff (SQL, etc.) will scale just fine.

The problem there is the licensing is going to become insane unless revised (but I run Windows Server Datacenter anyway, so I don't particularly care for most things!).

It will lead to the point, though, where one server could in theory allocate 10 cores per client (to things like terminal services, etc.) and be just as fast as anything you could do locally, and at that point you might see a push towards thin-stuff again. Until the next fad-cycle, of course.

EE seeks guinea pig millennial hipsters for 5G experiments

Lee D Silver badge

Re: Wireless stuff is never 'low latency'

Agreed. I ran CS servers for over a decade.

However, DECENT Wifi, with proper signal strength, QoS, etc. can match cabled latency well enough to enjoy the game, no problem. It won't be quite as good, but the QoS, channel-sharing, downgrading of abusive wifi clients, etc. matter much more than the mere fact that "it's wifi".

When my gf used to watch videos online, my ping went through the roof, even wired. Proper QoS priority on the packets meant she could do what she liked unimpeded, and I still got sub-20ms ping.

People used to accuse me of having an advantage "because it's running off his computer". At which point I'd point out that the server was in France and I'm British. Literally, playing on a gaming laptop, over Wifi, over bog-standard Virgin cable, to France was faster than almost everybody's "gaming" setup. I even got into an argument with some guy who had one of those "killer" network card things.

With a decent router, 4G or Wifi is more than adequate for most people, and you have to be properly professional to require sub-50ms latency, most people cope quite happily with sub-100ms.

Lee D Silver badge

45ms is low-latency?

Currently on 4G.

Speedtest - ping 21ms. Jitter 3ms.

That's on a Giffgaff SIM with a Samsung mobile. At home I have a 4G Huawei box on a Three SIM and get similar.

You're gonna need a lot less than 45ms to sell that to me as "low-latency".

Stern Vint Cerf blasts techies for lackluster worldwide IPv6 adoption

Lee D Silver badge

Re: Internal fun...

That's a really poor excuse. Why would you need to?

In my entire network, I "know" two IP addresses. One is the gateway. One is the DNS. The gateway is also a DNS server. Everything else is in DNS.

Hence, no matter what I do, I never need more than those two. And I need those precisely when: Setting up some brand-new, from-scratch settings for, say, DHCP. Which happens, what? Once in each job? Maybe once every few years at best?

Literally the last time I read out an IP address? I can't remember. Machine imaging? No address needed. Configuration? No addresses needed (it pulls from the image or the DHCP or the network in some way). Giving remote access to someone? Cool... remote1.domainname.com (which is the first external IP address... remote2 is the other one... I have LITERALLY no idea what they are, and don't need to know. They are documented somewhere, but I've never had to type them or read them out except to create that DNS entry). Printers? Nope. All DNS nowadays (and no user needs to know the IP, and I don't WANT them to know, and knowing doesn't even help them, as they are VLANned and go through the print management system anyway - which is the only thing that actually talks to a printer directly - and the IP of the print management server that shares out all the user-visible printers? Yeah, that's in DNS somewhere).

Hell, I don't WANT to hard-code IPs into the system (into devices may be a different matter, but you can usually always let them get a DHCP allocation and then just reserve it). It makes replacing a machine so much more tricky. Wanna take serverA offline? Okay, spin up serverB. Make sure it's working and synced.

Change serverA DNS record to point at serverB instead (hell, you can CNAME it, still no IP address!). Hey presto, done. Did it all go wrong? Remove the CNAME. Did it all go right? Retire serverA. What their IPs were is literally unnecessary for anyone to know.

And I've yet to deploy a service where it demanded an IP address and wouldn't accept a fully qualified domain name instead (an IP range might be asked for, but I can't think of anything I'm deploying that requires a specific IP address - and if you have half a brain, your IP range is easily discernible from any of your IPs and your subnet - and you use, say, the first address as the gateway, the second as a DNS, etc., so it's easy to set up anything you do have to touch).

About the only thing I know that might demand an IP is things like HA heartbeats and stuff, but even then I'm pretty sure you can just use DNS names. I very much doubt that Google are sitting there coding in thousands of individual IP addresses.

Lee D Silver badge

Re: They are just being frugal

I would prefer my tech journalists to stop whining about poor take-up of IPv6 when there is literally no reason not to instruct their host to "turn on IPv6" on a test domain. It's seriously overdue given the amount of ribbing they want to give others, not that it's a vital technical resource. It's like mocking everyone for not using Windows 10 when you're still stuck on XP yourself.

Hell, it would make a whole series of interesting articles: How The Reg went IPv6, the problems we hit, and why haven't you done it yet?

Instead we get junk like this:

https://forums.theregister.co.uk/forum/1/2018/05/24/open_source_mano_release_four_lands/

Where there are literally two posts in the comments... me asking "WTH is this", and some random guy commenting "I don't know either", and that's it. Still none the wiser.

Yet every month, without fail, we get more and more dire warnings, articles about IPv4 allocations running out, coupled with statements like "It's about time everyone moved on because there are no addresses left", etc., but The Reg make no efforts that way. Not even tokens. Not even tests.

To be honest, the Reg reader survey is entirely "BI"-focused, yet most of their readers appear to be techy and want to forget just about anything to do with such managerial buzzwording - science, tech, sure, but not "BI". I'd much rather read a good article on the systems behind the Reg and what they use and how they implement a major project, than some guff. I suspect, however, that it would be technically embarrassing for them.

Journalists are journalists. But if they're sniping at the big places for not being ready, then they cannot ignore their own, much smaller, much easier, neglected internal system. Has The Reg updated their routers etc. against VPNFilter? What are they doing about Spectre/Meltdown? And how can they claim to be a good source of advice, articles and news about such things if they can't manage the basics?

Create ipv6.theregister.co.uk

Add an AAAA record to it.

Activate IPv6 on the frontend servers for that subdomain.

When it works, advertise it as a "prototype" and then start fixing things like logins, logfiles, etc. to work on it.
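Verifying it is a five-minute job too. Something like this, in C - using the test subdomain suggested above - reports whether the AAAA record resolves from a given box:

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <arpa/inet.h>

    int main(void) {
        struct addrinfo hints, *res, *p;
        char buf[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_INET6;       /* ask for AAAA records only */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("ipv6.theregister.co.uk", "443", &hints, &res) != 0) {
            puts("No AAAA record (or no IPv6 on this machine)");
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            struct sockaddr_in6 *a = (struct sockaddr_in6 *)p->ai_addr;
            inet_ntop(AF_INET6, &a->sin6_addr, buf, sizeof buf);
            printf("AAAA: %s\n", buf);
        }
        freeaddrinfo(res);
        return 0;
    }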

Even if it takes a year, two years, to get up-and-running you can THEN be sarky about places not supporting IPv6.

Lee D Silver badge

"PS: Yes, yes, we know, The Register is still IPv4, and not on IPv6. Word from our sysadmins is that we'll migrate Soon™"

Hoo-fecking-ray.

At least you're acknowledging it now, but my comment history shows a similar response for the last... five... six years? I can't be bothered to go back further than that.

Hear that? Of course it's Indiegogo's deadline for a Vega+ whooshing by

Lee D Silver badge

Re: Think of it as a donation, not a purchase

I have backed:

Defence Grid 2. Completed it. Got an ATI graphics card for a dirt-cheap price on that too.

Elite:Dangerous. Got it. Don't really play it that much, to be honest.

Dicecards (physical playing cards). Got it. Use them. Keep them.

Dead Man's Draw (physical playing cards). Got it. Love it. Play it all the time.

Joking Hazard (physical playing cards). Got it. Have played it with friends.

In terms of value for money, I'm way ahead of the game because:

- I researched before I did anything.

- Most of those projects were big-name things (sequels, from famous cartoonists, had already put out a popular and successful app but then made a physical game, etc.).

- It was throw-away pocket-money.

- They shipped on time.

Another one I backed was cancelled and refunded (because they got an investor, so didn't need the kickstarter investment, but I still bought the game anyway!).

Crowdfunding is great. DG2 wouldn't have existed without it, plus a lot of other things (Satellite Reign, as you point out). But it's not investment as you have no way to scrutinise things properly, and it's not a purchase/preorder as you have no guarantee at all. It's a "community project". It can fail miserably. Having seen projects raise a million dollars to produce a multi-colour LED torch, personally I think people get blinded into backing ridiculous things.

But if you just see it as "I like their other work, it'd be good to bung them a tenner to say thanks and maybe get a sequel", then it's absolutely fine. To be honest, if some of those projects had just put up a donate button, or a "buy three copies at a reduced price to send to friends" deal, I would have done that anyway.

Watchdog slams TSB boss for underplaying extent of IT meltdown

Lee D Silver badge

Re: You ain't seen nothing yet...

They already have.

I'm using it.

The transition was seamless, the old one still works, and I've seen no problems (apart from the fact they juggled things around).

Pretty necessary given that they won't be able to use a pan-European app after Brexit, presumably.

I see a satellite of a man ... Galileo, Galileo, Galileo, Galileo, that's now 4 sats fit to go

Lee D Silver badge

Re: What's the point of this?

Commercial GPS (e.g. shipping, air travel, etc.) is a pay-for service incorporating emergency signals (normally GPS is receive-only for us mere mortals, but the system makes money from commercial usage).

Lee D Silver badge

Re: Receiving the signal

I believe for consumer purposes, you won't notice any difference if the UK is involved or not. Like you can already pick up GLONASS and everything else if your phone supports it, the core product for the public is out there and usable.

However, you need a phone that supports GLONASS; they aren't (I believe) transmitting a plain GPS signal (as in the US GPS system), they transmit their own.

The next-gen of phones will likely all support GPS, GLONASS and Galileo. Whether it will make your satnavving any more accurate is questionable, really. It's supposed to but I doubt you'll see much practical difference.

The real problem is that we'll have to pay through the nose to use the interesting stuff (commercial, military, etc., for everything from planes to cruise ships).

Four hydrogen + eight caesium clocks = one almost-proven Einstein theory

Lee D Silver badge

Re: "...0.00000022 plus or minus 0.00000025...."

"The whole concept of absolute proof ought to give any self respecting scientist hives."

(Mathematician waves serenely from the sidelines)

Lee D Silver badge

Re: This is why science rocks

Placebo is, however, no better than placebo.

Hence, none of this should be prescribed in preference to... a sugar pill and someone saying "Yeah, this'll work".

Placebo is indeed very powerful. But MATCHING placebo just means "as good as anything else that does nothing whatsoever". If it's ever proven better than placebo, it is, effectively, medicine. But being "as good as" placebo means absolutely bog-all, except that nobody should ever be paying more for it than they would for a placebo.

It's like saying "The Emperor's new clothes are as good as wearing nothing at all". Then save the wastage of money and wear nothing at all.

Dual-screen laptops debut at Asus' Computex chat

Lee D Silver badge

Great, if you want to run a Nintendo DS emulator, or have two tablets sellotaped together at the hinge and call it a "laptop".

Otherwise? I honestly can't see the point at all.

Is it really that hard in this day and age to make a thin tablet that slots into the top of a laptop screen (behind the laptop screen itself)? When docked, you get a single screen and it charges up the second. When undocked, it's a wireless screen / tablet that connects to the laptop and acts as a second screen. Or functions entirely independently. At least for a few hours.

Kids bothering you? Pull out the screen and let them use it while you use the laptop. At a presentation? Pull out the screen and use it as speaker's notes/input to the main laptop. Working on a train? Tuck the screen behind the laptop screen and let it charge from it. I could literally glue a plastic tablet-charging sleeve to the back of a laptop screen, run a few wires for USB charging, and find some software to use a second computer as an extended monitor (Chromecast-esque), and hey presto, same idea.

I honestly can't fathom "notches", the little application strip thingy on the new Macs, or any other similar nonsense. I never figured out that gaming keyboard that was OLED on every key either. But it seems to me that nobody ever does the obvious.

Hell, sell the second screen as a tablet in its own right and save all the "unique" manufacturing costs.

Clock blocker: Woman sues bosses over fingerprint clock-in tech

Lee D Silver badge

Re: Salt free

Do you store the user's data in a way which can be linked to a person?

If yes, you're subject to proper control of that data.

Whether it's a hash, an employee number, a photo, a fingerprint, or their favourite toilet paper, you have indicated that Fred Bloggs has this property X. And you're storing both the name and property X and transmitting it off-site.

Bang. Subject to DPA/GDPR (in the UK/EU), and similar laws elsewhere. That person has the right to review, authorise, request deletion, etc. any usage of that data.

(P.S. Even if you anonymise entirely, you still need permission to store and use for given purposes. Otherwise, literally, I could take your work photo... spread it around the Internet and use it for stock-photos without your permission. Or a picture of your fingerprint. Or a copy of your DNA. Or a picture of your house keys. Or anything. It's all the same.).

Russian battery ambitions see a 10x increase in power from smaller, denser nukes

Lee D Silver badge

"With a power density per cubic centimetre of 10 microwatts"

"have emitted a prototype nuclear battery packing 3,300 milliwatt hours of energy per gram"

Hold on a mo. Something's not right. Either that's a fecking huge battery or it literally generates an absolute damn pittance of electricity that you could probably get from heat-capture of a human body / capturing stray Wifi.

Is 10 microwatts really all that useful?
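Run the numbers and both halves of that make sense at once. Back-of-envelope C, taking the article's two figures at face value and assuming a density of roughly 9 g/cm3 for the nickel-63 material (my assumption, not theirs):

    #include <stdio.h>

    int main(void) {
        double power_w_per_cm3 = 10e-6;    /* "10 microwatts per cubic centimetre" */
        double energy_wh_per_g = 3.3;      /* "3,300 milliwatt hours per gram" */
        double density_g_per_cm3 = 9.0;    /* assumed, roughly nickel */

        /* Volume you'd need just to light a 1 W bulb: */
        printf("litres per watt: %.0f\n", (1.0 / power_w_per_cm3) / 1000.0);

        /* How long one cubic centimetre could sustain its own 10 uW output: */
        double hours = (energy_wh_per_g * density_g_per_cm3) / power_w_per_cm3;
        printf("years per cm^3: %.0f\n", hours / (24.0 * 365.0));
        return 0;
    }

That prints roughly 100 litres per watt, and roughly 340 years per cubic centimetre. So: masses of energy, dribbled out over centuries. Pacemakers and space probes, maybe; anything that needs actual power, no.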

You have suffered without red-headed emoji for too long. That changes Tuesday

Lee D Silver badge

I find it quite disconcerting that it took just as long to integrate ancient Mayan numerals (which are pretty much one of the first examples of a written numeric system) as it did to put in a picture of broccoli.

I mean... seriously. Emojis, etc. - sure. But do the important things that are much closer to the purpose of Unicode first.

No researcher out there is quietly crying out for a symbol of broccoli in their fonts in order to record, categorise and translate their ancient documentation.

Visa Europe fscks up Friday night with other GDPR: 'God Dammit, Payment Refused'

Lee D Silver badge

Re: Cashless society

- I don't carry cash.

- I've never had a card cloned (stop letting it out of your sight / refuse to plug it into anything dodgy-looking).

- I have an app for my bank that tells me if anyone is doing anything on my card, and specifically warns me immediately on big transactions. You can't do that with cash, and it's a prudent thing to do on any account.

- I carry several different types of card (it's not hard, a pre-pay MasterCard/Visa card costs you virtually nothing and works in every country I've ever tried it).

- At the end of the day, I'm not reliant on the card to purchase anything THERE AND THEN. Any significant purchase is considered, planned and done in advance. I'm not going to starve even if Visa goes offline for a week.

Do you people not use Direct Debit for your bills, your mortgage/rent, your council tax? How is that any different?

Side-point: I've never been mugged or burgled. But for sure, if I had, I'd rather they stole cards I can cancel than cash that would just disappear.

Stingray phone stalker tech used near White House, SS7 abused to steal US citizens' data – just Friday things

Lee D Silver badge

Re: I am still surprised

If you're using cell towers to communicate anything even vaguely important without treating it as an untrusted medium, you're a damn idiot.

For any proper scenario, it really doesn't matter if you broadcast your classified information over the 10 o'clock news. That's PRECISELY what modern encryption is designed for, and to facilitate.

As such, it should NEVER be a national security issue, and certainly shouldn't be anything even vaguely approaching important for somewhere like the White House. And tapping into SS7 etc. and redirecting people's mobiles to proxy them can be detected quite easily by the companies in question. Has anyone even asked if this is actually done DELIBERATELY to stop, say, terrorist attacks and bombs coordinated over the cellphone network near the White House?

But if you're that stupid as a secure government entity to just use a commercial network as-is, without bothering to encrypt information over it, then pretty much you've already lost anyway. You may as well just open the doors and let the Kremlin into your files.

You should find out what's going on in that neural network. Y'know they're cheating now?

Lee D Silver badge

Re: Such problems were known _way_ before the current hype

This is precisely the problem.

What "neural nets", "machine learning", etc. are actually doing is "brute force to find a set of conditions that result in the desired criteria the majority of the time".

- "That's a banana."

- "Okay. All bananas are 400 pixels wide."

- "That's NOT a banana."

- "Okay. Most bananas are 400 pixels wide but they all have a white pixel in the top left."

- "That's a banana"

- "Okay. Most bananas have a white pixel in the top-left, are 400 pixels wide, and look a bit yellowy overall".

- "That's a banana"

- "Okay. Most bananas have a white pixel in the top-left, are 400 pixels wide, and look a bit yellowy overall, and say Getty Images on them".

...

And so on. In between, there is NO introspection into the criteria being inferred. And it will fit the training data, for the most part. And if the training data is large, you might get lucky and it might be useful enough to put a set of ears on a webcam image in roughly the right place. But the training data can't be COMPLETE, and so you cannot use it with any surety. This is why "machine-learning AI self-driving cars" are basically suicide-murder-boxes.

Not only that, they plateau quickly because they can't "unlearn" those early basic assumptions (you can't even tell what they were, let alone the machine itself!), so trying to train it to recognise planes and/or apples without literally starting from scratch is almost a complete waste of time.

Say it takes 1,000,000 pieces of training data to recognise a banana... it surely takes 10-100m pieces of training data to "untrain" it or retrain it to also recognise other things, and what's "not a banana". Literally, you have to show it enough "not a banana" examples for it to then be retrainable on "is an apple" without just assuming everything that's not a banana is an apple.

"AI", "machine learning", "neural nets" are all toys. Sure, they can do some funny things if you let them, but they are uncontrolled, uncontrollable, single-purpose toys.

At no point has anyone made an AI that literally can say "Hold on, so that's not a banana? But I was using these criteria. Can you tell me what the difference is between something that meets all these criteria but isn't a banana?". And yet that's a classification game we play with kids in primary school, where you make a "20-questions"-like tree to identify species, etc.**

The day we have an AI breakthrough is the day we have a computer that you program/operate by just clicking at the screen, and a big "Yes/No" switch to tell it off until it understands what it is that you want it to do.

Clicks icons.

Loads up PDF in Microsoft Reader.

Hastily press the NONONO! button.

It reverts back a bit, closes Microsoft Reader, opens it in Notepad.

Hastily press the NONONO! button.

It reverts back a bit, closes Notepad, opens it in Foxit.

Press the Yes! button. Now it knows what you wanted, and changes your settings to reflect that.

To program:

Hold down the "programming shift modifier" key.

Click button on screen.

"Alright, so what do i do when you click that?"

Click-and-drag to the printer icon on the desktop.

"Ah, right, so you want me to print something when you click that button".

Press the Yes! button.

"Oh, I'd print this... <shows screenshot>"

Press No button.

Hold programming modifier.

Click-and-drag around the current window in the screenshot.

Press Yes button.

** Ironically, I can remember an early piece of programming I did was to make a game where you give the computer the names of two objects, it asks you for a question that would distinguish them, then you give it another object, it runs through the questions, and builds the tree as you answer Yes/No to the distinguishing questions. Each time it ends up at something it doesn't know, you get to type in a question to distinguish it from the nearest thing. Does it have four legs? Does it live underwater? etc.

The computer had no intelligence, but you classified things by having it demand a distinguishing question. And after sufficient such training, it could play a decent game of 20 questions (for at least 2^20 possible objects!).
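For the curious, the whole trick fits on a page. A minimal sketch in C (names and structure are mine; the original certainly wasn't written in C): leaves are objects, internal nodes are questions, and every wrong guess grows the tree by one question.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Node {
        char text[128];             /* a question, or an object name at a leaf */
        struct Node *yes, *no;      /* both NULL at a leaf */
    } Node;

    static Node *leaf(const char *s) {
        Node *n = calloc(1, sizeof *n);
        strncpy(n->text, s, sizeof n->text - 1);
        return n;
    }

    static void ask(const char *prompt, char *buf, size_t len) {
        printf("%s ", prompt);
        fflush(stdout);
        if (fgets(buf, (int)len, stdin))
            buf[strcspn(buf, "\n")] = '\0';
        else
            buf[0] = '\0';
    }

    static int yes(const char *prompt) {
        char buf[16];
        ask(prompt, buf, sizeof buf);
        return buf[0] == 'y' || buf[0] == 'Y';
    }

    static void play(Node *n) {
        while (n->yes)                  /* walk the questions down to a leaf */
            n = yes(n->text) ? n->yes : n->no;

        char q[160];
        snprintf(q, sizeof q, "Is it a %s?", n->text);
        if (yes(q)) { puts("Ha! Got it."); return; }

        /* Wrong guess: demand a distinguishing question and grow the tree */
        char obj[128], newq[128];
        ask("I give up. What was it?", obj, sizeof obj);
        ask("Type a question that's 'yes' for it and 'no' for my guess:", newq, sizeof newq);

        n->yes = leaf(obj);             /* the old leaf becomes a question node */
        n->no = leaf(n->text);
        strncpy(n->text, newq, sizeof n->text - 1);
        n->text[sizeof n->text - 1] = '\0';
    }

    int main(void) {
        Node *root = leaf("cat");       /* seed with a single known object */
        do { play(root); } while (yes("Play again?"));
        return 0;
    }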

Lee D Silver badge

"Correct", "useful" and "misleading" are very different things.

Send printer ink, please. More again please, and fast. Now send it faster

Lee D Silver badge

Re: In the early days...

Which is why any place dealing with receipts will photocopy or scan the original as they receive it, and attach it straight to the expenses form / original invoice.

HostingUK drops offline after losing Farmer vs Fibre competition

Lee D Silver badge

You mean in the ditches, near the edges of the worn-hill, over the tree-roots, and right under the electric fence-posts?

Lee D Silver badge

Re: DR

A cloud is great.

As ONE item of redundancy.

You could spread across several different clouds, across in-house and cloud, across in-house, externally-hosted and cloud. But a particular cloud is ONE point of potential failure.

Anyone who thinks otherwise ends up with problems like this.

Anyone with a brain, in such an instance, would go "Okay, time to failover to our secondary site which has NOTHING in common with the primary... not a company name, not a cable, not a service, not a switch".

Whois? Whowas. So what's next for ICANN and its vast database of domain-name owners?

Lee D Silver badge

"We make the best packets. Nobody else can make packets like us...."

Artificial intelligence... or advanced imitation? How DeepMind used YouTube vids to train game-beating Atari bot

Lee D Silver badge

Re: Delusion

I happen to think that video is the WORST POSSIBLE WAY to learn. I've thought that ever since the "OU on BBC2" days. It's slow-paced, information-sparse yet data-heavy, unsearchable, etc.

Kids do watch YouTube to learn things but they are just sitting watching YouTube, not learning. I would watch YouTube to learn, say, how to unscrew that odd bit on my car that I can't visualise from the instructions. But my primary reference is never YouTube.

It's extremely annoying (and worrying) that when you search for the simplest of things, say, how to do X in Windows, how to change a battery in this item, how to repair a part of a laptop etc. that the top hits are all YouTube videos with 2 minutes of advert, 10 minutes of a bloke waffling, and about 15 frames of the actual thing you need to see in order to do it. Because they are top of the search, people ARE using those as a reference, or they wouldn't be at the top.

Sometimes you need video. But my rule is simple:

A picture paints a thousand words

A video paints a thousand pictures

(P.S. also a good rule of thumb for file sizes with people who don't understand why they can't email a 4 hour video...).

Hiding the information for "where to go on your game to find the secret level" in a video is a thousand times more information than necessary. A screenshot on a page would do.

Sadly, children are losing the ability to "learn"... which involves doing yourself (if everyone who just saw/read something became an expert, then we would all be geniuses already), repeating, and not just copying what someone else did.

There is an entire generation that, because of our own generation's surprise at being able to search for core information and finding it on any subject, believe that all you need do is search and watch someone else to understand something entirely. We've taught kids that "finding a way to access information is the same as knowing everything" (don't even get me started on VERIFYING information).

I actually find the trend of children WATCHING other people play computer games for hours on end incredibly disturbing... it's like a whole level of laziness above "my kid just sits there playing games". They are losing their reasoning skills, they are losing self-exploration, and they are losing the desire to actually learn and hypothesise, replacing it with some Borg-like hive-mind where only one person actually knows how to do something and everyone else just watches him.

That someone thinks this is a good model to train an AI on "because that's how we learn" shows you how ingrained that culture has become in such a short amount of time.

Lee D Silver badge

Re: Pointless

That's STILL NOT AI.

That's just a brute-force search on a buggy implementation of a single port of the game. It's like saying "you learned how to win at football" when you played on a pitch at your local park where you could kick the ball off the local doctor's surgery, having learned the point on the wall that makes it bounce above the goalie and score a goal in an oversized goal. The results are not transferable, they aren't "intelligent" (they just tried every possible direction) and it's certainly not learning or inventing.

Inventing is a matter of "skipping over the missing step". You don't need to learn every possible draughts/checkers opening, if you are intelligent. You can sit and, based on a limited knowledge of the rules, no database, and no brute-force, "infer" a good position/move. That's intelligence. Just trying every possible move is not intelligence.

We don't have AI. Even these things aren't learning - you couldn't take one trained on one game, put it on another game, and make progress. The progress is logarithmic... it achieves a quick result and then plateaus and resists all further learning. You couldn't then train it on ANY OTHER GAME and get a viable result. AI researchers know this, which is why they always announce the FRESH result of a clean AI, and then nothing else. No AI is ever "taught after". They know not to do it, because they know it's a disaster to try. And single-purpose AIs plateau quickly and have very limited scope.

This one not only had to be hand-held in terms of watching humans play, it had to be programmed with explicit rewards related to that ("did you end up in a similar place to a human playing?"), taught how to interpret the screen, and then trained on very short sections of particular games.

This is expert systems and heuristics (human-written rules). All AI we have is expert systems and heuristics. The closest you get to "AI" (as in something that learns for itself) is genetic algorithms and the like - they tend to be VERY hard to understand, direct, train and get results from, but their "insights" are gained organically and without much outside help once the universe they live in is defined. But even they have human-tuned breeding rules.

The most impressive "AI" I ever saw was a Java-based physics simulation of a bunch of joints connected in a vaguely skeletal way, with joint movement individually controlled by a GA. Someone had put it up on their university home area (back when everyone had a home folder /~username, and webspace on their uni account). It was "rewarded" by the distance it could achieve from the starting point in a given time. The "course" was randomly generated to have hills and dips. After something like 2000 generations of genetic breeding, it could epileptic-fit itself across the screen and make some kind of progress (before eventually getting stuck or reversing).

After 10,000 generations, it could form a hop-and-a-skip. After a million generations, it almost began to resemble chimp-like four-limb running. Given it was Java and the 90s, that would be months of calculation, physics simulation, breeding, etc. though (luckily you could export the generations and reseed from a certain point). It would never, no matter how long it was left running, form a consistent stable gait. It was just pretty much random twitches at timed intervals that by chance happened to get it so far until it stumbled.

It was never "intelligent". It never looked ahead and inferred how best to achieve its task, or changed tack based on the terrain. But it was damned impressive (and it's not been on the web for years now - I know, I've looked for it). And despite having a computer science degree, and having friends with PhDs in computer vision, etc., that's the closest I've ever seen to anything AI - anything changing itself to solve the problem at hand.
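The shape of that thing is easy enough to sketch, even if the physics sim was all the real work. A toy version in C - the fitness() here is a made-up smooth function of my own standing in for "simulate the skeleton and measure the distance travelled", and the loop is about the simplest evolutionary strategy that works (copy the champion, mutate, repeat):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define POP 50
    #define GENES 16
    #define GENERATIONS 10000

    /* Stand-in for the physics sim: score a genome of "joint timings" */
    static double fitness(const double *g) {
        double d = 0;
        for (int i = 0; i < GENES; i++)
            d += sin(g[i]) - 0.1 * g[i] * g[i];   /* arbitrary "terrain" */
        return d;
    }

    static double rnd(void) { return (double)rand() / RAND_MAX * 2.0 - 1.0; }

    int main(void) {
        static double pop[POP][GENES];
        srand(42);
        for (int i = 0; i < POP; i++)             /* random initial population */
            for (int j = 0; j < GENES; j++)
                pop[i][j] = rnd() * 3.0;

        for (int gen = 0; gen < GENERATIONS; gen++) {
            int best = 0;                         /* find the fittest individual */
            for (int i = 1; i < POP; i++)
                if (fitness(pop[i]) > fitness(pop[best])) best = i;

            for (int i = 0; i < POP; i++) {       /* breed: copy the champ, mutate */
                if (i == best) continue;          /* elitism: the champ survives */
                for (int j = 0; j < GENES; j++) {
                    pop[i][j] = pop[best][j];
                    if (rand() % 10 == 0)         /* 10% per-gene mutation */
                        pop[i][j] += rnd() * 0.1;
                }
            }
            if (gen % 1000 == 0)
                printf("gen %5d best %.4f\n", gen, fitness(pop[best]));
        }
        return 0;
    }

Watch the printed fitness: quick early gains, then a long plateau - exactly the behaviour described above.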

All "AI" is similar. It's either a human telling it exactly what to do and when, or random chance, or brute force. The combination of all three tends to mask the use of any one, but it doesn't form intelligence in even the most primitive way.

Just because our kids learn using YouTube videos, does not mean that showing a YouTube video to a heuristic system is "learning".

MH370 search ends – probably – without finding missing 777

Lee D Silver badge

I doubt the local miscreants would have smashed the window and hotwired a Boeing, but that's probably more likely the fate of a car left in B&Q.

Lee D Silver badge

Titanic wasn't found for 73 years, and they pretty much knew where it had gone down; it placed distress calls, there were survivors, and those survivors were rescued quite quickly by other ships.

And that was found in the 80's, so not a million years ago, and they had subs, cameras, trawlers, positioning systems, sonar, etc. and it still took MANY expeditions to find it.

People vastly underestimate the size of the ocean.

Greater London is 600 square miles.

"An analysis of possible flight paths [for MH370] was conducted, identifying a 60,000 km2 (23,000 sq mi) primary search-area"

They've since done about 3-4 times that search area.

You are looking for a single plane inside something the size of 38 Greater Londons (just for the initial search), and several kilometres deep. With almost zero line of sight (on land, you can just look... in the sea, you can barely see a few metres through the water).

You could lose several dozen such planes under your average B&Q warehouse if it got covered by sand, debris, etc.

Lee D Silver badge

I'm wondering how much more horse manure you can fit in one post, to be honest.

It's tracked by air traffic control only within national boundaries, not out in the middle of the ocean.

Its engines etc. talked home over the course of the flight, but not for people to find the plane with (that's generally not an issue).

"Transponders"... such as? What? You think 100% live, 24/7 GPS-tracking is on every plane on the planet all feeding back to every airline? I'm sorry to say, you're wrong. Or else people may have noticed a few planes going off flight-plan very much sooner. And if they were turned off? Well, the pilot deviated course and the plane crashed, that much we know for sure. So what if they were?

Apply same to mayday/malfunction. We already had one pilot recorded driving a similar plane into a mountain because he was having a bad time.

And then you get into the real manure:

"The plane did not hit the ocean"? What did it do, levitate?

The debris field of a plane is INSIGNIFICANT against thousands of square miles of ocean.

Chunks DID wash up on beaches.

Planes float? Really? Tell the Hudson River guy that; they were pretty keen to get out of it. A plane also contains hundreds of tons of metal. They don't float. They aren't designed to float. They're designed to not come down on water at all, certainly not at speed or force or deliberately.

Your kind of reasoning is exactly what wastes SO MUCH time in people's lives - either dealing with explaining this kind of manure, propagating it into the public mindset, or literally just making people sigh with exasperation.

The glorious uncertainty: Backup world is having a GDPR moment

Lee D Silver badge

Re: I think you'll find...

Depends what you read.

Going by:

https://www.education-ni.gov.uk/publications/disposal-records-schedule

Then, yes. But other places give differing advice as that's the MINIMUM required (and some of that goes up until the pupil is aged 30!).

If the table in that document isn't enough to convince government to set a single data retention standard, then nothing will.

Lee D Silver badge

Consider a school.

Current government advice for various data held is published in a nice compact table with type of data and years you need to keep it by law.

Some of it brings the limit up to legally requiring you to store certain personal data for 25 years after they were last a pupil. And, no, you can't anonymise it.

As such, "right to be forgotten" for many such pieces of data is basically non-existent.

Even financial records tend to hover about the 4-7 year mark for even the smallest business, and no, you can't just anonymise them (the taxman may have something to say about that should you be audited for, say, VAT or income tax for private contractors as in IR35).

The right to be forgotten is a way off for most people, and requests are handled on an individual basis. But GDPR hasn't really considered it in terms of practical solutions.

UK judge appears in dock over Computer Misuse Act allegations

Lee D Silver badge

Please note... even a judge has a barrister.

There are good reasons to do this.

"A person who represents themselves has a fool for a client" is a well-known phrase in legal circles.

OnePlus 6: Perfect porridge? One has to make a smartphone that's juuuust right

Lee D Silver badge

Re: Note 4

I have never considered a waterproof phone anything that is in any way desirable.

Despite 20 years of carrying a mobile in my trouser pocket every single day of my life, I have yet to break a screen, drop it in the toilet, get it otherwise wet, etc.

I would happily sacrifice "IP67" for "IP23 and a poxy battery I can change when it inevitably dies".

Hell, put it behind a screwed panel if you have to.

"Waterproofing" of consumer gadget is literally a modern gimmick for no real purpose. We survived for decades with devices much more expensive, much more fragile, more less waterproof without issue.

No, I've never felt the need to take an underwater photo either. As far as I'm concerned the camera on my phone is for "I can't be bothered to write that all down... snapshot... done."

Will happily sacrifice all but one of the myriad cameras they have nowadays. All that zoom-level and photoshoppy crap. 3/4 of the megapixels.

While we're at it, I don't need a fingerprint sensor, a heartbeat sensor, or a "proximity sensor" either. Hell, I only rotate my screen once in a blue moon and I could do that with a physical button and get it right more often than some accelerometer/orientation sensor trying to guess whether I want portrait or landscape.

Honestly, I will throw all that away for the features listed above.

Lee D Silver badge

Headphone socket.

Stop making it so damn slim - just put a decent battery across the entire back. Nobody has pockets that can't take 8-10mm of phone. On the same note, ruggedise it so I don't HAVE to buy a case to keep the stupendously fragile screen away from the edges.

The S4/S5 minis had an IR blaster - literally TINY and it works perfectly with all the household kit (I use it at work to control all makes and models of TV, projector, DVD player, etc.). People haven't moved to Bluetooth remote controls yet - put those back in!

Will literally PAY MONEY for a phone whose charging port is modular and replaceable - I see so many damaged ones.

Not to mention:

- Real buttons.

- Accessible SD / SIM slots on the side

- Dual-SIM in every mass-market phone would be nice (or eSIMs)

- Clean Android install (work with the CyanogenMod people - or whatever they're called nowadays - to make it officially changeable from DAY ONE).

- Proper little stand on the back. We only get a stand if we buy a case. Put it in the damn phone, and make it adjustable, not single-position. Literally a 50p bit of plastic.

Rather than make the phone "different to everything that's gone before", make it "use all the best bits of the ones that went before" and encompass all the accessories people buy FROM DAY ONE.

I will literally buy the first phone that gets close to doing this.

Britain mulls 'complete shutdown' of 4G net for emergency services

Lee D Silver badge

FFS.

Spend half that money on 4G contracts.

Then spend the other half on buying everyone forty cheap mobile smartphones each.

Keep paying for Airwave for a year or two until someone comes up with a way to mark those smartphones as emergency priority phones.

Honestly, £1300 a device is a piss-take, on top of hundreds of millions of pounds to get a basic comms system running over an existing data network.

A Reg-reading techie, a high street bank, some iffy production code – and a financial crash

Lee D Silver badge

My rule:

"Program in a way that the compiler will bitch at you"

The += thing leads to problems, especially if one side is actually a pointer: either you overwrite the pointer if you miss the +, or you end up doing pointer arithmetic by mistake.

The x = a + b thing means that if you mess up, generally the compiler will bitch about something missing or being of the wrong type, or the assignment not being possible, etc.

We have these all-singing-all-dancing tools, but if you program in the way of most convenience, they can't know that what you're doing is wrong.

Programming is about instructing the computer in a language it understands. You're already coming down to its level. So you may as well come all the way, especially with modern IDE assistance.

Also... compile with EVERY WARNING YOU CAN, and force yourself to fix them all (-Wall -Werror). It's still no substitute for proper testing (and how can you not notice that you only added up one of a whole list of numbers?), but at least it catches a ton of things and trains you out of the habit of writing ambiguous code.
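A minimal C sketch of the kind of slip I mean (the names are invented purely for illustration):

#include <stdio.h>

int main(void)
{
    int values[] = { 1, 2, 3 };
    int *p = values;  /* pointer into the list being summed */
    int total = 0;

    /* Intended: total += *p;  Drop one character and you get:
       total += p;  - a pointer stuffed into an int; "makes integer
                      from pointer" is only a warning on older
                      compilers unless -Werror promotes it to an error
       p += total;  - perfectly legal pointer arithmetic, no warning
                      at all, just a quietly corrupted pointer */

    /* Spelled out the long way, the types have to survive the whole
       expression AND the assignment, so the compiler gets far more
       chances to bitch: */
    total = total + *p;

    printf("%d\n", total);  /* prints 1 */
    return 0;
}

Built with gcc -Wall -Werror, the first mistyped variant stops the build dead instead of limping into production.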

I spent my early coding years as a kid trying to turn off warnings and errors and using clever tricks. It was only later that I realised that I need to train myself to avoid them in the first place.

I don't think I have an active codebase, personal or professional, that doesn't compile cleanly on all the architectures it's aimed at. And there are numerous constructs where I just got tired of silly errors and went "the obvious way" in terms of coding it, rather than "the quick way" to solve them.

---

The best one I had in terms of a baffling code problem was caused by memset(), filling memory with -1, on ARM, with a particular version of glibc. I was porting some long-standing code, and the same code worked perfectly on multiple platforms. Worked perfectly on other glibc versions. Worked perfectly with non-negative numbers. And memset() takes a plain int and converts it to unsigned char internally, so -1 is a perfectly legal fill value. For some reason, the memset on that particular combination only set every 10th memory byte or something similar, which screwed up all the subsequent code.

When you see an explicit memset, you expect it to work. Literally, I narrowed it down to the line and then REFUSED to believe the memset could be at fault - until I overrode it with a macro wrapping a for loop that did the same thing. It turned out to be a known bug in the library version for that architecture. But it drove me insane tracking that down, and I didn't really believe it even when someone else narrowed it down for me.

But even something as simple as a basic memset can go wrong.
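For anyone who wants to rule a library call in or out the same way, something like this is enough (a sketch of the trick from memory, not the original code; my_memset is an invented name):

#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* A plain-loop stand-in for a suspect library memset(). If the bug
   vanishes with this swapped in, the library implementation was the
   culprit. */
static void *my_memset(void *dst, int val, size_t len)
{
    unsigned char *p = dst;
    size_t i;

    /* memset() is defined to convert its int argument to unsigned
       char, which is why a fill value of -1 (0xFF bytes) is legal
       and must work on any conforming platform. */
    for (i = 0; i < len; i++)
        p[i] = (unsigned char)val;

    return dst;
}

/* Shadow the real function for the rest of this file. */
#define memset my_memset

int main(void)
{
    unsigned char buf[16];

    memset(buf, -1, sizeof buf);            /* goes via my_memset now */
    printf("%02x %02x\n", buf[0], buf[15]); /* prints: ff ff */
    return 0;
}

Strictly speaking, redefining a standard library name is naughty, but as a temporary debugging bodge it settles the argument in minutes.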

(P.S. credit to Simon Tatham, whose code it was in, and who happened to work at ARM at the time).

New Windows Servers are like buses: None for ages, then two at once!

Lee D Silver badge

In-place upgrades... fine, so long as you use VMs:

VMs:

Checkpoint.

Replicate.

Back up.

Remove the replica.

In-place upgrade the replica, putting it on a test VLAN as you do so.

Make sure it works.

If so, either downtime and then replace production with upgraded replica, or risk an in-place upgrade of production.

Hypervisors:

Replicate all VMs to somewhere else.

In-place upgrade.

Test.

Replicate them back.

It's 2018. Use VMs. They save your life in these situations. (I would suggest that you shouldn't need to back up hypervisors, as they shouldn't contain anything other than a basic Server install and copies of the VMs. Maybe a backup agent. That's it. Should you lose a hypervisor, it shouldn't ever be a worry to just wipe it out and start again; it'll probably be quicker than a restore, to be honest.)

But nobody should be in-place upgrading any production server whatsoever without VMs.

P.S. if you're not already VM'd... do it. You get free VM licences with every copy of Server (two with Standard, unlimited with Datacenter). Literally: virtualise the server, then run it back on itself as a hypervisor. Done.

Headless man found in lava’s embrace

Lee D Silver badge

Is it just me, or does he appear to be facing forward? The hip bones suggest so.

In that case, it was a full-face, head-on impact with the stone.

Ouch.

Leaked pics: Motorola to add 'unpatriotic' 5G to 4G phones with magnets

Lee D Silver badge

Re: I dont really understand why i need 5g at all

If 5G is shorter range, needing more base stations, surely it's going to be even more expensive to deploy?

I believe ground rent and leased-line connectivity for cellular sites are among their biggest costs (besides radio spectrum, apparently).