* Posts by rg287

898 publicly visible posts • joined 13 Apr 2018

The UK government? On the right track with its semiconductor strategy?

rg287

Re: As some of us said at the time...

El Reg reported on the announcements in May and took the line shared with many commentators that not subsidising new fabs was a mistake.

As I recall, the line many commentards (myself included) took was:

Nice that they've actually developed a strategy (now do the rest of the economy - UKGov has no over-arching industrial strategy), but:

1. It's table stakes. You don't need to be subsidising fabs to take a proactive interest in things like developing connections with universities, putting the right investment in place to develop clusters (whether that's for shiny logic silicon or - very sensibly - leveraging existing expertise in power silicon and other niche sectors). But they're not taking a tremendously proactive approach on that.

2. Realistically, this government is allergic to investment and infrastructure. It's cynical - but not entirely unreasonable - to suggest that saying "We're going to pursue this niche stuff, so don't expect big press about a new 3nm plant or anything. It's important but very low key" is actually expectation management for "we're promising low because we have no intention of delivering anything anyway", and nobody can be surprised when it all goes quiet and we hear nothing about it again.

3. Good luck delivering even if they do want to - they've gutted the civil service such that DBT would have a real job delivering some of this stuff even with strong ministerial backing.

4. None of this matters because industrial strategy is necessarily long-term. But this government has less than 12 months in office, and is then looking at another decade in opposition. And they know it. One can have their own opinions on HS2, but the manner in which Sunak has cancelled it and is now expediting the sale of land, at a loss to the taxpayer, is a deliberate attempt to salt the earth and prevent a future government restarting the project. It's scorched-earth politics from a party that doesn't give a toss and is busy burning every bridge they can to make life hard for the next government. They're not good-faith actors. If they do something good, it is purely by accident - and even then, it probably means you haven't looked hard enough to find the donor who is cashing out.

ULA's Vulcan Centaur hopes to rocket into Christmas

rg287

Re: New Galileo Launches

Citation required.

Galileo generates its own master time (Galileo System Time) using masers in Fucino, Italy. This is determined independently and in principle, Galileo receivers can determine positioning just using that.

Of course because people use GNSS as a general time signal for all sorts of non-positioning services, it needs to be kept matched to UTC, which is calculated in Madrid.

There is also a GPS-Galileo Time Offset (GGTO) calculated to accuracy of <5ns in cooperation with the US Naval Observatory, so that receivers can synthesise signals from satellites on both networks.

But none of that is a dependency. If GST drifted from UTC, that would break stuff downstream. If the GGTO wasn't available, receivers could only use Galileo signals, which might mean lower accuracy or a longer time-to-first-fix (TTFF), but wouldn't inherently render Galileo inoperable.
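For illustration, the role of the GGTO is just simple arithmetic on the receiver side. This is a toy sketch, not the real broadcast message format - the field names and values here are invented:

```python
# Toy sketch of how a multi-constellation receiver might apply the
# GPS-Galileo Time Offset (GGTO). Values are invented for illustration.

def gps_to_gst(gps_time_s, ggto_s):
    """Convert a GPS-timescale measurement to Galileo System Time."""
    return gps_time_s - ggto_s

SPEED_OF_LIGHT = 299_792_458.0  # m/s

ggto = 3.0e-9      # example offset, within the <5 ns accuracy quoted
t_gps = 302_400.0  # some GPS time of week, in seconds
t_gst = gps_to_gst(t_gps, ggto)

# Why the offset matters: each nanosecond of timescale error maps to
# ~30 cm of pseudorange error, so a 5 ns worst-case GGTO costs ~1.5 m.
worst_case_range_error_m = 5.0e-9 * SPEED_OF_LIGHT
print(round(worst_case_range_error_m, 2))  # 1.5
```

Which is why "GGTO unavailable" degrades a mixed GPS+Galileo fix rather than breaking Galileo itself - the Galileo-only solution never needed the offset.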

Of course, if there aren't enough Galileo satellites in sight because old birds have failed or you're in a city or deep valley (or in the early days when they were building the constellation out and there were only a handful of Galileo satellites on orbit), then it will be impossible to get a fix without infilling with GPS. That's not an architectural dependency though - it's an operational one.

Boris Johnson's mad hydrogen for homes bubble bursts

rg287

Re: Electricity for heat pumps

Electrons are electrons - you push some in the South, you draw some off in the North. The rest is accountancy.

My thought exactly. Offset Spanish/Portuguese demand, which leaves them with a surplus to sell into southern France, which leaves the French with a surplus to sell to GB/Ireland.

If the paperwork on that is too complicated, then run the interconnect to France and cut the Spanish out. But it's all just numbers in a spreadsheet.

rg287

Re: Electricity for heat pumps

Why the blazing f- would anyone build an interconnect from Morocco to... the UK?

Transmission losses are projected at 13%, which is not as bad as I expected... but still significant.
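Back of the envelope, the loss figure works out as follows on the 3.6GW link proposed here (simple one-way loss, ignoring converter-station overheads at each end - illustrative only):

```python
# Rough arithmetic on the proposed Morocco-UK interconnect, using the
# figures in this thread: 3.6GW capacity, ~13% projected transmission
# loss. Ignores HVDC converter losses and line loading - illustrative only.

link_capacity_gw = 3.6
loss_fraction = 0.13

delivered_gw = link_capacity_gw * (1 - loss_fraction)
lost_gw = link_capacity_gw - delivered_gw

print(round(delivered_gw, 3), round(lost_gw, 3))  # 3.132 0.468
```

Nearly half a gigawatt dissipated en route - roughly a mid-sized power station's worth - which is the engineering case for putting the landing point closer to the generation.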

Surely it makes more sense to interconnect Morocco with Spain/Portugal(1), offset Spanish/Portuguese demand, which means generation in northern Spain/Portugal can be sold into southern France, and we continue to buy from the French. Surely you want generation to be as close as possible to demand?

Or at the very least, connect from Morocco to France.

But I suppose in applying a vaguely sensible engineering approach, I haven't allowed for geopolitics, or the ability of grifters to leverage government subsidies. And yes, I know this is attached to generation specifically built for export, not just a load-balancing link.

1. Yes, there's already an 800MW Spain-Morocco interconnect, with another 700MW link in the works. Which will be dwarfed by this 3.6GW link.

rg287

Re: Electricity for heat pumps

2010 - Cameron announced Hinkley C; had they not wasted years of faffing about before starting the build, billions would have been saved and we'd have the leccy to use.

Well, it was shortlisted in 2010. Whilst his erstwhile (Lib Dem) Deputy PM was badmouthing them and casting FUD. Also worth noting that the 2005 Conservative manifesto made no mention of nuclear. So they were not particularly interested in energy security either.

And then the Tories delayed the whole process by insisting that the private sector fund it themselves (backed against extremely generous guaranteed strike prices). This of course was a period when interest rates were <0.5% and the government could have borrowed extremely cheaply - far cheaper than EDF could borrow from the commercial money markets. The government dragged those CfD negotiations well into 2013, instead of just issuing gilts and hiring EDF to build it on a contractor basis. Discounting the (not inconsiderable) site prep, major works for Hinkley C didn't start until... 2019. We could be at least 2-3 years ahead, but for government.

So yes, the Tories did get on with it... eventually. But they still managed to do it in the slowest, least efficient and most expensive manner possible.

Astronomers spot collision between two exoplanets, both feared vaporized

rg287

<older brother deity to little sister deity> "Don't put your marbles on my model space-time! Oh now it's rolled into that solar system and broken one of the planets. That was the best one as well - I got a prize at the science fair for the fjords. Muuum! She's breaking my planets! Muuuuuuuuuum!"

Why can't datacenter operators stop thinking about atomic power?

rg287

These companies have very deep pockets indeed. Apple would be the eighth richest country in the world, MS the twelfth, Amazon the fourteenth...

They really do have the resources to develop something like this, not only to reduce their own power bill, but also to sell the technology to competitors and grids across the world.

Of course they have those resources. Note in my original post "can't raise/won't commit".

BUT THEY'RE NOT.

They are investing billions in civilian-ising nuclear submarine tech. Instead of just doing the foundational R&D to bring clean fuel cycles to market.

They could do the good thing. But they are choosing to go the (relatively) low-technical-risk route and re-package existing tech.

There's no question of "will they, won't they". They won't.

rg287

Re: France is finding that nuclear power isn’t that reliable either

That is not the case, Trawsfynydd and any site on the Severn estuary proving the point.

You're evidently not familiar with the concept of an estuary. Particularly one like the Severn with a massive tidal range. Anything on the Severn is sea-water cooled. It's not "river cooled" like the Saint Laurent, Belleville or Bugey. With proper placement of intakes (beyond the lowest low-water from spring tide), you're not going to run dry!

Trawsfynydd is the only "inland" nuclear plant in the UK, and it pulls its cooling water from a reservoir, which buffers the supply compared to direct-from-river. It's also less than 10 miles from the coast, so if the reservoir cooling really hadn't worked out, hypothetically you could run some pipes down. That location also guarantees more reliable rainfall than the inner departments of France.

rg287

And that's the big hope of these massive consumers wanting to build their own SMR/micro reactors.

Because they have the resources to actually overcome those challenges in the pursuit of cheap power to feed their habit.

But they don't. That's the point. These SMR designs are mostly running naval submarine reactors on conventional Uranium cycles - which is why there's a lot of regulatory (nuclear proliferation) concerns about having small units deployed in many locations (as opposed to a handful of sites with large scale reactors).

In fairness, Thorium is only proliferation-resistant when used in a light water reactor - it still generates some nasties when used in a molten-salt reactor.

The problem with all this is:

* Grid-scale nuclear power stations are expensive - private industry can't raise/won't commit that much capital.

* Novel reactor designs/fuel cycles are expensive - private industry can't raise/won't commit that much capital.

* Much of the world's political thinking is still in thrall to a lite version of Reaganomics and so governments won't make those investments.

* Small nuclear sub-type reactors are somewhat in the reach of the likes of Google/Apple/Microsoft, even though they're less efficient than their grid-scale counterparts.

So that's what we end up with. Just as our big uranium-cycle grid reactors are ultimately derived from military breeder reactor tech, so the SMRs are just a scaled-up version of submarine tech. But it's still not actually a good fit for power production or long-term waste management.

Basically, they're all investing in the compromised designs that the military already paid for, because it's what the private sector is willing to pay for. Even though we could - as a society - get much better value for money out of doing the research and deploying low waste, proliferation-resistant technologies en masse.

It would also mean spending a lot less time negotiating with the likes of Iran about "honestly, our nuclear programme is peaceful". We could just hand them the IP for a proliferation-resistant thorium reactor - even offer to build it for them. If they say "no thanks", they're fessing up that theirs is a weapons programme (which yes, we know, but it cuts the crap: anyone shows an interest in nukes, we hand them a power plant and see if they actually want it. It instantly closes down any discussion of refining uranium).

rg287

Now if the AI can either come up with a nuclear power source which doesn’t generate any waste, or a way of safely disposing of the waste, we’d be on to a winner.

We don't need AI for that. We already have the Thorium cycle, which generates little to no Plutonium, nor nasty actinides.

It's not a silver bullet of course. You still get waste, but much less. In a reactor, the thorium cycle burns up far more of its fuel, far more efficiently. By contrast, uranium-to-plutonium cycle reactors have to pull the rods when they get poisoned (at which point fission slows despite there being loads of decent fuel still in there, which we then have to reprocess out from the plutonium and actinides).

But nobody is interested in funding that properly or overcoming the engineering challenges, because you can't make bombs out of it at the end.

The thorium cycle does generate U-232, which has a viciously dangerous decay chain (ending in Thallium-208, a very strong gamma emitter), but which follows the "live fast, die young" rule. U-232 has a half-life of 68 years - not millennia - and its longest-lived daughter, Thorium-228, just under 2 years. Consequently storage is not a horribly long-term problem.
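That "live fast, die young" behaviour is just the standard exponential decay law; a quick sketch using the 68-year half-life quoted above:

```python
# Fraction of a radionuclide remaining after time t:
#   N(t)/N0 = 0.5 ** (t / half_life)

def remaining_fraction(t_years, half_life_years):
    return 0.5 ** (t_years / half_life_years)

U232_HALF_LIFE = 68.0  # years, as quoted above

# After ten half-lives (~680 years), under 0.1% of the U-232 remains -
# a storage problem measured in centuries, not the tens of millennia
# associated with some uranium-cycle actinides.
print(remaining_fraction(10 * U232_HALF_LIFE, U232_HALF_LIFE))  # 0.0009765625
```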

And we've already worked out how to store the remaining waste. Dig a hole in a geologically stable formation and stick our vitrified leftovers in it. It's honestly not that hard. Not that I'm a fan of burying waste in general, but it's a matter of scale and proportion. Burying all our plastics without recycling (or just reducing what we use) would be bad. But a small quantity of nuclear waste (order of tonnes)? Yeah, we can objectively get away with that in return for clean, safe power.

Software patch fixes Euclid space telescope navigation bug

rg287

Re: "the telescope's Fine Guidance Sensor"

And how much fuel did it use up by all that repositioning?

Probably little to nothing directly, as the changes look pretty minor in absolute terms (hunting around the immediate field of view, not doing pirouettes - which would have probably seen the FGS fighting with the coarse sensors and any inertial instrumentation), so the wandering we've seen was probably all done with reaction wheels.

Of course if the wheels are now wound up a bit, the first "unwind" or desaturation event (which will burn propellant) will come sooner. But it doesn't seem like it will have a significant impact on mission lifespan.

Textbook publishers sue shadow library LibGen for copyright infringement

rg287

Re: Welcome to the new corporate Register

You can wave your hands about and shout "knowledge monopoly" all you like, but this is still people copying other people's work without permission and taking money for doing so.

The work belongs to academics and universities. But for some reason Elsevier et al are of the opinion that once you have paid them in excess of $5k for the privilege of being published in their peer-reviewed journals (which involves peer review... except they don't remunerate academics for reviewing papers), that research now "belongs" to the publisher, whose sole contribution has been to compile and edit the journal. Which is legitimate work, but does not constitute a creative contribution to the content.

Their pleas of poverty would sound a lot stronger if they weren't generating profit margins in excess of 30%.

It's long past time that a court ruled the only component of copyright the publishers can lay any claim on is layout. The text and intellectual property is neither their work, nor their property.

ISP's ads 'misleadingly implied' existence of 6G, says watchdog

rg287

But if you bought from a company called "ACME Full fibre (FTTP) internet provider", you'd be a tad miffed if you found out that they were selling basic DSL over POTS.

Ah no, that's absolutely fine and dandy according to the ASA. They dismissed a complaint by FTTP providers against incumbents (advertising coax & VDSL as "Fibre") on the basis that customers know what they're getting and it was "not materially misleading" for ISPs to describe copper hybrid services as "fibre broadband". So there. CityFibre sued them over that and sought judicial review... and lost.

And for what it's worth... I have one former colleague who insisted in 2014 that he had fibre. He still plugged the router into the phone socket, but insisted that it had been "upgraded from the exchange" without anyone drilling holes in his walls. Alchemists could have had quite the field day trying to tease out how copper had spontaneously transmuted into glass, but there we have it. The ASA have told us that people aren't confused.

6G's name is intentionally misleading. They sell a wireless service too. It's dishonest.

Is it also dishonest for Three to sell 4G and 5G services? What about voice-only plans with no data on them at all? They literally called themselves Three when 3G was launching. Is it dishonest for Virgin Media to sell services to people who aren't... okay, maybe we won't run down that train of thought.

rg287

Their argument here seems to revolve around the expectation that where a company name refers to a thing, then the service delivered by the company is expected to utilise that thing.

Quite. And remarkably enough, the provider 3, who - back in the day - launched to much fanfare as the UK's (self-claimed) leading provider of 3G have moved on to 4G and 5G.

But we have to remember that this is the same ASA who dismissed a complaint by FTTP providers against incumbents (advertising coax & VDSL as "Fibre") on the basis that customers know what they're getting and it was "not materially misleading" for ISPs to describe copper hybrid services as "fibre broadband".

Apparently customers know the difference between fibre and er... "fibre", but not fibre and cellular.

Arc: A radical fresh take on the web browser

rg287

Re: Off topic

I have long held the view that organisations need to stop assuming that everyone knows how to use Word Processors and Spreadsheets

And employ technical documentation specialists - whether that's writers, illustrators or editors. Because they spend all day in their tools and will inevitably do a better job than asking an engineer/developer/CAD-jockey to write the documentation or provide illustrations. And it'll be more consistent as a result.

Although the concept of typing pools has a poor reputation for sexual harassment and misogyny, we had them for a reason - the professional typists (usually - but not always - women) were a damn sight better at what they did than the engineers/managers/men who sent them work: faster and more accurate. Senior bods still get a PA/Secretary for this reason (the executive's time is judged too valuable to be spent booking flights or managing a calendar), but there's a case to be made that a Secretarial/Professional Services pool should still be a feature of large organisations.

In the modern era of course, they wouldn't be typing up emails for people - they would specialise in helping people prepare presentations, bid documents, internal/external documentation, etc. How many staff-years are wasted by engineers manually renumbering the pages on documents because they don't know how to use the layout tools?

AWS: IPv4 addresses cost too much, so you’re going to pay

rg287

Re: IPv6-mostly?

They could. But as it turns out, the top-level/high-profile ones don't.

<government.nl> and <defensie.nl> both advertise AAAA records pointing back to Prolocation.net, whilst amsterdam.nl is likewise "self-hosted" IPv6 with Logius.

No CF doing a MITM on them.

I'm sure there are probably some local councils or school districts behind Cloudflare, but good on them for trying to lead by example.

<gov.uk> also goes to "native" IPv6, albeit on a block owned by Fastly (of California)... <army.mod.uk> hits Cloudflare...

Twitter name and blue bird logo to be 'blowtorched' off company branding

rg287

Re: Moron alert. Again

There are a fair number of artists and other creators that relied on twitter to advertise and support their work. A good alternative hasn't really arisen, and if they are transitioning to some other network they need to rebuild their entire following.

I'm not sure that's quite as dire as made out. Most of those people are on Patreon and have built followings on Mastodon/ActivityPub, as well as Flickr/DeviantArt/Instagram/YouTube/Discord/Twitch and now Threads. Twitter was an important way of advertising their work, but that's really died off over the past year. The writing has been on the wall for a while.

FCC boss says 25Mbps isn't cutting it, Americans deserve 100Mbps now, gigabit later

rg287

Re: My home cable modem...

Well, I have 600 down and 20 Mbps up on my current connection. For years I have been telling my cable provider that I would pay just as much as I do now per month for 100 Mbps up AND down. They have the fiber in place. The fiber terminator hangs off of a pole right next to my apartment building in Chicago.

I entirely agree with the sentiment, but it's not going to happen because it's probably a PON architecture with more downstream channels than up, and they're not going to change that just for you. If it were a point-to-point architecture (not point-to-multipoint) where your apartment was connected to a switchport at their end, then you could certainly pick an arbitrary symmetric speed or even more upload than download.

That being said, the ratios are still open to them - G.984 offers 2.4Gb/s down, 1.2Gb/s up. That's shared with as many as 128 endpoints, but still represents a 2:1 ratio, not 10:1 or worse. G.987 gives 10/2.5Gb/s, which is 4:1. There are symmetric PON standards, but they need more expensive burst-mode lasers, which operators don't want to spend money on. They don't need to be giving people quite such shonky upload speeds but alas, they're optimising for people downloading the latest 30GB Call-of-Fortnite DLC.
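For a sense of scale, the worst-case per-subscriber arithmetic on those line rates looks like this (assuming a full 128-way split and an even share - real PONs rely on statistical multiplexing, so typical throughput is far better than this floor):

```python
# Worst-case even split of PON line rates across a full 1:128 split,
# using the rates quoted above (GPON / G.984 and XG-PON / G.987).
# Real deployments rely on statistical multiplexing; this is the floor.

def per_endpoint_mbps(line_rate_gbps, split):
    return line_rate_gbps * 1000.0 / split

SPLIT = 128

gpon_down = per_endpoint_mbps(2.4, SPLIT)    # ~18.75 Mb/s
gpon_up = per_endpoint_mbps(1.2, SPLIT)      # ~9.4 Mb/s
xgpon_down = per_endpoint_mbps(10.0, SPLIT)  # ~78.1 Mb/s
xgpon_up = per_endpoint_mbps(2.5, SPLIT)     # ~19.5 Mb/s

# Both generations keep upload usable even fully contended, but the
# down:up ratio (2:1 or 4:1) is baked into the standard, not set by the ISP.
print(round(gpon_down, 2), round(gpon_up, 2))
```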

Ofcom proposes Wi-Fi and cellphones share upper 6GHz band

rg287

Re: Interoperability vs spectrum sharing

I would much rather see a maintained separation in terms of spectrum allocation and services, but better service integration enabling switching from one to another mid-call.

I quite agree. Without wanting to sound too much like "well why would anyone ever want higher speeds?", I don't really see the value of 6GHz in wifi.

2.4GHz and 5GHz give a decent option between speed and range/penetration. I can also see the (diminishing) value of 60GHz for line-of-sight streaming applications, perhaps for VR headsets. Though in many cases the better solution is an HDMI cable!

6GHz is unlikely to bring meaningful real-world speed improvements. Some people will mumble something about high-density environments like conference & exhibition centres. But in my experience, providing decent service there has more to do with antenna design; AP placement; how well your system manages client roaming when a device keeps trying to flick between access points like a demented hummingbird; and (of course) whether you actually have enough backhaul to support the traffic or whether the centre has cheaped out.

Arguably, the important bit of 802.11ax/WiFi6 was introducing OFDMA (as well as 6GHz) - but you can do OFDMA perfectly well on 5GHz (and 2.4GHz), and 802.11ax does exactly that.

Seeing ax advertised for corporate offices and dense residential apartments will make most people here twitch. Any high-bandwidth applications in those settings should be wired anyway. We all know this. The only way you can consistently saturate a network link is with big downloads, which are most likely to be things like games consoles. And for those, you want a wired link to get consistent low latency when you're playing. For downloading flappy birds to a mobile device, you're not going to perceive any benefit on 6GHz vs 5GHz.

6GHz does avoid the games of DFS and APs having to monitor for RADAR, but not if it's doing a DFS-equivalent of playing nicely with cellular services. Meet the new boss, same as the old boss.

This all stands in stark contrast to mobile cellular service where wringing that extra bit of performance is actually worth it for dense environments - like standing on Embankment, Times Square or in central Manchester and having full signal but garbage throughput because of contention.

SpaceX says, sure, Starship blew up but you can forget about the rest of that lawsuit

rg287

Re: "terrifying" sounds were reported in Port Isabel

Seriously, what was that about? Even the SpaceX commentator lady sounded confused when that was shown.

I assumed it meant the sweepstake was settled and the cheers were from people not buying the beers that night/supplying cake the next day.

Ariane 5 to take final flight, leaving Europe without its own heavy-lift rocket

rg287

Re: But wait! There's more...

How long would it take ESA to develop and build its own re-usable Ariane? Given Ariane 6 is 10 years in development, at least another 10?

Why "at least another 10"?

SpaceX started in 2002 and first launch of Falcon 1 was in 2006. In just 4 years they had conceived, designed and built an entire rocket - hardware, software and (most importantly) engines - from scratch.

Falcon 9 launched four years later in 2010 and attempted to land (with a parachute - Elon having fully misunderstood what parachutes can sensibly do). They pivoted to relighting the engines and landing under power, which was then demonstrated in 2013-14 with controlled re-entries and the first successful landing (on land) in 2015.

Arianespace starts with a functioning rocket, engines, some superb rocket engineers and a decade of watching SpaceX piss on their chips. It should be entirely possible for a team with the knowledge that "this can be done, SpaceX have been doing it for a decade" - granted the budget and autonomy to get on with it - to pull Ariane 6 apart and modify the booster to support re-entry in 3-5 years. It requires Arianespace to commit to it (rather than furtling around with "well, maybe we could have a go at it"), and possibly to poach a few SpaceX engineers. It can be done. If it isn't, it's because Arianespace don't want to, not because it can't be done.

You're right though. Available evidence suggests Arianespace management would doom such a project to development purgatory.

rg287

Re: But wait! There's more...

Using taxpayers money to create jobs is an efficient use of it.

Working people aren't on dole, pay taxes, are better integrated into the community, are less prone to violence and abuses.

Entirely true. But it would also be nice if those working people were developing/building a rocket which won't be obsolete before it launches, rather than handing the initiative to Musk's effective monopoly (outside of government launches propping up the old-space incumbents, who would otherwise be looking at bankruptcy, having been totally outclassed on cost and reliability for private sector launches).

Would it be actually that terrible for ArianeSpace to hot-house a small team of engineers on a "start-up" basis and tell them "here's a fat budget, go for SpaceX" and then go hands-off?

Would it be any less efficient than the current process? They've proudly turned out Ariane 6 - a fine rocket, to be sure (when it launches) - but 15 years behind the state of the art (Falcon 9), and possibly obsoleted by Starship before the end of 2024. Between ULA and Arianespace, is it so ridiculous for taxpayers to be saying "Oh come on, one of you, take the fight to Musk. Less evolution, more revolution"?

Rocky Linux claims to have found 'path forward' from CentOS source purge

rg287

Re: "Certified"

but who also run a downstream rebuild for training, testing and/or development boxes because they don't need support on those.

Worth remembering that a free RHEL Developer account gets you 16 unsupported licences (up to 128 cores across those instances), which is 16 more than Microsoft gives Windows devs. So for training or dev or even small (self-supported) production workloads, developers can use their dev account licenses.

This policy/allowance of course remains at the whim of Red Hat (beware building a business on it), and testing can be difficult because you have to activate those instances with your account credentials - as Claudio4 mentioned above, for automated CI/CD pipelines or anything where you might be standing up and tearing down instances automagically, throwing CentOS or similar at it was much more straightforward from an activation standpoint. I doubt that's unfixable, but you'd need some monitoring to avoid it trying to spin up a 17th instance and falling over.

As you mention, the main appeal of RHEL is support, and certification for vendors and customers operating in regulated industries.

Missing Titan sub likely destroyed in implosion, no survivors

rg287

Re: There's a lot of outpouring of grief for the loss of five people at sea

Quite.

On the one hand, I have no problem with the response as-was. Anyone in nautical distress should get whatever help can be mustered. That's been the first rule of the ocean for as long as there have been seafarers. And in this case, it was a decent training exercise in search and establishing the fate of the craft - even if the prospects of recovery were always slim.

It does throw into sharp relief the handling of various refugee boats and migrants though. I don't see the media spending a breathless week covering the fate of 700 drowned migrants. But 4 rich tourists? What could be more important?

rg287

I don't blame the Coast Guard for continuing the search, though, sonar data can be ambiguous and they'd want to have definitive evidence before giving up.

Also. Morbidly. If you haven't got another vessel in immediate distress demanding your attention and would simply be at standby, then this is a decent training exercise on search and (maybe) recovery, or at least establishing the fate of the craft even if you don't get anything back.

rg287

Yeah, there should have been no rescue efforts at all - seriously. Those are just the government doing unnecessary safety meddling in the industry with taxpayer money. 'Pure waste!' as he put it.

No no, anyone in nautical distress should have a reasonable expectation of receiving at least a best-effort attempt at rescue. That's a long-standing law of the sea.

What this has highlighted of course is how the "impossible" situation of stopping migrants drowning in their thousands is nothing more than a policy decision - because there's plenty of resource when there's political/media will to stage a large-scale search (and - if it hadn't imploded - "daring rescue") when it's a handful of millionaires on their holidays.

I do agree with the sentiment in so much as there's been some fawning over these "brave explorers".

They're not explorers, nor adventurers. They're disaster tourists, rubbernecking a mass grave which has been extensively profiled and documented by far better hardware than they had. The billionaire amongst them probably spent more money vetting their chauffeur than they did on due diligence into the company selling sub rides in an experimental harbor-freight special. He could easily have afforded to charter DSV Limiting Factor, or even commissioned an Alvin-class sub from Triton.

Don't panic. Google offering scary .zip and .mov domains is not the end of the world

rg287

"other people also have problematic TLD so Google creating more isn't that bad"

Yeah, the whataboutism is deafening.

.com dates back to a simpler, more naive time when not many people were using the internet, and now we're stuck with it.

.sh is potentially quite dangerous, but most people don't know what a shell file is to start with and hopefully (!) won't try and run it. It won't get you very far on Windows anyway.

.zip and .mov are extensions people know and recognise. They might even expect to receive legitimate emails with attached zip archives (or links to). The fact that .com and .sh exist are not good reasons for ICANN to allow other common file extensions as TLDs.

There are a bunch of active countermeasures out there... but it makes no sense to rely on active countermeasures to address a passive risk. That's poor security design.

.zip is confusing. "url.zip" does nothing that bit.ly doesn't already do. The world simply doesn't need them. They're just going to end up on arbitrary block lists with most of the other wanky gTLDs.

The Hubble Space Telescope is sinking! Two startups want to save it for free

rg287

Re: But why...?

I perhaps wasn't clear, but when I referred to Hubble 2.0, I wasn't referring to an all-new telescope design. I was referring to literally a second Hubble.

By the time you'd done the R&D for one, built out the science software, etc, building a second one would have been relatively cheap. Bear in mind the first service mission cost $500m plus the shuttle launch. $1.5Bn would have bought you a second Hubble with change. They'd also paid Kodak for the back-up mirror at that point.

This is not to say that the missions were all a waste of time - obviously they needed SM1 to get Hubble working properly and a couple more for station-keeping and upgrades/repairs.

But arguably 3A & 3B (a mix of upgrades and actual service/repair) could have been consolidated into a single service-only mission focussed on failed/failing components and the 3B budget spent on Hubble 2, which would have got the upgraded science instruments. Maybe Hubble 1 would end up with a shorter lifespan, but we'd still have a (newer) telescope on orbit, and for a while you'd have two telescopes (with a diversity of instruments).

But most importantly, don't forget the Hubble project started around 1970 and it was launched around 1990, that is 20 years later... If you start working on Hubble 2.0 tomorrow, it might, potentially be operational around 2043 (or later)...

Maybe, maybe not. The Roman Space Telescope is looking at around a 10-year development cycle (funded in 2016, launch 2027). If one were being picky one might point out that it was first proposed in 2010 - we're discussing how long to deliver from getting the "go" with actual financing. JWST took 20 years (actual studies commissioned 1999, launch 2021) and any Hubble-a-like (i.e. single primary mirror) would be much, much less complex.

rg287

But why...?

Whilst I have no desire to see Hubble fall out the sky, I do have to question the point of continued service missions, particularly now that various mechanical components are giving out. She's an old lady now - magnificent in her old age - but it's not just about keeping her on orbit. How easily can her solar panels be replaced? Or her reaction wheels?

Hubble cost about a billion dollars. Each of the five servicing missions also cost >$1Bn.

So the question is... why keep throwing money at her instead of just launching new ones? Sure, the first service mission maybe, to get her working properly. But there was a spare mirror - we could have sent Hubble 2 up, with a different set of instruments (e.g. one of the upgrades from a service mission). That way you'd have double the observing time. Sure, Hubble 1 would still be on her old hardware, but that was still incredible, and you get double the science.

And we didn't even need the shuttle. On paper, Hubble seems to just about squeeze into an Ariane 5 fairing (notwithstanding any pointy bits that might need re-engineering, it's not like I've got the CAD drawings here to line up against), and at 11 tonnes is comfortably within the 20t to LEO payload capacity. Or do whatever the NSA did with all the other Keyhole satellites.

If we're going to drop a billion dollars on a space telescope, maybe spend it on launching a new one? One which complements or extends Hubble's ability?

Or a pair, with laser-links for performing long-baseline interferometry. That's hard at optical wavelengths because the telescopes' separation must be known to within a fraction of the observing wavelength - which is why we don't see optical telescopes being chained together across continents the way radio dishes are: we can't measure their separation accurately enough without mechanically connecting them. In space, a free-space optical link could conceivably measure to the requisite accuracy.
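To put rough numbers on why the optical case is so much harder than radio - this is purely illustrative, and the λ/10 stability figure is a common rule of thumb rather than any mission's actual spec:

```python
# The baseline of an interferometer must be known/held to a small
# fraction of the observing wavelength. Compare radio vs optical.
bands = {
    "radio (21cm hydrogen line)": 0.21,    # wavelength in metres
    "optical (visible, ~500nm)": 500e-9,   # wavelength in metres
}

for band, wavelength in bands.items():
    tolerance = wavelength / 10  # ~lambda/10, a common rule of thumb
    print(f"{band}: baseline known to ~{tolerance:.0e} m")
```

Centimetre-level knowledge of a baseline is achievable across continents for radio dishes; tens of nanometres is not - hence the mechanical connection on the ground, or a laser metrology link in space.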

Without wishing to be wasteful, I just find it hard to reconcile the idea of spending that much money on marginal upgrades when we could launch a totally new instrument for similar money and get the benefit of additional observing time (because it's not like there has ever been a shortage of meritorious proposals queued up for HST Observing Time).

Let white-hat hackers stick a probe in those voting machines, say senators

rg287

Re: There's absolutely nothing wrong with computers doing the counting

Everything from president, congressman and senator, to state representative and senator, retention of state judges at the local, appeals and supreme court levels, and random stuff like county water commissioner, assessor, etc. and maybe one or two propositions.

In fairness, this has always struck me as a bit overly complex - and it’s been shown in at least one case that a confusing ballot layout affected an election outcome because people accidentally voted for the wrong person.

Scantron is definitely a way of reducing count error, but ballots could also be broken out into multiple ballots (colour-coded so the staff can check people are posting them in the right boxes). One for President and/or congressional elections, then a couple for state and county/local elections. Two or three ballots could significantly simplify layout and the presidential ones (say) can be prioritised - it’s an odd thing to be slow delivering a result for a national vote because they’re concurrently counting ballots for local sanitation superintendent.

rg287

Re: If you want secure elections

There's a difference between recording the votes and counting the votes though.

Following a couple of respected security bods on Mastodon who have given a lot of thought to election security, it does seem that good progress has been made in the last 10 years. The main one is that it's now recognised that computers can help with counting votes, but must never be responsible for recording votes. No system should ever require you to enter your vote electronically - there must be a physical ballot paper. Old touch-screen voting machines have been largely phased out. Some systems had you enter your vote and they printed a receipt - but these are also unacceptable as the receipt was often a barcode combining some sort of unique ID with the vote cast, which is not easily human-readable.

The happy medium seems to be that you mark your vote on a scantron-type ballot paper. You then scan it (and perhaps receive a receipt), and deposit your ballot in a traditional ballot box.

The computers provide a quick provisional result when voting closes, and you do an audit count of a statistically-significant proportion of ballots. In a landslide, you probably only count 10% of ballot papers. In a narrow race you'll basically end up doing a full manual count. On average this saves time (not waiting days for a result whilst mail-in votes are counted <side eye to Georgia>). Importantly, you've always got the paper ballots if one of your checks and balances ends up initiating a full recount.
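The sliding scale above is easy to sketch. This is a toy model, not a real risk-limiting-audit formula (the actual statistics, e.g. BRAVO-style audits, are more involved), and the 0.02 constant is an invented illustrative value - but it captures the shape: the narrower the margin, the bigger the hand count.

```python
def audit_fraction(margin: float) -> float:
    """Toy model: fraction of ballots to hand-count to confirm a
    machine count. A landslide needs only a small audit; a knife-edge
    race tends towards a full manual recount. The 0.02 constant is
    illustrative only - not a real risk-limiting-audit parameter."""
    return min(1.0, 0.02 / margin)

for margin in (0.20, 0.05, 0.01):  # landslide, comfortable, knife-edge
    print(f"margin {margin:.0%}: hand-count ~{audit_fraction(margin):.0%} of ballots")
```

A 20-point landslide gets a 10% audit; anything inside a couple of points effectively becomes a full recount - while the provisional machine result is already public.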

Moreover, even if you decide as a matter of policy to always do a full manual count, an electronically-counted provisional result nukes the hours (or in the US, days) of commentators testiculating about what's going to happen, or candidates submitting vexatious lawsuits to "stop the count". If voting closes at 10pm, there should be a provisional result published by 11pm and we can all go to bed. Whilst some might say "be more patient", it's reasonable to suggest that the fake news in the wake of the last US election (when early provisional results shifted because mail-in ballots favoured Biden, giving rise to "tampering" theories) makes a strong case for being able to publish a provisional result quickly, and then quietly do the manual count (or audit counts) without the pressure of releasing misleading interim results. Obviously also... just don't release interim or partial results.

If you're using a scantron-type counter and doing good audits, then by far the larger source of fraud will be coerced postal votes.

That said, I agree that it's a bit bold to call the last US election "the most secure". Whilst I'm not suggesting it suffered any tampering, elections pre-1980 were more passively-secure by virtue of being entirely manual.

ESA's Jupiter-bound Juice spacecraft has a sticky problem with its radar

rg287

Re: Who Me?

The "Remove Before Flight" pin? Why do you ask...?

US watchdog grounds SpaceX Starship after that explosion

rg287

It's at sea level so tunnels would flood. They've built a raised table instead. They plan to put a water-cooled steel deflector under the raised table.

In fairness, so is Kennedy Space Centre... but building up the launch pad the way they have in Florida would be an epic civil engineering undertaking, so if this design looked cheaper and equally effective (we understand acoustics and suchlike better than they did in the 1950s) then it makes sense that they'd go with the cheaper/low-civils option.

rg287

Re: They may call it a success...

the rocket was purposefully terminated, when stage separation was supposed to happen.

I think if you watch the video back, you'll see the stack performs more than a double backflip before abort (it's quite impressive that it didn't break up of its own accord). If they'd seen the mission was doomed and said "scrub when we go for separation", they wouldn't have pulled a double backflip first. Or they would have gone through stage separation (to validate the mechanism) and then aborted each component separately - but that also makes no sense, because they could still have done the simulated landing of the first stage, even if Starship couldn't make orbit. The flight went well past the planned separation before they hit the big red button.

Now, it's entirely possible that the failure of engines and the course correction caused buckling or torquing that locked up the separation mechanism. But there's no specific evidence at this point that the engine failures were the root cause.

Amazon, Bing, Wikipedia make EU's list of 'Very Large' platforms

rg287

The VLOPs listed include many of the usual suspects, although Wikipedia was something of a surprise

Wikipedia shouldn't be too much of a surprise, since it's become something of a core service underpinning things like the Knowledge Panels in Google & Bing search results (which are pulled from some combination of Wikipedia and Wikidata). Consequently changes in Wikipedia can have outsize impacts on downstream services - which include VLOPs and VLOSEs.

That's cute. UK.gov gathers up £100M for AI super-models

rg287

Could we not just spend that £1Bn on something useful, like a tram network for Leeds? Which would improve both actual mobility and social mobility.

Amazing how there's never any money for infrastructure projects that might benefit people's lives, but we can find £100m for a new chatbot or £900m for BritGPT (which has already been done, by the way - less AI, more Ay Eye).

Guy rejects top photo prize after revealing snap was actually made using AI

rg287

It's almost indistinguishable from the standard "we're the victim of a sophisticated cyber attack" plea from someone who left their S3 bucket public.

Yes, there was an attempt to mislead, and it is the responsibility of the organisers to weed out plagiarism and any other malfeasance. They failed to do so and are now burying their heads in the sand, refusing to discuss the matter publicly (except on their terms).

US changes rules on tax credits for electric cars to cover American-made only

rg287

Re: News

Historically, protectionist tariffs have worked really well.

Admittedly, targeting a specific sector like battery manufacturing - currently dominated by China - is unlikely to cast the USA into a 1930s-style Depression, whilst also addressing geo-political concerns. Of course, giving high-earners subsidies to buy $80k cars is also of dubious economic value - in terms of the overall US economy, they'd be better off funding urban redevelopment and public transit, which benefits non-drivers and the poor, not just wealthy drivers.

Identifying where prudent diversification (establishing US-based resource extraction and manufacturing for industries currently dominated by China) ends and protectionism begins is something that will probably only be done accurately in hindsight. So in the meantime, it's probably a good thing for such measures to be tightly targeted and limited in scope. But these measures are probably misplaced to start with.

SpaceX calendar marked with big red circle for 'first Starship launch' this month

rg287

Re: I can't wait.

From an outside viewer's point of view they moved from blowing up a lot of stuff to not lighting so much as a candle pretty much overnight.

Of course. They belly-flopped, relit the engines and landed successfully. Concept proven - now get on with the engineering task of building an actual orbital rocket.

The context is that, a couple of Autumns ago, Musk called all hands into the office over Thanksgiving to get this engine working, and said that they'd got very narrow financial margins. He suggested the possibility of the company going under if they didn't get it right. That fits what you've suggested - consolidating what they know into a workable solution, preferably without destructing anything.

That's mostly Musk being Musk, and ignores the fact they'd just met a major milestone in validating the concept. Why keep on flying test belly-flops once you know how to do it?

To succeed long term, SpaceX needs to assemble and mature a team so that they can make a thing work first time, like everyone else (more or less) does.

I think they have. This comes down to how broadly you scope "first time". StarShip probably will work first time - the brand new booster and 33 engines? Lots of sims and test fires. But there were some bits where they were willing to say "let's crash a couple of cheap test articles, because nobody has ever tried this (or anything like this) before". Fluid dynamics is very difficult to simulate - their best models will have had meaningful error margins about how fuel might slosh around in the tanks during the belly flop. They'll have modelled that, undoubtedly. But somewhere along the line they needed to validate the entire concept.

If you can't do destructive testing, then where does it end - are you "doing it wrong" if you prototype turbopumps or engines? Is the "proper way" to design the perfect engine and have confidence that it'll work the first time - strapped to a rocket with a customer payload up top? Obviously you don't want to waste money, but failures can also be successful and validate key aspects of the architecture.

Having said that, so much of what SpaceX are doing with StarShip / StarLink / Superheavy feels like a solution looking for a problem, rather than the other way round. StarLink makes only marginal commercial sense (it can, thanks to the weird politics of the USA / lobbying / existing telco monopolies, make money in the USA).

The US telco monopolies apply in rural Australia, the Antarctic, the mid-Pacific? It's a powerful solution to a real problem.

As for StarShip, it's a market-maker. We've seen an explosion in research, small sats and cube sats over the past 15 years as the $/kg has come down. StarShip basically lets you launch a space station in one hit. Sending heavy hardware to the moon to mine Helium-3, space telescopes that make Hubble look like a kid's toy, heavy research probes the likes of which scientists have previously only dreamed of. And yes - probably some space tourism.

For example, there were engineers in Boeing who were in utter disbelief that Boeing yet again were to revamp the 737 into the 737MAX. Had they been listened to, Boeing would have launched a whole new aircraft instead, probably avoided the Lion Air and Ethiopian Air crashes and the vast costs / delays associated with that poor management decision. Musk strikes me as the kind of person who won't have anyone in the company who asks awkward questions about his favourite solutions, and is no different to Boeing's management in that regard.

I mean, StarShip isn't just bolting SRBs or extra boosters onto F9H. It's the "difficult" engineering decision - the whole new aircraft. F9 is an evolutionary dead end - too thin and slender. It's volume-limited despite the enormous mass capacity. And they've not been beholden to sunk cost - remember they binned the world's largest composite-forming mandrel without using it once when they shifted tack to stainless.

rg287

Re: I can't wait.

The same was happening on Starship / Superheavy, but then somehow Musk got levered aside and they've spent a solid year or so catching up, not launching anything, being far more cautious.

Without wanting to be seen as a Musky fanboy (heavens forbid), and without disagreeing with the bulk of your post on QC, I'm not sure the last year has really been "SpaceX being cautious". They've simply moved from R&D to building an actual production ready-ish model. A year is hardly a long time given that automotive firms will spend well over a year developing new models, and we've been doing cars for a century.

They threw some test articles up to validate the novel aspects of the StarShip architecture (the bellyflop & landing), as well as getting some flight time on Raptor and checking that it relights the way they expect (it didn't always). That was spectacular and everyone got very excited - but they did all this before they constructed a single test article for the first stage booster.

It's no surprise that having chucked up some cheap-and-cheerful test articles, it's then taken them a year to work that into actual orbital hardware, as well as building up the booster, which needs 33 engines (all of which needed manufacturing - and Raptor production is a bottleneck on all of this). The booster is relatively conventional except that it's massive, which introduces things like the harmonics/vibration issue of lighting that many engines at once. And "mundane" tasks like the plumbing and wiring are not an innovation/R&D task as such, but nonetheless a major engineering/design job.

A year is not cautious - not compared to the time ULA and Blue Origin have spent on Vulcan and New Glenn.

Why Microsoft is really abandoning evaporative coolers at its Phoenix DCs

rg287

Re: Where is the salt coming from??

I'm a little confused how evaporation is somehow turning fresh water into brine? Where is the salt coming from?

The fresh water. It's not distilled, and whilst it's not as concentrated as sea water, it still contains minerals and salts (that's where limescale comes from).

If you evaporate off millions of gallons of fresh water (which is what these DCs are doing daily during peak periods), you'll have a briny sludge of minerals and salts left over.
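As a back-of-the-envelope illustration (the figures here are made up but representative - municipal supplies often run a few hundred ppm of total dissolved solids):

```python
# Toy mass balance for an evaporative cooler: the water leaves as
# vapour, but the dissolved minerals don't - so the "blowdown" water
# left behind gets steadily saltier.
feed_litres = 1_000_000     # water fed into the cooling towers
feed_tds_ppm = 300          # total dissolved solids in the feed water
evaporated_fraction = 0.95  # fraction of the feed that evaporates

blowdown_litres = feed_litres * (1 - evaporated_fraction)
# All of the solids end up concentrated in the remaining blowdown:
blowdown_tds_ppm = feed_tds_ppm * feed_litres / blowdown_litres
print(round(blowdown_tds_ppm))  # 20x the feed concentration
```

Evaporate 95% of the water and whatever remains is carrying twenty times the original mineral load - keep cycling it and you end up with the briny sludge.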

rg287

Re: So they need power?

In fairness, a quick search indicates MS have done a deal with a local solar generator to build out capacity for them to run mostly on solar (presumably not at night!).

What's weird though is the datacentres themselves have no panels on the roof. This isn't because there's other plant in the way - the cooling plant is all at ground level around the DC. Sure, I guess it doesn't make any difference to the bean counters whether you put panels on the roof or buy it in from down the road. But land in Phoenix isn't free (even if it's cheap - relatively speaking). And from a building-cooling perspective, actively sun-shading your roof with half a megawatt of solar panels is even better than painting it white. It does mean the roof needs the structural capacity for the weight of panels, but when you're building from scratch that's something you can reasonably accommodate.

This US national lab turned to AI to hunt rogue nukes

rg287

Re: Officially recognised???

Not sure who is the official doing the official recognition, or what "officially recognised" is supposed to mean in this context.

Under the NPT, a nuclear-weapon state is one which detonated a nuclear device before 1 Jan 1967 (surprise surprise, the five qualifying nations also happen to be the five permanent members of the UN Security Council). These are the only signatories who can legitimately have nuclear weapons under the treaty, because the point of the NPT is signatories saying "we won't develop/test/transfer tech".

India, Pakistan & Israel are not parties to the NPT. North Korea was, but withdrew in order to pursue its programme. South Sudan also hasn't got around to joining.

When 190 states are signed up and 5 aren't, that's generally a "consensus view" that you're in or out of the club.

But in practical terms yes, aside from the 5 NPT members, there are probably 3 additional "competent" nuclear weapon states (by which I mean they have a weapon system they could deploy with some expectation of "success" - not that using nuclear weapons is ever a successful endeavour), plus North Korea who make a big deal about testing a fizzle every now and then but could probably make a mess of Seoul even if the warhead just fizzled out and acted as a dirty bomb.

The US would sooner see TSMC fabs burn than let China have them

rg287

Re: Isn't that their plan?

If China invaded Taiwan it would be just the impetus that the US would need to call for an invasion of China

And how would they do that?

The US has a far more advanced blue water surface fleet, but trying to invade China? Getting past their enormous green-water navy, the land-based air force and then going toe-to-toe with the PLA and civilian population? No.

China's military has nowhere near the technological sophistication of the US. In neutral territory (e.g. a blue-water engagement), the US wins every time.

But what they do have is numbers - at home, countering an invasion force? Numbers count for a lot when you're not doing force-projection. Domestic logistics & supply chain is massively easier. Unless the US is going to open with the use of tactical nukes to decapitate the chain of command, it ain't going to happen.

I suppose - with Vietnamese support - you could try and occupy Hainan. But... why? What's it going to get you? You could also up the ante in the South China Sea and go turf the Chinese out of their artificial islands in the Spratlys. But the mainland is a bridge (or twenty!) too far.

Dems, Repubs eye up ban on chat apps they don't like

rg287

Do any of the people up in arms about TikTok understand anything about how Google behaves? They have their tentacles freaking everywhere and it is impossible to avoid them. Yet TT is some sort of “security risk.”

And Twitter. A social platform now run by a Russia-sympathiser who has eliminated any meaningful moderation.

If I were in the shoes of Congress, I'd be keeping a very close eye on TikTok as a near-term threat, but I'd be treating Twitter as a current threat to democracy as a known and proven vector for foreign influence operations.

Just because it's owned by an American (in trust for his Saudi & Qatari backers!) doesn't mean squat. There are lots of American domestic terrorists in US penitentiaries. Simply saying "China bad, US fine" is insufficient. And before the downvotes pile in, I'm not calling Musk a terrorist (he's just a sociopath) - just making the point that being US-owned should not be a free pass, or earn you less scrutiny than VK/TikTok. Because the Russkies can use Twitter just as easily as millions of Americans use TikTok.

Why ChatGPT should be considered a malevolent AI – and be destroyed

rg287

Re: Gross misunderstanding of the tool

It is a language statistical model that strings sentences together in ways it has been trained to do. It doesn't understand context. It doesn't understand truth.

It all makes sense now.

Has anybody ever seen Boris Johnson and ChatGPT in a room together?

Chinese defence boffins ponder microwaving Starlink satellites to stop surveillance

rg287

Re: Working

Both companies have teams that exist purely to manage the twat's idiocy. They're successful in spite of him, not because of him.

Musk built those teams. He convinced Shotwell, Mueller and others to come start a rocket company that wasn't backed by old-space. Team-building was actually his strength as a CEO - finding the right people to answer his questions or fill the right seats in his company. But at some point he started drinking his own Kool-Aid, and this (declining) ability to get the right people in the right place seems to correlate with his falling performance as a CEO.

You're quite right, SpaceX is now successful because Shotwell is getting the work done in spite of Musk. But he recruited and built that team in the first place. That was the thing he was legitimately very good at.

rg287

Re: Working

Musk is an odious little twat, but that's no reason to deny the revolution in launcher cost and reusability that Falcon 9 has brought to the industry.

This. Musk is a sociopath. But I hear people say "nothing he has produced is good", and they're factually wrong.

SpaceX has been a fire up the arse of the entire space industry. They currently hold an effective monopoly on non-Russian launches: ArianeSpace are flapping around trying to work out how to make Ariane 6 as good as the original Falcon 9, whilst ULA aren't going to do more than 5-6 Vulcan launches a year unless Blue Origin decide to become a serious engineering firm at some point and manufacture their BE-4 engines in volumes better than "we've just about managed to cobble together two for you". China and India are options for some people, but don't have the cadence to be a regular/reliable partner.

And whilst I hold no particular love for Tesla or their poorly-built cars (panel gaps the size of your fist), they have likewise lit a fire under parts of the motor industry, and broken the chicken-and-egg problem of charging infrastructure by just going and building it themselves. They mainstreamed EVs and that's no bad thing.

It's possible to separate out the person and the business. I have great admiration for SpaceX as a business, and will continue to do so - because they're doing what everybody else in the world has spent decades talking about but not actually doing.

rg287

Re: Working

Check SpaceX own data, they destroyed over 30 stacks some of them carried a small payload but not all, just to get the landing to work - it takes a lot of recycled launchers just to get to break-even over the cost of disposable launchers.

This is factually incorrect. Pretty much every F9 experimental landing was a paying revenue launch - including proper multi-tonne payloads like Dragon to ISS (CRS programme). There were some oddities like Orbcomm-OG2, which was rather lightweight. But that was because it was a Falcon 1 contract and it was cheaper for SpaceX to launch on a (then-proven) F9 than manufacture a one-off Falcon 1 (which they'd abandoned at that point).

Whilst the R&D was by no means free, their testing regime was on "spent" rockets which otherwise would have been deliberately dumped at sea (like every other rocket).

1) you must have a lot of launches using the same design stack planned.

Yep. That is how most rocket companies work. Design one rocket, sell it as many times as possible. They've now launched something in the region of 205 missions using 79 first-stage boosters - rather better than the 1:1 missions:boosters ratio that everyone else manages. On paper, that reuse record isn't as good as the Space Shuttle's, but they're also not charging a billion dollars per launch, and they haven't killed anyone.

2) you can only lift to LEO in order to have enough fuel left to make a controlled descent.

This is less limiting than you might imagine, since Falcon 9 is massively overpowered and a lot of people want to go to LEO (widely defined as anything <2,000km) anyway. Tom Mueller ended up getting a lot more power out of Merlin than they expected, with the result that F9 is predominantly volume-limited. You can't actually get up against the mass limits unless you're launching a tank of water, or trying to go to Mars. And people going beyond Earth orbit tend to have their own propulsion/kick stages and only want to get to GTO anyway. F9 has launched to GTO this year and recovered those stages.

rg287

Re: How many is critical mass

they'll burn up but some fragments will be accelerated and climb into a higher orbit which will cause problems for other higher satellites which are less likely after a collision to decay and burn up.

Yes and no. Conservation of momentum applies, so something that was in a circular(ish) orbit may end up in a much more elliptical orbit, with a higher apogee but much lower perigee. This could introduce debris to other - higher - orbital planes (on a temporary basis), which could induce Kessler syndrome higher up, even though that debris will also circle down to a low perigee.
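You can put rough numbers on that with the vis-viva equation. A sketch under simplifying assumptions (a single prograde impulse to one fragment - a real collision-debris model is far messier):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6371e3     # mean Earth radius, m

def apogee_after_kick(alt_km: float, dv: float) -> float:
    """Apogee altitude (km) after a prograde impulse of dv (m/s)
    applied in a circular orbit at alt_km. The impulse point becomes
    the new perigee, so perigee altitude stays at alt_km."""
    r = R_EARTH + alt_km * 1e3
    v = math.sqrt(MU / r) + dv          # circular speed plus the kick
    a = 1 / (2 / r - v**2 / MU)         # vis-viva -> semi-major axis
    return (2 * a - r - R_EARTH) / 1e3  # apogee altitude in km

# A 100 m/s prograde kick at 550km (a Starlink-like shell):
print(round(apogee_after_kick(550, 100)), "km apogee")
```

A modest kick at 550km throws the apogee up by several hundred kilometres - through higher shells - but the perigee stays down at 550km, so the fragment keeps dipping through denser atmosphere every orbit and continues to decay.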

But the StarLink constellations are all in low enough orbit that even with some uncertainty around debris trajectories, the shells will self-clear within 5-10 years. Which is a major inconvenience to a lot of projects (science, comms, remote sensing, military, etc), but is also a relatively temporary blip in the scheme of things. It's not like being trapped on Earth for decades.

It's a long way from the Chinese and Russians doing ASAT demonstrations on satellites at 800+km, producing debris which will stay up there for our lifetimes (and deorbit past/through the increasingly crowded shells at 300-500km).