Re: License?
I am asking about the HMRC tool, not the language it was written in.
Or, are you saying it has adopted the same license?
Google is unable to offer me anything coherent, how lucky you are that it likes you better.
My apologies for that remark. I was in a bad mood for other reasons at that moment. Once I cooled down I redacted it.
But as a spacer, you are wrong about what an airbreathing engine is: it draws in ambient air and applies some power source to accelerate it backwards, that is all. "Backed by an air ram" is known in the trade as a ramjet. This is why NASA's aim is to demonstrate an atmospheric thrust/drag ratio and not some space-related thrust parameter. Call it an airbreathing ion thruster if you will.
These are just electric-powered ramjets.
The game has been played at low airspeeds, with a model flown by MIT a few years back, the main difference being that the ionisation and acceleration took place outside the airframe, above the wing. Also, the plane used a DC electrode to ionise the air, not an electron beam.
The Wiki article on ABEP (sic) notes the use of a radio-frequency (RF) source to ionise the gas, just as I did professionally for some years when engineering waferfab stuff (not that we launched the wafers like UFOs; sadly, they lived in tightly-confined chambers).
As any designer of high-altitude ramjets will tell you, you have to get up to hypersonic speeds to achieve sufficient airflow. If you rely on superhigh exhaust velocities, your thermodynamics become inefficient, so you try to shift as much air as possible, as slowly as you can get away with. The trouble is, from a flow-rate perspective there is naff all mass in the air at these near-space altitudes. So you are looking at slightly higher air densities, and hence lower altitudes, in order to maintain efficient thrust.
Also, you have to get up to around Mach 20 before you gain enough centrifugal lift to think of your craft as orbiting rather than flying.
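For the doubters, a quick back-of-envelope in Python. The 100 km altitude and the 300 m/s speed of sound aloft are my own round numbers, not gospel, but they show why "around Mach 20" is where centrifugal lift starts to matter:

```python
import math

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m

def centrifugal_fraction(speed_m_s: float, altitude_m: float) -> float:
    """Fraction of the craft's weight offset by centrifugal effect
    when flying at the given speed along a circular path."""
    r = R_EARTH + altitude_m
    g_local = MU / r**2
    return (speed_m_s**2 / r) / g_local

altitude = 100e3            # ~100 km, near-space (assumed round number)
a_sound = 300.0             # rough speed of sound aloft, m/s (assumed)
v_mach20 = 20 * a_sound     # ~6 km/s

v_orbit = math.sqrt(MU / (R_EARTH + altitude))
print(f"circular orbital speed at 100 km: {v_orbit/1000:.1f} km/s")
print(f"weight offset at Mach 20: {centrifugal_fraction(v_mach20, altitude):.0%}")
```

At Mach 20 on those numbers, well over half your weight is being carried centrifugally rather than aerodynamically, and full circular orbit sits at about 7.8 km/s.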
So you end up with a hypersonic ramjet and all the problems of airframe heating that brings. This is the real killer: to get in enough air to generate the required thrust, you have to punch through so much of it that your airframe melts in a few minutes. Your only hope is hypersonic aerodynamics to reduce the thrust required, and new heat-resistant materials and techniques.
Next, we get onto the rate of energy delivery, aka engine power. It is phenomenal. Think of the rocket thrust needed to sustain Mach 20 flight at such altitudes. All that has to come from the electron gun, and any other field-enhancing gadgets you can come up with. You should see the power supplies we built just to sustain a pretty light above a 6" wafer for 30 secs or so. Battery drain would be staggering, recovering ambient energy a drop in the ocean. Increasing the exhaust velocity to allow reduced mass flow would drain the batteries even faster. This is not a satellite but a short-range cruise missile.
The history of electromagnetics has been full of horse shit and snake oil since the days of Nikola Tesla, and the tradition shows no sign of slowing down.
All too many of us Reg commentards did.
But there is also an L in IoT, for "La-la-la" with your fingers in your ears.
And it's not just the firmware, the older and cheaper hardware is also insecure by design. Secure from the ground up is still rare.
Why I still refuse a smartmeter to this day.
I have always wondered why the connectors for HDMI are so crap: always wobbly and needing careful attention, frequently bending, breaking or falling out. Seems the HDMI Forum has found another foot to shoot itself in.
With USB C monitors available for all that TV/fillum streaming/downloads, who is still going to want an unreliable HDMI-only "home entertainment" dinosaur?
I have this insane idea. Just store all that information on a publicly-visible site with a search tool. Only hide material which is excepted from FOIA.
All it needs is some AD, network and user group reconfig, and a new home page. Okay okay, and maybe an FOIA column in your legacy SharePoint libraries.
I don't know why I bother to get up in the mornings.
On The Other Hand, dipping into the sticky stuff causes drag and slows the sat (in fact All LEO sats suffer this to some extent, that's what defines LEO). Also, the ionization energy you put in gets wasted as heat on recombination. So you have to scoop-eject extra propellant to maintain orbital speed. And on both counts you have to have a beefier leccy supply, not forgetting that solar panels would cause even more drag so it has to be all-onboard.
Not saying the balance can't be tipped in favour of a net efficiency gain, but it's no easy ride.
I guess the best way to look at it is as an electric ramjet.
Hi Bazza,
Bank switching is an ancient technique for easing the traditional dilemma between a small directly-addressable space and a larger store. By pointing the memory bus at a different block, aka bank, you gain some of the benefits of bulk storage along with some of the benefits of direct addressing. The technique originated in the mainframe world, and became common in the 8-bit micro revolution once demands for ROM+RAM exceeded 64 KB. For example, it was used by the 128K Speccies. It too needs its own approach to memory management and to coding demanding apps.
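A toy sketch of the idea in Python, loosely in the style of the 128K Spectrum's paging (the class and its method names are made up for illustration; only the 16 KB window at 0xC000 is the Speccy's actual layout):

```python
class BankedMemory:
    """Toy model of 8-bit bank switching: a 64 KB address space whose
    top 16 KB window can be pointed at any one of several 16 KB banks."""

    WINDOW_BASE = 0xC000      # top 16 KB of the 64 KB map is switchable
    BANK_SIZE = 0x4000

    def __init__(self, n_banks: int = 8):
        self.fixed = bytearray(self.WINDOW_BASE)   # always-visible 48 KB
        self.banks = [bytearray(self.BANK_SIZE) for _ in range(n_banks)]
        self.current = 0

    def select_bank(self, n: int) -> None:
        self.current = n       # one register write remaps 16 KB at once

    def read(self, addr: int) -> int:
        if addr < self.WINDOW_BASE:
            return self.fixed[addr]
        return self.banks[self.current][addr - self.WINDOW_BASE]

    def write(self, addr: int, value: int) -> None:
        if addr < self.WINDOW_BASE:
            self.fixed[addr] = value
        else:
            self.banks[self.current][addr - self.WINDOW_BASE] = value

mem = BankedMemory()
mem.select_bank(1)
mem.write(0xC000, 0xAA)        # lands in bank 1
mem.select_bank(2)
mem.write(0xC000, 0xBB)        # same address, different bank
mem.select_bank(1)
print(hex(mem.read(0xC000)))   # 0xaa again
```

Same CPU address, different physical bytes depending on which bank is paged in, which is precisely why demanding apps need their own memory-management discipline.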
Then, there is the linear storage of tape. Mainframes used it for bulk storage, and some developed automated to-and-fro write/read systems which enabled its use as dynamic storage during computation, when the thing ran out of core store. It was a bad idea because the tape wore out, but that didn't stop Sinclair reinventing it for the Microdrive - though looping the tape rather than reversing it.
There are probably other paradigms we have missed.
All in all, the reality is far from the binary RAM+bulk suggested by the article.
TLDR, but anybody remember Symbian? It began life as Psion's EPOC, the OS that powered the 1990s PDA revolution on devices such as the Psion Series 5. Fast, non-volatile memory was hitting the high street and Psion recognised its potential for running code straight from it. Loading it into conventional RAM would still run faster, but the instant-on and low power of the NVRAM was seen as the way ahead.
My own memory is getting a bit volatile these days, but ISTR Symbian was held in cheap ROM and loaded into RAM for speed, but apps were installed to and ran from NVRAM. Perhaps somebody can confirm/deny the truth of it?
The important point is that RISC OS system did not catch on. You can logic-chop your way through a thousand piles of bent words but it won't make the RISC OS a success. At least my brain is a mite bigger than yours.
You say that Android is a UNIX. Has anybody ever put that to the test suite and seen it through to certification? I do like evidence-based discussion.
>sigh!< The RISC OS system was three buttons for Select, Menu and Adjust. Adjust was usually a second menu with more arcane options (If you really hate yourself, the RISC OS Style Guide tells all). Do, please, show me the contemporary apps which use the mouse wheel for a third, Adjust menu in this way. No, on second thoughts, keep it for my deathbed.
Nope. Android is at best a UNIX replica. It is not a real UNIX. It's not even a very good replica; you certainly can't install the average UNIX app such as Oracle for UNIX and expect it to work. FFS, stop changing the meanings of words to suit your desired sophistry. As any programmer knows, that game doesn't work with strongly-typed languages and it creates havoc with loosely-typed languages like English.
Interesting as ever, lots of stuff I didn't know. But also as ever, not quite as I recall it. Icon for the appropriate discussion venue. Meanwhile:
First, Liam's favourite "Linux is a UNIX" trope. This is nonsense; by that argument, Android is a UNIX too. Just because a handful of distros swim, quack and walk like a duck does not mean they all do. I mean, reality check here, please. The most we might want to claim is that those few Linux distros are Unices. But even there, one might suggest they just happen to look and fly like Unices, in the same way that a replica De Havilland Comet Racer is not a real one, even if it is a thoroughbred bitch in the stall.
Then, if we want a Swiss Army knife of an OS, do we actually want networking and GUIs in the OS core as such? RISC OS put the GUI in the core and it eventually became clunky and outdated; design choices such as the three-button mouse failed the test of time but proved too deeply entrenched to fix. Baked-in networking is just a massive security risk. In both cases, choice is critical in tailoring the system to your needs. Baking them into the OS is not offering choice.
Somewhere I lost the core thread - where exactly in the tale did we end up coding ourselves into a corner? Now, that is probably because I am old and ugly and only have three cylinders still firing, but I'd still like it in a pithy one-liner.
The main problems with UK Gov adoption of open source are legal.
We already have an Open Government Licence, intended to apply to software as well as published information. But it retains HMG copyright, so only some F/LOSS projects can accept code from HMG. Also, it raises the question of who in the wider community would want to contribute to a fully HMG-maintained project.
Then, Big Gov demands Reliable = Big Suppliers, whose off-the-shelf products are far from open. Also, most departments are still geared around buying a solution and then coughing up extra for support. The process of buying support, with the solution coming for free, is not on their procurement flow diagrams and there is no money or expertise to revise the 25-year-old Visio horror. And besides, releasing Crown copyright material from all copyright obligations would require an Act of Parliament.
The sensible way forward is to commission an external supplier to write a viable app, or maybe extend/tweak some existing OSS app, and stipulate that they release their code under an open license. Then pay them and others to grow the enterprise code base step by step from there. It'd still be a decade away from an open ERP framework, and sadly politicians don't have that kind of staying power.
You mean like a maglev train weighing a humungous 0.2 milligrammes?
The problem is not scale, it is stability. The fields induce plasma currents which screw them up in return, so you have to faff around stabilising them with auxiliary fields faster than they can screw up. Never, ever look at the equations or your brain will destabilise even faster.
Down at the local golf club:
"My dashboard cost more than your dashboard. Yah!"
"You're not counting the cost of the blockchain data archive mine creates. Boo!"
"I pay that every month for my cloud DaaS. That's dashboard-as-a-service, dear boy. Sucks!"
"Who's that toff in the Veyron?"
"Said his name was Boff-with-an-h or something. No breeding."
Any change is likely to require some kind of kit uplift. For most routers, enabling the 255 block should need no more than a software/config update. I'll bet more than you think will just pass it through already. I mean, why would C. Heap Shitt implement rules for stuff that "doesn't exist"?
The plain 255 block address could be allocated to a re-router (call it a v4++ gateway). Any v4 kit would forward the 255 traffic to the gateway, which would pick up on the v4++ flag and re-route. Not totally unlike the NAT principle, but no need to cache the source IP on the gateway because it persists in the wrapper. There are probably smarter compatibility solutions than that out there.
Of course, some users like the IP-masking anonymity that NAT offers, saves a two-ended VPN or superslow TOR, so that's another reason why kissing v4 goodbye is a pipe dream. Indeed, the anonymity of NAT is surely the main flashpoint to any alternative offering: does we does or does we don't?
In fact, I'd go so far as to say that if v6 evangelists want to make real headway, get a v6 NAT-style anonymiser out there. The spooks will hate you for it, but you can't have everything.
A big problem with v6 is that many links rely on NAT. For example my mobile router NATs anything I connect through onto the carrier's PLMN. It emerges from their gateway with whatever random IP they care to apply. As IP continues to ripple outwards/downwards through the mobile core, converting to v6 means replacing vast numbers of consumer units. Not going to happen.
Kinder to just allocate the 255.x.x.x block to this kind of scheme. And rather than NAT as we know and love/hate it, a second-generation extension protocol might be considered. For example any address in the 255.x.x.x range triggers a "there is another one to follow" flag, thus creating a 56-bit (255.x.x.x + x.x.x.x) address range.
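To make the arithmetic concrete, a quick sketch of how such a flagged extension might pack out. This is purely my own illustration of the scheme floated above, not any real protocol; the function names are invented. The 255.x.x.x "flag" address carries 24 useful bits and the trailing plain address carries 32 more, giving the 56-bit range:

```python
def encode_v4pp(addr56: int) -> tuple[str, str]:
    """Split a 56-bit address into a 255.x.x.x 'flag' address (24 useful
    bits, with 255 meaning 'another address follows') plus a trailing
    plain 32-bit dotted-quad address."""
    if addr56 >= 1 << 56:
        raise ValueError("address does not fit in 56 bits")
    hi, lo = addr56 >> 32, addr56 & 0xFFFFFFFF
    flag = f"255.{(hi >> 16) & 0xFF}.{(hi >> 8) & 0xFF}.{hi & 0xFF}"
    tail = ".".join(str((lo >> s) & 0xFF) for s in (24, 16, 8, 0))
    return flag, tail

def decode_v4pp(flag: str, tail: str) -> int:
    """Reassemble the 56-bit address from the flagged pair."""
    f = [int(o) for o in flag.split(".")]
    assert f[0] == 255, "not a v4++ flagged address"
    hi = (f[1] << 16) | (f[2] << 8) | f[3]
    t = [int(o) for o in tail.split(".")]
    lo = (t[0] << 24) | (t[1] << 16) | (t[2] << 8) | t[3]
    return (hi << 32) | lo

pair = encode_v4pp(0x12_3456_789A_BCDE)
print(pair)                          # ('255.18.52.86', '120.154.188.222')
assert decode_v4pp(*pair) == 0x12_3456_789A_BCDE
```

Legacy kit just sees two ordinary v4 addresses and forwards the 255-flagged one to the gateway; only v4++-aware kit joins the pair back up.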
Could then recall the odd extra v4 block as another range of flagged extensions, if/when we ever run low again.
Actually came here to propose this but YBMTI.
Browsers have to render whatever is frikkin' out there in numbers, be it WebP, AVIF, JXL, or whatever.
If the makers try to distort the market through denial of service, some other browser will come along and toast them.
Anybody still using Mosaic, Netscape, IE or Edge? No? There's a reason for that....
Salt? Apparently our esteemed researcher discovered the recipe in an ancient Chinese manuscript. So it's probably true for Chinese tea, which to my palate tastes bitter - unlike the Kenyan tea, of Indian descent, that ends up in our teabags. I prefer my China tea with a slice of lemon and no milk - or salt! Sugar according to the weather.
But then, someone has to be first to recall Arthur Dent's rant at the Sirius Cybernetics AI drinks dispenser on the Heart of Gold spaceship.
I am actually more accustomed to NATO Standard (1. milk, 2. two sugars, 3. strong, India-style cheap tea from a very large pot), which is just what you need when you come in out of the storm and tramp across the Portakabin (if you're lucky) in your muddy Doc Martens.
Tibetans traditionally prefer China style tea but with all the twiggy bits left in, to which they add a dob of butter. Most butter in Tibet is rancid (They use it for sculpture in the winter) and His Holiness the current Dalai Lama has said that the quality of the tea depends very much on the quality of the butter.
The best tea I ever drank was a loose-leaf tea, probably from Ceylon, brewed in an ancient aluminium teapot (silver is so gauche, dear boy) encrusted with the brown residues of a thousand such brews.
Salted caramel choccie anybody? Yecch!