Re: So let’s forbid anything that’s new?
Stop being afraid of everything new, and stop passing laws that will hinder progress towards lower fatalities.
I find your lack of faith disturbing
I guess if you have N cells and discharge them one by one then you have a countdown to empty, even if you don't know the progress accurately for each cell?
Would need some cycle/wear levelling so they are all roughly the same "age", so each one reaching cut-out voltage represents a similar amount of energy used.
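The count-down idea above can be sketched in a few lines. This is a minimal illustration, not any real BMS API: the cut-out voltage and function names are assumptions for the example.

```python
# Crude state-of-charge gauge that counts depleted cells, assuming N
# nominally identical, wear-levelled cells discharged one at a time.
# CUTOFF_VOLTAGE is an assumed, typical Li-ion cut-out value.

CUTOFF_VOLTAGE = 3.0  # volts (assumption for illustration)

def cells_remaining(cell_voltages, cutoff=CUTOFF_VOLTAGE):
    """Count cells still above the cut-out voltage."""
    return sum(1 for v in cell_voltages if v > cutoff)

def charge_fraction(cell_voltages, cutoff=CUTOFF_VOLTAGE):
    """Rough state of charge: fraction of cells not yet depleted.
    Only meaningful if wear levelling keeps the cells at similar capacity."""
    return cells_remaining(cell_voltages, cutoff) / len(cell_voltages)

# Example: 10 cells, 3 already at cut-out -> gauge reads 70%
voltages = [3.7] * 7 + [2.9] * 3
print(charge_fraction(voltages))  # 0.7
```

Note the gauge is only as linear as the wear levelling makes it: if one cell has lost half its capacity, its "tick" on the countdown represents half as much energy as the others.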
It is not just the usual poor-by-design aspect; once commissioned, no one really wants to update stuff in case it breaks something critical. No one (or very few) has a whole off-line duplicate to test on, so it comes down to "are you feeling lucky, punk?"
So as above, start by assuming your control system is vulnerable and lubed ready for every curious and/or kinky internet punk to probe, then design your network access from that point onwards.
However, your point about the rise of Idiocracy is quite right.
The UK has had a decent vaccine roll-out, perhaps the only thing they have not utterly cocked up, and soon most will be vaccinated (or at least have been offered it). At that point the need for any passport is gone, and in the short window between many and near-all being done it is not worth the discriminatory nature or the effort for what should be a couple of months.
Ah, but is this a stick/carrot to get anti-vaxxers to change? Who knows. Personally I would wait and if they start dying off due to stupidity then it is Darwin 1, YouTube 0
Because the camera's software is Windows-only.
We had some Vivotek cameras and the software recorder/manager supplied by them was surprisingly not crap, but it was over 5 years after UAC was enforced by MS before they fixed their software so the poor minimum-wage sod checking stuff didn't need to be an administrator.
I am sure there are better options, but when faced with stuff that mostly "works" versus spending weeks trying to find a better alternative, you just put it on a Win7 VM with no updates and firewall the hell out of it.
It applies more to "service stations". Typically a motorway stop will have something like 6-8 pumps, so we are looking at something like 20MW available to provide 10 min charges for long-range support to commercial drivers, holidaymakers, etc. While folk would love to see 800-mile charge ranges I strongly suspect we won't ever see that; rather, improved battery energy density will be used to make a lighter and safer battery pack, so cars in crashes don't go all Ford Pinto on the occupants.
Are any existing service stations going to be able to afford it?
If not, then we are looking at moving society into a position where car and van use is largely local, with long distance by train and similar. Not necessarily a bad thing, but without many fast charge points we will struggle to deal with the large number of people relying on on-street parking with no reserved areas, and local authorities who lack the budgets to electrify them. Even assuming the local infrastructure has enough capacity.
They love selling electricity.
And they hate replacing infrastructure, which costs serious money. If you want a new supply you will be charged something like £120/m for the cable route for a domestic 3-phase arrangement (max load around 70kW, assuming the local substation has spare capacity). If you wanted the 2+MW that the above commentards have discussed for a 10 min charge you would need your own substation and 11kV supply. Have you tried asking the price for that?
You still need to sync the atomic clocks together in the first place, and to keep them agreeing afterwards (depending on the level of time accuracy you need)!
For that you need something like GPS, so really it comes down to how many will pay extra for an atomic-clock reference oscillator in addition to the GPS receiver, outdoor antenna, etc. Many should, if they are running essential services, but usually the bean counters say no...
We used to have one of the Sun/Oracle storage servers with the dual heads configured as active/passive and linked via both an Ethernet cable and a pair of RS232 lines. That was, allegedly, so it could synchronise configuration via the Ethernet link, with the RS232 as a final check on connectivity to avoid the "split brain" problem of both heads attempting to become master at once.
It was an utterly useless system. In the 5+ years we had it as primary storage it failed over a dozen times for various reasons and only occasionally did the passive head take over. We complained and raised a bug report with Oracle and they just said it was "working as designed" because it was only to take over if there was a kernel panic on the active head. Failing to serve files, its sole purpose in life, due to partial borking was not considered a problem apparently.
The conclusion we came to was that paying for professional systems from big companies is a waste of time. Sure, we had a soft, strong and absorbent maintenance SLA, but we would have had less trouble with a single-head home-made FreeNAS server and a watchdog daemon.
For classic NTP operation it is recommended that you have 4 or more time servers configured on each client so they can detect problems, including a broken/false clock source. That could be costly in hardware, so you might have 1 or 2 local GPS-based servers that offer precise time due to low, symmetric LAN delays, backed up by servers across the internet at large that can catch one of the GPS sources going massively stupid but on their own only offer accuracy to several or tens of milliseconds.
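The falseticker-detection idea can be illustrated with a toy majority test. Real NTP uses Marzullo's intersection algorithm over error intervals (RFC 5905); this median-distance version is just a sketch, and the function name and tolerance are assumptions for the example.

```python
# Toy illustration of why 4+ time sources are recommended: given reported
# offsets from each server, the client keeps the majority cluster and
# discards outliers. Real NTP intersects error intervals; this simpler
# sketch just measures distance from the median offset.

def detect_falsetickers(offsets_ms, tolerance_ms=50.0):
    """Split offsets into (truechimers, falsetickers) by median distance."""
    ordered = sorted(offsets_ms)
    median = ordered[len(ordered) // 2]
    good = [o for o in offsets_ms if abs(o - median) <= tolerance_ms]
    bad = [o for o in offsets_ms if abs(o - median) > tolerance_ms]
    return good, bad

# Three sane servers a few ms out, one GPS source gone massively stupid:
good, bad = detect_falsetickers([1.2, -3.4, 2.1, 90_000.0])
print(good)  # [1.2, -3.4, 2.1]
print(bad)   # [90000.0]
```

With only 2 servers you cannot tell which one is lying, and with 3 a single failure leaves no margin, hence the recommendation for 4 or more.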
People complain about UI changes and install legacy GUI's to keep the old feel.
No, they do it to avoid the loss in productivity that comes from fscking around with an interface that works perfectly well.
Take the Windows GUI as an example, and compare the layout of win95 with win10 - can you point to a single change that actually makes life easier?
Oddly, with the limited number of IPv4 addresses we ended up with NAT as the default for home routers and most small businesses, which automatically made "default deny" the standard for incoming connections. Of course that only lasted until we had UPnP breaking it for any dodgy software running on the user's PC, or the design goal of IPv6 offering access by default for every device in existence.
And this highlights one flaw in the idea of authenticated access to the network: as soon as someone's PC (or other device) is compromised, it gets their access credentials, and since compromise is often pull-based now (email or web-site malware) it can do the same to everything they have access to. So while such network rules might help reduce a free-for-all in the LAN, it really is not dealing with your typical ransomware attack on small business or home users. For that they need an immutable copy of important files, and some means to wipe and re-install the machine(s) affected. The cloud-based accounts on offer promise this, but at what cost in ongoing expense and in privacy?
indexing, scanning, and monitoring 1.63 million JavaScript libraries
Does that not strike fear into your heart? Surely the number of useful libraries must be a lot, lot less! How many of them were written by someone who didn't bother to check whether it had already been standardised, making new and exciting mistakes again and again?