Reasons?
- Politics
- Humans are much more flexible in what they can do; a robot explorer is fixed in function
The downside is that humans are much more delicate, tend to want to be returned to Earth, and need rather more in the way of a support environment.
It's definitely about making money, not "innovation". After all, these are the elected criminals who think it is perfectly reasonable to sell country-wide personal medical data to the US pharma industry and to control this through a paper-based, time-limited "opt-out".
The massive downside from the recipient point of view is that UC is designed to reduce claims and to make it as hard as possible to claim anything.
Previously unlinked benefits suddenly become linked in UC even though the official narrative is that they are not. It's all down to the aim of reducing the payouts: not social support, not fairness, not inclusiveness and accessibility, just reducing payouts. That the usual kickbacks and trough-feeders are embedded in it serves the further aim of taking money from government and putting it into private corporate hands; actually delivering anything is a sideline.
Every step is reducing security, not improving it.
Security is always a balance of convenience against actual security, which is fine, but taking the secret, changeable part of authentication and replacing it with something that is neither secret nor changeable is nothing short of idiotic. Swapping the user identifier for biometrics is fine; pair this with a password and one has an improved system. Replacing the password with biometrics always reduces the security - hell, it's what we do when letting someone we know, and possibly expect, into our own home. Replacing both the user identifier and the password with biometrics is fine only for very low-security scenarios.
Replacing the secret, changeable component of an authentication system with something that is neither secret nor changeable is security idiocy, pushed by those who have absolutely no concept of security whatsoever and live in the shiny la-la land of Hollywood movies.
Biometrics can add to security; in fact they are a very good replacement for the user identifier. However, they are not a replacement for the secret component.
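To make the roles concrete, here's a minimal Python sketch (all names and values hypothetical) of the pairing described above: the biometric template stands in for the user identifier, while the password remains the secret, changeable factor.

```python
import hashlib
import hmac

# Hypothetical user store: biometric template id -> salted password hash.
# A real biometric match is fuzzy, not an exact lookup; this only
# illustrates which factor plays which role.
USERS = {
    "bio-template-alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest(),
}

def authenticate(bio_template: str, password: str) -> bool:
    """The biometric identifies WHO is claiming access; the password proves it."""
    stored = USERS.get(bio_template)
    if stored is None:
        return False  # unknown identity
    candidate = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return hmac.compare_digest(stored, candidate)  # constant-time compare
```

Drop the password check and the system accepts anyone who can present (or copy) the non-secret identifier, which is exactly the reduction in security described.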
I get the fact that you (the article author) loved Unity as a desktop, and people are welcome to their opinions and experimentation and variety is a very good thing.
However, as a productive UI, Unity was appalling. It only worked if the user had at most a couple of applications open and, in general terms, only a couple of applications that they ever wanted to open. Anything beyond that, or an attention span longer than a gnat's, and the UI was barely usable at best. I am definitely not alone in ditching the distro as a result of the UI - I first switched to another variant without the crippling UI, and then to another distro entirely.
It's not that Unity couldn't have been fixed, and the operation of it amended so it was productive and usable and still kept some of the mechanisms... but the developers of it weren't interested in that and seemed to make it worse with every iteration.
A major part of "safe" is to restrict the surface area of the attack.
The support staff at our MSP just could not get their heads around why I insisted that the remote support web page for our Internet firewall be either disabled or locked to a couple of specified IP addresses, and not just left open to the public Internet. Their reasoning was that "it is protected with a strong password" - a blinkered, naive concept of safety... different supplier here, but these exploits take place before a password is ever supplied.
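The point about rejecting connections before any password is exchanged can be sketched in a few lines of Python using the standard `ipaddress` module (the addresses are documentation examples, not real management IPs):

```python
from ipaddress import ip_address, ip_network

# Hypothetical allowlist: the two management addresses/ranges the MSP uses.
ALLOWED = [ip_network("203.0.113.10/32"), ip_network("198.51.100.0/29")]

def admin_access_permitted(client_ip: str) -> bool:
    """Drop the connection before any password exchange even starts."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED)
```

The same rule expressed in a firewall ACL is preferable, of course; the sketch just shows how small the attack surface becomes once only a handful of addresses can reach the login page at all.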
By "code" I presume you mean mash the keyboard a few times, likely failing to use any keyboard shortcuts because only luddites use anything other than thumbing a touchscreen and hoping that the code-assist-ai-wonder produces the appropriate context menu and code, compiling it and declaring it complete as long as it looks shiny?
Usability, accessibility, functionality, error handling and so on are only for dinosaurs.
Same here - Unity was one of the worst UIs crapped out for a while. At least until Windows 8 was inflicted on people anyway.
I'm all for choice and variety and trying things, but Unity was so useless by way of usability that it could only really appeal to users with zero attention span. That way they could forget how unusable it was in between having to use it and just think "shiny" instead. Any attempts at serious use of Unity quickly revealed it to be utterly unusable - for example trying to have more than a couple of applications running and switching between them.
I ditched the distro and didn't return.
I laughed at idiots who believed the drivel about 5GL programming languages and I still do. We are many, many, many years and iterations away from any form of ML actually generating efficient and decent code.
We're still suffering from idiotic mathematicians who, only a few years ago, tried to claim that all computer programs can be described mathematically and that 5GLs would therefore just generate the code. Their pathetic early examples generated some of the most stupid, inefficient and still very limited (barely more than "hello world") code.
There is a reason why line-drawing graphics were done in the more restricted modes! Drawing pixel lines (and other shapes) while having to set colour information at the same time was distinctly unpleasant, and really slowed code down too.
Most games didn't use the hi-res mode though - at least not for the animated parts of the game. Split screen, using it for the score and such, was common though. While it was similar to the Spectrum's mode, from memory it worked subtly differently, and therefore most C64 games used the multi-colour (double-wide pixel) mode instead.
I believe part of the reason is the limited-size stack, implemented because of the very limited memory available. Have too many jump-with-return calls in flight (whatever your language of choice calls them) and the stack very quickly fills up; when the stack overflows, the system is toast.
Recursion was not a sensible thing until rather later.
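As a rough illustration, taking the 6502 (the CPU in the C64 discussed nearby) as an example: its hardware stack is a single 256-byte page, and each jump-with-return (JSR) pushes a two-byte return address, so even pure call nesting tops out fast. A toy Python model:

```python
STACK_BYTES = 256       # the 6502's hardware stack occupies page one ($0100-$01FF)
RETURN_ADDR_BYTES = 2   # each JSR pushes a two-byte return address

def max_nested_calls() -> int:
    """Deepest pure call nesting before return addresses fill the stack."""
    return STACK_BYTES // RETURN_ADDR_BYTES

def fits(depth: int) -> bool:
    """True if `depth` nested calls fit without the stack wrapping."""
    return depth * RETURN_ADDR_BYTES <= STACK_BYTES
```

In practice the limit arrives even sooner, since the same stack also holds interrupt state and any values the program pushes; 128 is only the theoretical ceiling for bare nesting.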
RAM prices were utterly insane in home computing at that point in time, and systems tended to need a lot of chips to reach even an appreciable, for the time, amount of RAM.
Something that I didn't understand to start with, but definitely appreciated later, was the way the C64 came with 16K of its RAM masked by the BASIC and KERNAL ROMs. This did seem a waste of memory at first, but the ROMs could be disabled in code, freeing up the memory. One of the tricks I had was to copy the ROM to the RAM (when not swapped out, the ROM could be read, while writes just went to the RAM underlying it) and then disable the ROMs. After this, the BASIC and KERNAL were freely hackable to do anything one wanted: to speed things up, to add extra commands and so on. It wasn't immediately obvious how to add lots of extra commands until I found the call to the parser and extended that.
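A toy Python model of the banking quirk described above (heavily simplified: on the real machine the banking is controlled via the processor port at $0001, and the ROMs cover two separate 8K regions):

```python
class BankedMemory:
    """Toy model of the C64 quirk: with ROM banked in, reads come from
    ROM, but writes always fall through to the RAM underneath it."""

    def __init__(self, rom):
        self.rom = list(rom)
        self.ram = [0] * len(rom)
        self.rom_enabled = True

    def read(self, addr):
        return self.rom[addr] if self.rom_enabled else self.ram[addr]

    def write(self, addr, value):
        self.ram[addr] = value  # writes land in RAM regardless of banking

def unshadow(mem):
    """The trick: read each ROM byte and write it straight back, which
    copies the ROM into the RAM below it, then bank the ROM out."""
    for addr in range(len(mem.rom)):
        mem.write(addr, mem.read(addr))
    mem.rom_enabled = False
```

After `unshadow`, reads return the same bytes as before, but they now come from RAM and can be patched freely - exactly the hackable BASIC/KERNAL described above.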
Google should have an app like that in the iOS App Store and Apple should have an equivalent in the Google Play Store. Same goes for any other similar vendors too.
...and just as importantly they must not put blocks in the way of making these work.
Naturally, Microsoft would never have done anything similar, they'd have only just about vomited out a barely working "come to Microsoft and we'll change things so regularly and obscure things as much as possible that you can't leave even when you want to" type app.
I full on laughed at a guest speaker at some event I went to a few years ago where he was standing there extolling the virtues of "the blockchain" (that's what he called it), how the blockchain can perform iterative functions, how the blockchain can have triggers in it to trigger external events and processes and lots of similarly wonderful total bullshit.
He was not amused when I stated that "the blockchain", or more accurately a block chain, is nothing more than an algorithmic approach to storing data, with no advantages but lots of disadvantages unless there are a large number of distributed systems holding the entire database. Add to this that there are no concurrency controls over such a distributed database, and almost no security either. He was not even able to describe what "the blockchain" actually was, nor able to argue when I got fed up with his bullshit and told him straight that he had no idea what he was talking about.
I similarly squashed a suggestion about converting an in-house database into a blockchain data store: absolutely no advantages but lots of disadvantages.
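For what a block chain actually is at heart - records linked by hashes - a few lines of Python suffice. Note that, absent many independent replicas cross-checking each other, nothing here does more than a plain append-only log table would:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Chain:
    """A block chain in the plain sense: each record carries the hash of
    the previous one, so rewriting history breaks the links."""

    def __init__(self):
        self.blocks = [{"data": "genesis", "prev": ""}]

    def append(self, data) -> None:
        self.blocks.append({"data": data, "prev": block_hash(self.blocks[-1])})

    def valid(self) -> bool:
        return all(
            self.blocks[i]["prev"] == block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )
```

Tamper-evidence is the only property this buys, and only against someone who can't simply rewrite the whole chain - which, on a single in-house database, anyone with write access can.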
Computers tend to do what they are told to do. The problem comes where there is a difference between what they are told and what is, or was originally, intended.
That, and unthinking developers who struggle with a single binary state and are unable to comprehend that there could be more states beyond it, let alone how to handle them safely.
They are "mainstream operating systems" even if you are too blinkered to know this.
Pulled from Wikipedia because I'm too lazy to write it myself:
An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.
What parts of this do Android and iOS not do?
Not all software installable on Windows is installable on MacOS and vice versa - does this make one or the other not an Operating System? You can't just pick arbitrary applications and decide that if an OS cannot install them then it's not an OS, or even not a mainstream OS.
Statistics show that Android is the most common Operating System accessing websites in the world, followed by Windows. Android, like Windows, is also deployed on systems that are not used to browse websites, so the numbers are not exact, but broadly such systems are in the minority. Here's a chart of OS usage worldwide: https://gs.statcounter.com/os-market-share (selected as it was the first result in Google, no other reason)
It hasn't stopped them in the past.
I wouldn't mind so much if it were subtle and not insidious, but it's not. A once-per-boot popup highlighting new features of Windows, with features available elsewhere as a subset, would be one thing. Instead it's "here's a time-limited 'free' copy of some game for your business computer", "here's yet another privacy-violating application that your data security team won't want anywhere near their network" and so on...
"At least with Windows & Linux you can have a choice of the hardware....."

Had a choice of hardware: Microsoft is currently imposing entirely arbitrary hardware requirements for Windows 11 (which is nothing more than an update to Windows 10).
Windows 8 was totally and utterly pathetic.
It featured a brain-dead, fundamentally broken-from-the-start shell UI that was unusable on touch devices, laptops and desktop PCs alike - and laptops and desktops were easily the dominant PC form factors, as they still are. There were absolutely no positive points to come from the UI.
The driver model also seemed broken from the start. When drivers did work they worked OK, but most didn't, and the amount of time wasted and lost through this was excessive, let alone the odd performance drops. 8.1 did improve this a lot, but it was too little too late, and the brain-dead UI was still in place.
The Windows 11 UI isn't quite as dumb, but there are obvious shades of the same conceited, blinkered "we tell you how to do things, you can't do anything else, and you definitely can't customise our vision" attitude. Half of it feels like a "me too" interface trying, and failing, to ape other OSes; the rest is still inanely disjointed, and the whole thing feels like yet more of a push towards Microsoft subscription services than any form of actual progress for the user.
There are no improvements in Windows 11 that couldn't easily have been applied to Windows 10.
It would also help if there were a US equivalent to the ICO, and if data protection in the US were a legal and enforceable requirement rather than a purely contractual one, enforced only if the other party can be bothered. Currently it's just yet more fluff with no real legal protection.
As for the US deciding that US company's overseas subsidiaries are covered by US law and don't have to care about local law...
From memory (damn, it's been a long time, so I may have this wrong), the five-letter rule refers to the number of letters considered significant in a variable name; anything beyond that was ignored.
Therefore STALIN and STALINGRAD and STALIA are all considered to be the same variable. Much fun can be had with such things if someone were evil enough to use words that sounded shorter than they were...
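The collision is easy to demonstrate. A Python sketch of a hypothetical five-significant-letter rule (the exact count varied between languages and compilers):

```python
SIGNIFICANT = 5  # hypothetical limit: only the first five letters count

def canonical(name: str) -> str:
    """The name the compiler actually sees."""
    return name[:SIGNIFICANT]

names = ["STALIN", "STALINGRAD", "STALIA"]
# All three collapse to the same variable slot, "STALI".
slots = {canonical(n) for n in names}
```

Assign to any one of them and you've silently assigned to all three - which is exactly why such names made for such good pranks.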
That is also true, and worth keeping in mind. However it's also worth noting that some people may find it slightly amusing to have certain notes read out in as dead pan a voice as possible while absolutely everyone else is trying not to laugh. I know of judges who failed to keep their laughter in...
Precisely. It's insane. There are a few components in security: something you know (a secret), something you have (a physical token or certificate), and who you are (some form of identifier).
Removing a secret component and replacing it with one of the others can never improve security. Biometrics, such as face and fingerprint, are a good replacement for an identifier establishing who you are, but they are not secret, they are not changeable, and they are relatively easily copied. Adding biometrics alongside a password improves security for the most part; adding biometrics alongside something physical as well as a password improves it further. However, replacing the secret component with biometrics or a token only reduces security. And this is before the realities of what happens when the physical component goes wrong and has to be replaced, or just bypassed because access is required.
Security is all a matter of balancing convenience vs security. When the "convenience" is all for the benefit of data collectors and not the end user it's even more insidious.
In this case there is a difference, and it signifies the enormous difference between Windows 3.0 and 3.1 when it comes to memory management: Windows 3.1 ran only in protected mode, while Windows 3.0 could still run in real mode.
A General Protection Fault is where an application tried to access (usually write to) memory which it is not permitted to access. This requires an MMU to track the memory assigned to each process.
An Unrecoverable Application Error was slightly different: it's an after-the-fact error, where one process has already written over the memory space another process was using, and that process has crapped itself as a result - likely because some part of its in-memory code was overwritten.
That's my memory of the difference anyway!
That sounds horribly accurate. There appear to be various regular stupidities in error messages:
- The system knows exactly where the error is, but the developer decided that this information is not something the user should know.
- The error message was written to describe the error, but only in a way that is useful once you already know what the error is.
- The developer couldn't be bothered to implement any error handling (because it requires thinking) and just left everything to exceptions, including all the expected errors. As a result, a user trying to open a locked file is told that the low-level FileOpenEx function "did not have the correct parameter", or something equally useful.
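A sketch of the alternative: handle the errors a user can actually cause, in the user's own terms, and let only genuine bugs surface as raw exceptions (`open_report` and its messages are invented for illustration):

```python
def open_report(path: str):
    """Open a report file, translating expected failures into user language."""
    try:
        return open(path, encoding="utf-8")
    except FileNotFoundError:
        raise SystemExit(f"Report '{path}' does not exist - check the name.")
    except PermissionError:
        raise SystemExit(f"Report '{path}' is locked or you lack access rights.")
    # Anything else is a genuine bug: let it propagate with full detail
    # so it reaches the developer, not the user.
```

The distinction is the point: expected errors (missing file, locked file) get a message in the user's terms; unexpected ones stay loud and detailed for whoever has to fix them.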
For a while now I've only installed VS on sacrificial machines or virtual machines. The damage it does to a regular installation of Windows is too much and is unfixable without wiping the OS and starting again from scratch. Running in VMs also allows me to have different versions of VS without one of them trashing another.
It's about presenting the information in a format that is usable by the person working to the schedule.
For example, overnight work starting at 22:00 and finishing the following day at 08:00. It wouldn't help them if the system output a finish time of 07:00 GMT and left them to work out that this really means 08:00 BST. Billing always had to calculate the real duration regardless of the event straddling a clock change, and legal considerations arose when the duration of the work was extended by one hour.
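The safe way to get the real duration is to do the arithmetic in UTC (or with timezone-aware datetimes) and only format in local time for display. A Python sketch using the standard `zoneinfo` module and the 2024 autumn clock change in Europe/London:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

LONDON = ZoneInfo("Europe/London")
UTC = ZoneInfo("UTC")

def shift_duration(start: datetime, end: datetime) -> timedelta:
    """True elapsed time, computed in UTC so the clock change can't lie."""
    return end.astimezone(UTC) - start.astimezone(UTC)

# Overnight shift straddling the autumn change (clocks go back 2024-10-27):
start = datetime(2024, 10, 26, 22, 0, tzinfo=LONDON)  # 22:00 BST
end = datetime(2024, 10, 27, 8, 0, tzinfo=LONDON)     # 08:00 GMT
# Wall clocks suggest ten hours; the worker was actually there for eleven.
```

Billing off the wall-clock difference would short the worker (or the customer) by exactly the hour the legal considerations are about.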