Talking of workarounds: has anyone ever tried...
v=spf1 ip4:0.0.0.0/0 ~all
?
A bit of history here:-
https://www.fknsrs.biz/blog/ibm-cua-basic-interface-design-guide.html
There was a time when I suspect developers thought they should use CUA to cover their arses (CUA), but MS has blown a hole through that laudable concept.
The article referred to mentions "walk-up-and-use". Now the mantra seems to be "walk-up-and-be-confused".
(cough) WordPerfect (cough)
I went through a phase of training most admin staff at a few London hospitals in how to do things properly. Nowadays nobody seems to care anymore.
Don't encourage a Microsoft product to be "smart" (why do you need a leading zero on that phone number?).
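For the record, here's why that leading zero matters - a minimal Python sketch (nothing Microsoft-specific, just the general point that phone numbers are identifiers, not quantities):

```python
# The moment software gets "smart" and treats a phone number as a number,
# the leading zero is silently dropped. Keep it as a string.
as_number = int("01234567890")   # coerced to an integer - zero gone
as_string = "01234567890"        # leading zero preserved

print(as_number)  # 1234567890
print(as_string)  # 01234567890
```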
Yes but that can also happen with Word between versions.
I used to have a customer that produced vast quantities of educational material housed in ring-binders. When the content changed, in theory only the pages with the change would be sent out (the page footer contained version info). When they moved between versions of Word they often ended up having to print everything out each time. They wouldn't have had that problem if they'd stuck to WordPerfect.
Grey is not good.
ISTR an experiment was carried out where ne'er do wells tended to congregate around cabinets like these with their laughing gas canisters. Apparently painting the cabinets pink reduced the tendency to use them as a place to hang out.
The "solution" I keep banging on about is to do a one-off move of the clocks by half an hour in spring or autumn, then leave them forever like that. Will work for anywhere there is Daylight Saving. One reason why it might not be liked as a solution is that GMT will then only exist as a virtual reference point for those of us in Blighty. Another tradition consigned to history.
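For illustration only, here's what that one-off half-hour shift would look like as a fixed-offset zone in Python - the zone name is made up, and this is a sketch of the idea, not a proposal for the tz database:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical permanent zone for Blighty: GMT shifted once by 30 minutes,
# then left alone forever - no spring/autumn transitions ever again.
PERMANENT_UK = timezone(timedelta(minutes=30), "UK+00:30")

noon_utc = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
# Local time is 12:30 with a fixed +00:30 offset, all year round.
print(noon_utc.astimezone(PERMANENT_UK))
```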
Ok there will be a big upheaval initially, but in the long run... Perhaps the apocalypse is so near that it is not worth the effort.
A previous employer of mine had massive budgets for software, but not hardware (software being the "in" thing). So we had to be really careful with hardware resources (memory, disk space, CPU performance). In the real-time* situation we were coding, performing one IF THEN test rather than two or more allowed critical time-dependent operations to be performed within strict time constraints. I suggested collapsing two tests down into one by using a 'magic number' of 31/12/1999 to compare against, on the understanding that the hardware would be upgraded prior to the millennium. This was accepted by management and was well documented (backside covered? Check).
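A hypothetical reconstruction of the trick - the real code, field names and date format are long gone, so this just illustrates collapsing two range tests into one comparison against a magic date:

```python
# Documented assumption: hardware would be upgraded before the millennium,
# so dates were compared against a single packed YYYYMMDD magic number.
MAGIC_DATE = 19991231

def in_service_window(yyyymmdd: int) -> bool:
    # Originally two comparisons (start <= d AND d <= end). With the lower
    # bound implied by the data format, one test against the magic date
    # suffices - saving one IF THEN in the time-critical path.
    return yyyymmdd <= MAGIC_DATE

print(in_service_window(19990704))  # True
print(in_service_window(20000101))  # False - the Y2K trap if never upgraded
```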
Another problem with many applications is that they use different concepts of time. We used 2-second time. This was to fit into whatever data-width we were lumbered with at the time. I forget where the baseline was set, but we regularly converted to/from 2-second time into normal time to interface with reality. So we had to concoct our own library to do these things.
*These being typical applications that could have caused big Y2K problems.
Hmm, I can't disagree with the word "can" there, but it's not the whole story by any means (no reason to downvote though).
I cast my mind back to Friends Reunited when one of my fellow classmates wrote to me and asked me if I remembered our schoolteacher's reaction to the picture I drew in response to her asking the class to draw a dinosaur. Everyone else drew a big dinosaur that filled a sheet of paper. I drew a diddy little one in the top corner and incurred her wrath accordingly. "Why did you do that, you silly boy?" I do remember the incident clearly but to this day I cannot explain why I did what I did, even though I knew I would get multiple slaps with a 12" ruler. Maybe explains why Graham ended up in a much more rewarding occupation than me.
I think that is covered by this para:-
"We know too what can happen when these doctrines collide: physical conflict."
Yes it is, but AI has the capability to incite conflict where none is evident. I believe Jim Morrison studied such phenomena and whipped audiences into riotous frenzy via his on-stage behaviour. This is now possible, on-tap at your local keyboard (pun intended).
Something I've noticed a lot which dates back to the pandemic is that people seem to have lost the ability to think in a self-critical way. Typified by the phrase "Computer Says No". Instead of people taking responsibility for their own thoughts and actions they are increasingly encouraged to delegate them to a computer. In doing so they absolve themselves of guilt. The guilty party for their motives is some nebulous being that could be considered to be a proxy for $deity, but maybe in reality a humble programmer working to a flawed spec. AI is simply an extension to this absolution process.
The reason AI is fundamentally a bad thing is that it is not wired in the same way that humans are.
Humans depend for their survival on following certain "rules". Quick searches reveal that Christianity has 10, the Quran cites 75, Buddhism has its precepts, Jews hit the jackpot with 613, etc. Breaking these rules will land you with loss of freedom, ostracism from society and in some religions, death. We know that we have to abide by these rules to achieve long-term evolutionary success that underpins all of these man-specified rules, regardless of religion.
We know too what can happen when these doctrines collide: physical conflict.
By contrast, how many of these rules does AI have? None. So if we follow advice given to us by AI systems humanity will run into trouble. Hence the alarm over the study in which an AI played out a 'wargame' scenario to a 'scorched earth' outcome.
AI does not care what rules are in place. It might advise breaking rules because it has access to better probabilistic analysis of the overall outcome when comparing the scenarios of following or breaking rules. Humanity doesn't tend to break rules in that way, as it lacks the benefit of an overall strategy. Humans are tempted to break rules for spontaneous reasons, and it is fear that acts as a deterrent to those actions. AI does things by cold, hard analysis only. Furthermore, AI will exploit the fact that humanity will follow rules, and will know how humans will ordinarily respond.
ISTR there was some streaming service which had payment vulnerabilities whereby people could easily bypass payment by following instructions on the internet.
I believe that its revenue stream was reinstated by its provider by getting users to download a series of updates which looked innocuous in themselves, but were in fact part of a grand plan to seize control back when everyone had installed the updates.
Can anyone here remember the details/provide a useful link?
Maybe the solution is to launch denial of service attacks on AI machines. This could take the form of false statements such as 1 + 1 = 3. Unfortunately it is going to take a lot of ingenuity to get an AI to accept something it already "knows" to be false, so a "virtual story" would need to be constructed which uses considerable resources and ultimately has no substance. Arguably, think of the idea behind Hesse's Glass Bead Game as a suitable starting point, but with no linkages to reality which the AI system can latch onto for concrete reference purposes.
Problem is that The Next Big Thing is for AI to emulate the way the Internet works i.e., to mesh AI machines together in such a way that they are resilient to disruption.
Reading the plot of Colossus (see 1st post), people will be tempted to connect AI machines together to see what happens, but become overwhelmed by the consequences.
This needs to be nipped in the bud right now, for humanity's sake.
A possible issue with bullet point flow is that there could be scope for doing something in a roundabout way between two steps. For example, if you were to perform a sort, and don't specify how it should do it, an inappropriate sort algorithm could be used. Solution there is to add an extra step into the bullet point flow. So really this is not much different than doing a flowchart and fleshing out the details when needed.
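To illustrate the ambiguity: two perfectly "correct" implementations of "sort the records" can disagree when the spec doesn't say how ties are handled (Python used here purely for illustration):

```python
# Both results below satisfy 'sorted by amount', yet they order the
# tied 100s differently - the spec needs the extra step spelt out.
invoices = [("B", 100), ("A", 100), ("C", 50)]

# Stable sort: tied amounts keep their original input order.
by_amount = sorted(invoices, key=lambda r: r[1])
# Tie-break explicitly specified: tied amounts ordered by name.
by_amount_name = sorted(invoices, key=lambda r: (r[1], r[0]))

print(by_amount)       # [('C', 50), ('B', 100), ('A', 100)]
print(by_amount_name)  # [('C', 50), ('A', 100), ('B', 100)]
```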
Applications Programming is, I feel, often neglected as far as its role in InfoSec is concerned. How many programmers are taught to program without considering what happens when spurious parameters are input into a program? Luckily the lecturer who taught us programming on my degree course got us to focus on validity checks as much as the algorithms for carrying out the assignments we were set.
It would be interesting to hear what other commentards' experiences are of this in their formal programming training.
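For what it's worth, a minimal sketch of the "validate before you compute" habit that lecturer drummed into us - the function, parameter names and ranges are all invented for illustration:

```python
def average(scores):
    # Check for spurious parameters up front, before the algorithm runs.
    if not isinstance(scores, (list, tuple)) or not scores:
        raise ValueError("scores must be a non-empty list of numbers")
    if any(isinstance(s, bool) or not isinstance(s, (int, float)) for s in scores):
        raise ValueError("every score must be numeric")
    if any(s < 0 or s > 100 for s in scores):
        raise ValueError("scores must be in the range 0-100")
    return sum(scores) / len(scores)

print(average([70, 85, 90]))  # 81.66...
```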
With 158 comments on here at time of writing, there would appear to be some dispute as to whether Windows 11 is fulfilling its role as a grown-up Operating System.
This article gives some insight as to the role of an Operating System:-
https://en.wikipedia.org/wiki/Operating_system
PS I'm not a downvoter of any of your comments.
I've spent my entire life learning new things. Many of the achievements in my work career have been involved with development of new things.
Unfortunately technology is at the point where the "learning" process makes me wonder whether I am suffering from dementia, because the "improvements" generally put forward as milestones of progress look like retrogressions to my mind. In my view technology should be used to lay down solid foundations from which industry, commerce and education can progress. Windows' role should be to provide such a foundation. It is not; it is trying to be something which it is not, and if history is a guide, never will be.
As I approach my 70s I feel it far better for me to exercise my brain learning to play the piano... however painful that may be for my neighbours.
The reason why WordPerfect was so successful, sorry, one of the reasons WordPerfect was so successful, was that it introduced the user to a new feature when that user was ready to use that feature. If a letter needed to be written that didn't need fancy features, why bombard the user and confuse them?
That reminds me of...
https://medium.com/jumpstart-your-dream-life/empty-your-cup-a-zen-proverb-on-opening-yourself-to-new-ideas-10e8c9545c7b
On this particular day, a scholar came to visit the master for advice. “I have come to ask you to teach me about Zen,” the scholar said.
Soon, it became obvious that the scholar was full of his own opinions and knowledge. He interrupted the master repeatedly with his own stories and failed to listen to what the master had to say. The master calmly suggested that they should have tea.
So the master poured his guest a cup. The cup was filled, yet he kept pouring until the cup overflowed onto the table, onto the floor, and finally onto the scholar’s robes. The scholar cried “Stop! The cup is full already. Can’t you see?”
“Exactly,” the Zen master replied with a smile. “You are like this cup — so full of ideas that nothing more will fit in. Come back to me with an empty cup.”