"determining how sensitive information really is and how it should be classified"
Well, of course, I could explain that to you, but you wouldn't be cleared, so I'd have to kill you.
Classic halting problem. File under <duh>.
The US Department of Defense is exploring whether or not it's worth using artificially intelligent software to suggest levels of classification for information – and control who gets access to it. A sneak peek into the military's formal request for ideas on the matter, filed back in May, and the recent responses from tech …
Odd article. So instead of educating the people who handle all this information, so that they know it should be properly handled, the preference is to automate the whole thing?
Sounds nice in theory, but what happens when people start blindly trusting the system? Perhaps right up to the point where it becomes obvious that something isn't right, yet because the system never warned them they carry on with whatever they're doing anyway.
The difference between "education" and "automation" is:
Automation costs a fortune, works most of the time, and reduces the pressure on humans. Education costs a bigger fortune, almost never works, and increases the pressure on humans.
Actually I have to disagree... Automation assumes you have put rules/filters in place to cover all potential situations, and this is where it always fails. The net effect of automation in cases like this is that 60% of your staff see no difference, but the people working on the edges in research, security, customer support etc. have to be excluded, or else they spend half their day explaining why they have triggered an alert.
My wife is a nurse and for a while was working in sexual health; on a daily basis her account was locked because 'the system' had identified her as accessing inappropriate materials/websites.
Net effect: automation reduced her effectiveness by a big margin, and it required significant investment from support to keep clearing the alerts. This tends to be the case for all 'edge' workers.
"Automation costs a fortune, works most of the time, and reduces the pressure on humans."
Interestingly, I find it's the opposite. If a task requires entirely manual labour, then there is no point applying extra pressure, since it won't get done any faster.
If a task can be made faster with automation, then the expectation is that it's always done at maximum speed.
So if five blokes with axes can fell one tree per hour, but it only takes two with chainsaws, then the power tools actually add pressure to the meatbags. One person working 10% slower means the first group still goes at 98% of full speed, while the second drops to 95%.
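The arithmetic behind that claim checks out, and can be verified in a few lines (the crew sizes are the commenter's figures; the function is just an illustrative sketch):

```python
def crew_speed(crew_size, slow_factor=0.9):
    """Fraction of full crew output when exactly one member works
    at slow_factor speed and the rest work at full speed."""
    return (crew_size - 1 + slow_factor) / crew_size

# Five axemen: one going 10% slower barely shows.
print(crew_speed(5))  # 0.98
# Two chainsaw operators: the same slowdown is felt more.
print(crew_speed(2))  # 0.95
```

The smaller the crew, the larger the share of total output any one person represents, which is the commenter's point about automation concentrating pressure.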
"Education costs a bigger fortune, almost never works, and increases the pressure on humans."
If a problem can be solved by ML/AI or by an algorithm, then it can be solved by an educated human. The fuzzier the solutions, particularly anything that involves human communication (like classifying secrets), the better humans will be and the worse machines will be.
Since there are exactly zero advances in human civilization resulting from AI, and all of them from education in its various forms, I'd suggest that education is in fact quite a lot more effective than automation for making anything new.
"The difference between "education" and "automation" is:"
@veti - I'll try to give an example of this: essentially, meatbag computers can handle classes of problems that silicon cannot.
Consider Asimov's laws of robotics. You can explain them to a person, and they can probably apply them in many situations. Of course it's impossible to fully consider the implications of your actions, but meatbags are quite happy with trying to adhere to the zeroth law without being able to fully calculate their chance of wiping out humanity.
Try to automate the three laws and you'll find yourself needing a computer the size of the universe. Even a term like "by action or inaction" means not only calculating what will happen if you do x, but also what will happen if you don't do x.
Pretty much any case where you need to understand the spirit rather than the exact meaning of something is going to be very difficult for a machine to be better at than a human.
Automation costs a fortune, works most of the time, and reduces the pressure on humans. Education costs a bigger fortune, almost never works, and increases the pressure on humans.
Urrrhhhhh.
Education works pretty well if you don't hire those guys at the bottom of the social ladder and upgrade them to responsible military duties or otherwise.
Intelligence: a non-learnable factor of success.
"So instead of educating the people..."
Yes. The implication of this is that they believe themselves incapable of doing what they require, which, on the face of it, does seem worrying.
However, if the quantity of information that needs to be managed becomes great enough, the demand for categorisation and subsequent access control will outstrip the supply of qualified people capable of doing the work, in which case an automated system does seem to be the only option.
This is still worrying, of course, but for different reasons: multiple AIs would be needed, at least one for categorisation and another for access control, both of which will need their own high-quality training sets. And ultimately none of the AIs will be flawless; that gaps will be left is guaranteed.
Ultimately, AIs do have the potential to do a better job than wetware but they still won't solve all problems, and they're very likely to introduce a few new ones.
I thought the standard practice was to mark everything secret, unless it was important, in which case it was top secret. Some of the stats on the contents of some secret documents, and the sheer number of people who had access when Manning leaked the material they had access to, were <REDACTED>.
"machine-learning technology could be used to suggest levels of classification, as well as automatically monitor and log records of who accessed files, where they were accessed, which systems were used to access the materials, if any changes were made, and whether that person really had a need to know the contents"
While I can accept that data analysis could suggest classification levels, you don't need pseudo-AI to log activity records, detect changes and flag inappropriate access. Those are things we have been doing for decades already with normal code.
This is just more "AI" bullshit to make people think things are going to work better.
AI automation is a “lever”, relying on expert maintenance, else it will eventually fail. Maintenance - e.g. rule-base formulation, training (induction, neural) - is a specialist activity requiring knowledge not only of the technology but of the domain, both of which evolve. Moral: long-term success depends on retaining sufficient wizards (in both quality and number).
This isn't really about making AI smarter, it's about making humans dumber.
Keep giving people mental crutches and they'll forget how to walk - it's already happening to a large extent - AI will just accelerate the process beyond repair. One day we will have no-one left who can train people on anything, as everyone will have forgotten how to think.
It's inevitable at this point... Judgement Day is coming :)
Usually when you see a story like this, it is in reaction to something having gone wrong. Massive changes were put into place post-Snowden. Similarly, others were implemented after different breaches and attempted breaches occurred. The DoD does not have a great record when it comes to proactively addressing threats of this nature, so it makes me wonder what happened and how much of it will we find out about.
I read an article in the Wednesday Wall Street Journal about an investment house that has a seventeen-step process to determine whether a software house is worth buying. The manuals are kept on a secure server that notes whenever someone opens one of the files: where, when, by whom, how, etc. If they attempt to download the files, the server does the same routine. If they try to print any of the files, same thing. The routine must have cost more than ten cents, because no one else seems to do that.