Lettuce, knife, tomato. Hold the sandwich cream, if we believe the last couple of weeks' news articles.
I think most people could write that software without AI being necessary.
A team of scientists at Universität Bonn in Germany has developed not-at-all-creepy software able to predict the future. A few minutes of it, at least. However, before heading out for a lottery ticket, potential users should be aware that the software is currently at its best when predicting what a chef might be about to do or …
This AI must be massively complicated. I mean, tomato... = tomato in salad?
I'm too unwashed to even begin to read this AI article to deduce if this is an algorithm in a net, or some onions in the net.
It's a bit like those articles stating "man invents space ship" when really, they have written down on a bit of paper the calculations of the rocket equation. Both are noble feats. But one is not the other.
Let them perfect this a bit more and security agencies will be begging for it. Probably airport security, where the AI will see someone reaching into their jacket to scratch an itch and might just assume "reaching for a gun". Hopefully, the AI won't be equipped with a gun of its own.
Worse: reaching for a scratch *is* partially suspect. The problem with AI is that if we don't know a model's entire makeup (we don't, as we don't have the computational power/time mix to reverse-analyse the data), then every single one might have a single pixel (or data point) of failure!
Just like those "turtle/gun" or "cat" picture tricks played on AIs (and a few others), which used a single pixel/dot/shape to totally fool the AI with very high "accuracy".
It's not the error where the AI was close. It's not the AI that sees a spot of dust on your shoulder and thinks it is a tactical nuke target. It's the person stupid enough to allow the AI to act without human supervision/intervention and respond to the faulty analysis.
(But we passed that point long ago, the point where humans check the calculations we do on machines/computers. We assume they are right until they break and we lose the system/car/airplane/internet/password database, etc.)
I'm mostly wondering how this program managed to look at the part of the video it was provided and figure out what a person is doing in it. Sure, it can be easy enough to look at a frame and say "There are carrots in that bowl", but it can be very difficult for a program to look at arbitrary videos and decide whether I'm chopping or dicing those carrots and what I do with them next. So many details are unimportant, such as what kind of knife I'm using and how fast I'm chopping, yet those details will make up a lot of the on-screen activity. For example, consider a situation where I'm going to make a salad and have started a video stream to this AI. I am currently standing in front of two cutting boards, one containing spinach and the other cucumber. The video is instantly recognizable to a person, and probably to the AI as well. But what if I have limited counter space, so the cucumber cutting board is behind me on a different counter? Am I going to use the cucumber? That's a typical salad-making move, but the camera doesn't know. I may just have placed my vegetables on that counter and moved the spinach over because that's what I'm using now.
Therefore, I can think of three possibilities for how this AI does this, which they at least didn't explain in the article and I'm kind of tired so I'm not looking for extra explanations right now:
1. The image recognition system was provided information and has managed a great training set that has actually allowed it to automatically determine, within limits, what culinary task I'm doing. This would be revolutionary news, and would massively overshadow the prediction element, because it would be a success while the prediction is at best borderline noteworthy. So I'm assuming that didn't happen.
2. The training set was made very similar (same kitchen, camera position, etc.) and all the test videos were also shot there, so the algorithm would fail under any standard conditions. In that case, they are overestimating the usefulness of their code.
3. The researchers labeled their videos for the convenience of their algorithm, in which case the prediction algorithm is being based on alternate data. This is similar to the time when Google tried to predict cancer in patients and forgot to take out the record that identified people as being treated at "[name] cancer center", thus getting a program that looked great while being entirely useless. If this is the case, this experiment is a major failure.
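The leakage failure mode in possibility 3 is easy to reproduce in miniature. In this toy sketch (the `clinic` field and all records are invented for illustration, not from any real dataset), a "model" that just reads the leaked field scores perfectly in evaluation while learning nothing about the actual task:

```python
# Toy illustration of label leakage: the "clinic" field encodes the
# label, so a rule that just reads it looks perfect when evaluated,
# while telling us nothing about the patients themselves.
records = [
    {"age": 54, "clinic": "oncology center", "has_cancer": True},
    {"age": 61, "clinic": "oncology center", "has_cancer": True},
    {"age": 47, "clinic": "general practice", "has_cancer": False},
    {"age": 38, "clinic": "general practice", "has_cancer": False},
]

def leaky_predict(record):
    # "Model" that only exploits the leaked field.
    return "oncology" in record["clinic"] or "cancer" in record["clinic"]

accuracy = sum(leaky_predict(r) == r["has_cancer"] for r in records) / len(records)
print(accuracy)  # 1.0 on this data, useless on anything without the leak
```

If the salad videos were labeled in a way the prediction stage could see, the reported accuracy would be inflated in exactly this fashion.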
The image recognition system was provided information and has managed a great training set that has actually allowed it to automatically determine, within limits, what culinary task I'm doing. This would be revolutionary news...
Er ... no, it wouldn't. They had a large labeled training set, which trained the past-action RNN to label, with some accuracy, new inputs (videos). That's the sort of thing we do with RNNs all the time. Why do you think that's revolutionary?
Here is another example of using NNs to label video. It's a CNN-RNN network instead of an RNN-RNN one, and it's tagging input videos rather than predicting future content in the input video, but conceptually it's not much different.
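The CNN-RNN idea can be sketched in miniature. Everything below is a hand-rolled illustration, not any real system's code: a stand-in "feature extractor" summarises each frame, an Elman-style recurrent update folds the per-frame features into a running hidden state, and the final state is scored against each label. The weights, labels, and two-frame "videos" are all made up so the example runs deterministically.

```python
import math

def extract_features(frame):
    # Stand-in for a CNN: summarise a "frame" (a list of mean-centred
    # pixel values) by its mean and its max. Real systems use learned
    # convolutional features instead.
    return [sum(frame) / len(frame), max(frame)]

def rnn_step(hidden, feats, w_h, w_x):
    # One Elman-style recurrent update: h' = tanh(W_h h + W_x x).
    return [
        math.tanh(
            sum(w_h[i][j] * hidden[j] for j in range(len(hidden)))
            + sum(w_x[i][k] * feats[k] for k in range(len(feats)))
        )
        for i in range(len(hidden))
    ]

def classify_video(frames, w_h, w_x, w_out, labels):
    # Fold every frame into a running hidden state, then score the
    # final state against each label.
    hidden = [0.0] * len(w_h)
    for frame in frames:
        hidden = rnn_step(hidden, extract_features(frame), w_h, w_x)
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return labels[scores.index(max(scores))]

# Hand-picked weights and made-up "videos" (two frames each) so that
# mostly-positive frames land on one label and mostly-negative on the other.
W_H = [[0.5, 0.0], [0.0, 0.5]]
W_X = [[1.0, 0.0], [0.0, 1.0]]
W_OUT = [[1.0, 1.0], [-1.0, -1.0]]
LABELS = ["chopping", "stirring"]

bright = [[0.9, 0.8, 1.0], [0.9, 0.8, 1.0]]
dark = [[-0.4, -0.5, -0.3], [-0.4, -0.5, -0.3]]
print(classify_video(bright, W_H, W_X, W_OUT, LABELS))  # "chopping"
print(classify_video(dark, W_H, W_X, W_OUT, LABELS))    # "stirring"
```

The real networks learn their weights from labeled video rather than having them hand-picked, but the shape of the computation, per-frame features folded through a recurrent state and classified, is the same.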
Or see this post on continuous classification with TensorFlow-based RNNs. (RNNs are used for this purpose rather than CNNs because they're more amenable to learning vector sequences.)
So we have substantial prior art showing we can use NNs to label video sequences. Then we output a series of labels and use another model (this team used an RNN, but for something like this even an HMM might do pretty well) to predict the next label. Getting decent accuracy might be tough, but the basic structure is well understood.
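That two-stage structure can be sketched with the simplest possible second stage: a first-order Markov model over action labels. The kitchen-action sequences below are invented stand-ins for the output of the video-labelling stage, not data from the paper:

```python
from collections import Counter, defaultdict

def train_transitions(sequences):
    # Count how often each action follows each other action
    # across the training sequences of labels.
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, action):
    # Most frequently observed successor of `action`;
    # None if the action was never seen as a predecessor.
    if action not in counts:
        return None
    return counts[action].most_common(1)[0][0]

# Invented label sequences, standing in for the output of the
# video-labelling stage.
training = [
    ["wash", "chop", "chop", "mix", "serve"],
    ["wash", "peel", "chop", "mix", "serve"],
    ["wash", "chop", "mix", "dress", "serve"],
]
model = train_transitions(training)
print(predict_next(model, "chop"))  # "mix": the most common successor
```

A full HMM or RNN conditions on more history than this single-step table, but "observe labels, predict the likeliest next label" is the same basic move.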
"1,712 videos of 52 different actors" ... "predictive accuracy at 40 per cent"
With development, they could show the AI the pilot of a new TV show, and get it to write the entire series. The only downside might be the predominance of salad-preparation scenes in the next Game of Thrones.
Or did they mean "person told to prepare salad", not "person paid to lie to large audiences"?
I'll be more impressed when we are able to breed an animal that actually wants to be eaten, and is very capable of saying so with the utmost clarity and distinction.
Meanwhile, that poor head of iceberg lettuce's final thought, before being torn apart leaf by leaf, was: 'Oh no, not again!'
One day there'll come a point where we'll no longer need to exist. AI will plan our lives into the future and CGI can create the video feed. The human race can commit mass suicide any way it wishes and simply leave the simulated human race to "live out" its days until the sun burns out. I've no doubt there are a few sci-fi books detailing such tales.
In too many places famine is still "the flavour of the day". While we're using brilliant minds and expensive resources to watch a salad and try to guess which vegetable will be sliced next, some people don't have the tiniest leaf of grass or anything else left to eat.
Wouldn't it be nice if AI could solve that?
Food availability is widely held to be a political problem,1 and it will likely be some time before AI is of much help in solving those.
Though having said that, I'll note that many political problems are at least in part a matter of making persuasive arguments,2 and work in computational rhetoric is progressing. Whether that results in solutions to your or my liking is an open question.
1Though I've seen arguments to the contrary.
2Indeed one might characterize most of the political problems in the US this way.
"Food availability is widely held to be a political problem,1 and it will likely be some time before AI is of much help in solving those."
I think AI might get to the point where it can fully replace a politician long before it can fully replace any ordinary human. I've seen the documentary Idiocracy; it looks like we are trying to meet AI halfway.
I've got not one, but two great reasons why your idea sucks...
1) Poor people who can't even find, as you put it, a "single blade of grass" to eat are, by proxy, hardly going to be able to afford a nice new shiny AI to teach them something the rest of the world already managed to do several thousand years ago (i.e. moving on from a hunter-gatherer to an agricultural society).
2) It's the bunch of guilty white lib-u-tards who spend their days dreaming of a Utopia to come, while the Earth is stuffed up with seven-plus BILLION souls and counting. Where is this net surplus coming from, then, if most of the well-established nations of the Earth are all in birth-rate decline (i.e. SE Asia, Europe, and North America)?
2A) And where is your "love" for your fellow homeless people littering the streets of any given city? Do they not warrant a thought from people like you, because some asshat like Geldof has decided not to give them as good a press?
Oh yes how nice it could all be....
I would utter something about how awful such places are, but I'll skip it and just add that there's no point trying to save everyone at your own expense (i.e. see the current migrant crisis, which, if left to its own devices, will eventually bring down the European Union as a whole*).
*Not that this is a bad thing. Just not the way I would rather see it end!