Amazon's sexist AI recruiter, Nvidia gets busy, Waymo cars rack up 10 million road miles

Hello, here's a quick roundup of what's been happening in AI outside of the headlines. Machine learning is increasingly being applied to new domains, and human resources is one of them. It's a controversial area, and it looks like Amazon ran into the problem of creating biased models. Nvidia announced new software integrations for …

  1. Anonymous Coward
    Anonymous Coward

    A small point

    In the Facebook era does everything have to be touchy-feely? I don't want to love a taxi service. I just want it to be reliable and cost-effective. It isn't a religion.

  2. Zog_but_not_the_first
    Trollface

    Shirley...

    " it can still penalize female candidates for including things like “women’s netball team” on their resumes"

    Not the Coventry University one?

  3. Anonymous Coward
    Anonymous Coward

    Amazon’s sexist AI

    I'm currently reading "The AI Delusion" by Gary Smith and am even less surprised by the outcome of Amazon's program than I would have been previously.

    1. getHandle

      Re: Amazon’s sexist AI

      Lol - Google "The AI Delusion" and the first two hits are sponsored ads for Accenture and Microsoft.

      1. Glen 1

        Re: Amazon’s sexist AI

        The article (the one doing the rounds on FB) never states what measure was used for 'fitness' in the AI sense. For it to work, they would have needed data comparing people's performance *after* they were hired.

        Instead, it sounds like they made a bot that could replace HR droids (and thus be gamed by keywords), with all the inherent biases that entails.

        1. Anonymous Coward
          Anonymous Coward

          Re: Amazon’s sexist AI

          Trouble is, these days it's career suicide to do any research that might be seen to produce sexist (or racist) results. If you compared performance after people were hired and it produced politically incorrect results, you'd soon find yourself without a job.

          I've heard of this coming up other times with machine learning - targeted advertising, I think it was. Even if you exclude data on race and sex, the AI gloms onto other factors that correlate closely with them.

          And like it or not, this is inevitable, because all sorts of patterns of lifestyle and social behaviour correlate closely with race and sex, for good or for bad. They are big factors, like it or not. The very existence of other factors closely correlated with race or sex demonstrates it. In the example I came across, by the time they'd eliminated every factor that could be associated with race or sex, the effectiveness of their targeting had plummeted.
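
          A minimal sketch of that effect, on entirely synthetic data (the features and correlation strength are invented for illustration): drop the protected attribute, keep a correlated proxy, and the bias survives.

          ```python
          # Synthetic illustration only: bias leaks through a correlated proxy
          # even when the protected attribute itself is excluded from training.
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          n = 10_000
          protected = rng.integers(0, 2, n)          # hidden attribute, never trained on
          proxy = protected ^ (rng.random(n) < 0.1)  # stand-in ~90% correlated with it
          neutral = rng.random(n)                    # genuinely unrelated feature
          # Historical labels were biased on the protected attribute itself
          y = (0.6 * protected + 0.4 * neutral + rng.normal(0, 0.1, n)) > 0.5

          X = np.column_stack([proxy, neutral])      # protected column deliberately omitted
          model = LogisticRegression().fit(X, y)

          for g in (0, 1):
              rate = model.predict(X[protected == g]).mean()
              print(f"predicted positive rate for group {g}: {rate:.2f}")
          # The gap between the two rates is the original bias, rebuilt via the proxy.
          ```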

          The irony of it all is that, of course, the machine learning has not a clue what black or white, male or female etc. means; it's just throwing out matches that reflect society as it really is. So if one considers this a problem, then the need is to change society, not the AI.

          1. Glen 1

            Re: Amazon’s sexist AI

            >>it's just throwing out matches that reflect society as it really is. So if one considers this a problem, then the need is to change society, not the AI.

            We're working on it.

            As for politically correct results, if you have concrete, objective metrics, that should assuage HR.

            If you *don't* then how do you do a performance review?

            It is politically incorrect to imply that a certain race/gender is inherently better or worse at something. To paraphrase a study relating to gender: "The differences *within* a group of men/women are bigger than the differences *between* the groups."

            I'd want to be judged on how likely it is that I could do the job well, rather than have a droid (either AI or HR) decide that my not using "executed" on a CV (because I'm an adult) has an impact on getting an interview.

            The thing about anonymised applications is that you can't know if you're discounting a particular demographic. Have a look at the GDS application criteria: they flat out tell you to remove references to sex, sexuality, age or religion from the "Tell us about your skills" section of the application (although some of that could be inferred).

            Also, if you're telling the AI to select people similar to employee A (a hypothetical straight white cisgendered male), then you're going to get selectees from the same demographic.

            As other commenters have said - GIGO

            1. AMBxx Silver badge

              Re: Amazon’s sexist AI

              The other problem is causality. Do people of certain backgrounds do badly because of their background, or because of how they're treated by other employees who don't share it?

              I wrote an ML model for a customer as part of some personal training. For their recruitment, I could reduce the percentage of employees leaving in the first year by 50% with no false positives. If you could accept a few false positives, the percentage was way higher.

              It was a purely black-box approach (regression failed), so it was impossible to tell what was triggering the decision. Even I had enough sense never to use the model, though, as the inputs were religion, race, marital status, sex, age and smoking status (they were all we had).
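
              For flavour, a hypothetical reconstruction of that kind of model on synthetic data - not the commenter's actual system, and the label correlations are invented. Permutation importance can at least hint at what a black box keys on, though it still gives correlation, not causality.

              ```python
              # Hypothetical sketch: a black-box attrition model on exactly the
              # inputs one should never deploy on. All data is synthetic.
              import numpy as np
              import pandas as pd
              from sklearn.ensemble import RandomForestClassifier
              from sklearn.inspection import permutation_importance

              rng = np.random.default_rng(42)
              n = 2_000
              X = pd.DataFrame({
                  "religion":       rng.integers(0, 5, n),
                  "race":           rng.integers(0, 4, n),
                  "marital_status": rng.integers(0, 3, n),
                  "sex":            rng.integers(0, 2, n),
                  "age":            rng.integers(18, 65, n),
                  "smoker":         rng.integers(0, 2, n),
              })
              # Pretend "left within the first year" tracks two of the inputs
              y = ((X["age"] < 25) | (X["smoker"] == 1)).astype(int)

              clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
              imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
              for name, score in sorted(zip(X.columns, imp.importances_mean),
                                        key=lambda t: -t[1]):
                  print(f"{name:15s} {score:.3f}")   # age and smoker should top the list
              ```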

            2. Anonymous Coward
              Anonymous Coward

              "The differences *within* a group of men/women are bigger than the differences *between* the groups"

              On the one hand that's true, but on the other hand it's seriously misleading. A normal distribution curve gets very wide indeed if you include enough standard deviations. If you need to select exceptionally tall people for some role, there will be very few women in the group, but there may be some. If you need to select exceptionally smart people for some role, there will be few non-graduates in the group, but there will be some.
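
              A back-of-envelope sketch of that tail effect, using assumed height figures (illustrative numbers, not from the article): a modest gap in the means becomes an enormous gap in the extreme tail.

              ```python
              # Tail-of-the-bell-curve illustration with assumed height figures (cm).
              from scipy.stats import norm

              men_mean, men_sd = 175.0, 7.0        # assumed
              women_mean, women_sd = 162.0, 6.0    # assumed
              cutoff = 190.0                       # "exceptionally tall", assumed

              p_men = norm.sf(cutoff, men_mean, men_sd)
              p_women = norm.sf(cutoff, women_mean, women_sd)
              print(f"men above cutoff:   {p_men:.4%}")    # ~1.6%
              print(f"women above cutoff: {p_women:.6%}")  # tiny, but not zero
              print(f"ratio: {p_men / p_women:,.0f} : 1")  # roughly ten thousand to one
              ```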

              CV sifting is the art of the possible. If you sift on *any* factor, you will be discarding some people who would be excellent at the job but don't fit your sort factors. But if you don't sift, you end up interviewing 200 candidates, and for the vast majority you are wasting both your time and theirs. I know from experience that I am pretty hopeless at picking out the best employees from CV and interview: I have interviewed people who seemed great on paper and in the room, and they turned out to be a disaster. I've also observed colleagues with a fraction of my technical competence who seemed to have a far better strike rate in picking out good people.

              But let's face it, all CVs are garbage in. They tell you nearly nothing about how good a fit a candidate is going to be in *your* organisation. But what choice have you got, unless you wish to confine recruiting to people personally known to your current staff - nepotism, in other words?

              The thing about an AI-based system is that it has no prejudices. It picks the people who are statistically likely to be the best fit. It also discards the edges of the bell curves. Now, at the moment we have one set of factors which it is politically incorrect to select against, and another set which is not. Some of those factors have a very high correlation with successful selection. Still, wait a few years and no doubt the fashions of what's politically incorrect will change. They have in the past and will again.

              But an interesting side effect of all the politically correct stuff is that you get more and more bound by the rules and by demonstrating that the right process was followed. Which means that really left-field people who don't fit the template are actually even less likely to get selected... Of course it was always that way, but I suspect it will get worse.

            3. Anonymous Coward
              Anonymous Coward

              Re: Amazon’s sexist AI

              "As for politically correct results, if you have concrete, objective metrics, that should assuage HR."

              You have never truly dealt with HR, have you?

          2. Alan Brown Silver badge

            Re: Amazon’s sexist AI

            "Trouble is these days its career suicide to do any research that might be seen to produce sexist (or racist) results."

            Except that....

            "The irony of it all is that of course the machine learning has not a clue what black or white, male or female etc means, its just throwing out matches that reflect society as it really is. So if one considers this is a problem, then the need is to change society, not the AI."

            More precisely, these AI "learning" algorithms are taught using data containing existing human biases, and end up locking them in, in a "computer says no" way (where people accept the result unquestioningly, because "computers are never wrong").

            It doesn't matter that illegal drug use is the same among poor black and rich white people - with enforcement, prosecution and conviction rates all being biased against poor black people, the AI perpetuates that kind of enforcement.

            What these AI results actually show is the inherent biases of what's feeding into them and therefore in this case the inherent biases of the existing selection processes used by HR dimwits.

            Once you realise that all these AI stories are a result of "garbage in, garbage out", you also realise there's a lot more to them than initially meets the eye. It's all very well pointing and laughing at how the stupid computer is producing sexist/racist results, but the harsh reality is that the stupid computer is merely doing exactly what is _already_ being done, just somewhat faster and without someone looking at things along the way and saying "hmm, that seems a little _too_ biased, we'd better fudge things a bit so we don't get accused of racism/sexism".

            (IE: it's peeling off the veneer and exposing the ugly reality of the assumptions of the selection processes)

  4. Anonymous Coward
    Anonymous Coward

    Trash data in, Trash results out...

    A very simple rule for any ML project...

    1. Alan J. Wylie

      Re: Trash data in, Trash results out...

      The term "garbage in, garbage out" was coined in 1957.

  5. Dabbb

    I call BS on Waymo

    Why? Because of Google's neural-network training scheme, also known as reCAPTCHA.

    If you've had to use it recently, you'll have noticed that the dominant subjects there are cars, buses, bicycles, motorbikes, traffic lights and pedestrian crossings.

    Sure, you can drive millions of miles in a simulator or on highways, but if, after years of development, you have only just started teaching your car to recognise such basic things as pedestrian crossings and traffic lights, you most certainly don't have a useful self-driving product, no matter how much BS your PR department offloads onto gullible journos.

    1. Z80

      Re: I call BS on Waymo

      https://xkcd.com/1897/

  6. vtcodger Silver badge

    I'm not so worried about Waymo's customers

    Unlike some other companies, Waymo's approach to autonomous vehicles seems serious and responsible. I wouldn't be surprised if their customers are every bit as safe in a Waymo-driven car as in a car driven by the average human driver. Maybe more so. What I am concerned about is collateral damage to pedestrians, pets and objects in situations that don't quite fit the actual and simulated situations that Waymo has tested. I doubt a really comprehensive test suite is possible, no matter how sincere and skilled Waymo's testers are.

    1. Alan Brown Silver badge

      Re: I'm not so worried about Waymo's customers

      "What I am concerned about is collateral damage to pedestrians, pets, objects is situations that don't quite fit the actual and simulated situations that Waymo has tested."

      Given one of the wackier real-world situations that one of the Google cars encountered (a group of ducks running around on the road, chased by a person on a motorised wheelchair), I think the "collateral damage" will be minimal to none (the car stopped and waited for them all to get out of the way).

      Waymo seems to get that "streets are for people", which is likely to become ever more the catchphrase as living-streets initiatives take root worldwide and fuel prices climb rapidly.

  7. cd

    Having driven around Phoenix, I can report that Waymobiles drive like catatonic grandmothers.

    Artificial Intelligence is interesting to people who don't have real intelligence.

    1. Anonymous Coward
      Anonymous Coward

      "Having driven around Phoenix, Waymobiles drive like catatonic grandmothers."

      So they're up to rural Kentucky standards already. Impressive. If they can also brake suddenly just before a junction and turn left without indicating they'll be covering Indiana as well.

  8. SVV

    AI Recruitment

    Tech people should have learned long ago that CV-scanning software based on looking for keywords is a woeful way of recruiting intelligent people to often complex and multi-disciplinary roles.

    If you simply view IT staff as easily replaceable, interchangeable parts, good luck with the mediocrity or even failure that awaits you. If, on the other hand, you take the time to actually read the application emails and CVs you receive, you should be able to spot the right candidates easily - including the ones with skills or experience you never included in the job spec but which grab your attention and make you interested. That's more than your AI system is ever going to find for you, because AI deals well only with the expected, not the unexpected, just like all other computer programs.

    1. aberglas

      Re: AI Recruitment

      Your dislike of AI-driven resume scanning assumes that HR people reading a resume would do a better job. The words are just gobbledygook to them anyway.

      Similar systems are used to mark English essays. Sure, they can be spoofed with bullshit, but they normally do a much better job than expert human markers. Not because the AI is any good, but because the human markers are so bad: faced with a pile of essays to mark, humans just scan a few lines here and there, while the AI at least scans the whole essay.

      Always include the actual job ad at the bottom of any resume. That way you will have all the keywords in the ad.
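
      That trick works precisely because a naive keyword sifter only counts overlap. A toy sketch of such a scorer (entirely invented, not any real vendor's product):

      ```python
      # Toy keyword scorer of the sort being gamed: score is the fraction of
      # the ad's terms found anywhere in the CV. Pasting the ad maxes it out.
      import re

      def keyword_score(cv_text: str, job_ad: str) -> float:
          words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
          ad_terms = words(job_ad) - {"the", "and", "a", "of", "to", "in", "for"}
          return len(ad_terms & words(cv_text)) / len(ad_terms)

      ad = "Senior Python developer: Django, PostgreSQL, CI/CD, agile delivery"
      honest_cv = "Ten years building Django services backed by PostgreSQL."
      gamed_cv = honest_cv + " " + ad   # the trick: append the ad itself

      print(keyword_score(honest_cv, ad))  # partial match (~0.22)
      print(keyword_score(gamed_cv, ad))   # 1.0 - a perfect score
      ```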

      1. AMBxx Silver badge
        Facepalm

        Re: AI Recruitment

        You should try talking to some contractor agencies about jobs in IT. My dog could do a better job.

      2. Teiwaz

        Re: AI Recruitment

        Your dislike of AI-driven resume scanning assumes that HR people reading a resume would do a better job. The words are just gobbledygook to them anyway.

        At least HR people have the potential to be adaptive.

        All an AI can do is learn from the accepted sample it already has. If it learns that the currently employed are an acceptable or preferred baseline, and the records show that a significant proportion of employees are smokers who do the swinger scene, that too becomes a preference for employment.

        The only thing AI has over humans is immunity to boredom from repetitive tasks. An AI cannot possibly recognise a well-written creative essay, but a human marker is probably more likely to miss one due to an overloaded workload.

    2. Alan Brown Silver badge

      Re: AI Recruitment

      "Tech people should have learned long ago that CV scanning software based on looking for keywords is a woeful way of recruiting intelligent people to often complex and multi disciplined roles."

      Most tech people know that.

      Most manglement don't. It's "shiny shiny" magic stuff.

  9. aberglas

    Add Sex and Race Normalization to the AI

    You run your "AI" (which is basically just doing crude statistics). It comes up with some rules. You then normalize those results by sex. So if it turned out that one gender seemed weaker than the other, you just add a post-AI normalization to bring the numbers for the weaker gender up, and call that the normalized AI, which will never show any bias, because it has been normalized.

    This is essentially what we do by hand anyway, when there are quotas for one gender, race etc.
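
    A minimal sketch of that post-hoc normalization step, assuming a simple per-group z-score rescaling (scores and groups invented for illustration):

    ```python
    # Sketch of per-group normalization: rescale each group's raw scores to
    # the same mean and spread before ranking. All values are invented.
    import numpy as np

    def normalize_by_group(scores, groups):
        out = np.empty_like(scores, dtype=float)
        for g in np.unique(groups):
            m = groups == g
            out[m] = (scores[m] - scores[m].mean()) / scores[m].std()
        return out

    scores = np.array([72.0, 68.0, 90.0, 61.0, 77.0, 83.0])
    groups = np.array(["A", "A", "A", "B", "B", "B"])  # e.g. two genders
    print(normalize_by_group(scores, groups).round(2))
    # Candidates are now ranked relative to their own group's distribution,
    # so neither group dominates the top of the list on raw score alone.
    ```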

    1. AMBxx Silver badge

      Re: Add Sex and Race Normalization to the AI

      Doesn't work. You then adjust for religion. Then social background. Then sexuality. Then education (those nasty private school-educated).

      Eventually you get to the point where you might as well just draw them out of a hat.

      1. Korev Silver badge
        Terminator

        Re: Add Sex and Race Normalization to the AI

        Eventually you get to the point where you might as well just draw them out of a hat.

        I've been involved in recruiting a few people over the years; generally the process is so long and painful that I wonder whether just employing a random person, accepting the chance they might be hopeless, "costs" less than doing it "properly".

  10. Nigel Sedgwick

    And I will Drive 10 Million Miles

    In learning to drive (in the UK), I probably drove no more than 3,000 miles (50 lessons of 30 miles each, and as much again with parental supervision). On top of that, I had by then probably been an observing passenger for 3 to 10 times as many miles. Waymo, in its 10 million miles, has done not less than 300 times as much.

    After decades, my miles driven count is probably up around 400,000 - plus another lesser but similar amount as observing passenger. One tenth as much as Waymo.

    Does that give us confidence, or the opposite? In human drivers like me? In Waymo?

    So far (and assuming the USA fatality rate of 7 persons per billion km driven), I have no evidence to claim I am better or worse than the average kill rate (for 400,000 miles) of 0.0045 other people.

    In the linked article under the byline of John Krafcik, Waymo CEO, some claims are made.

    "Our self-driving vehicles just crossed 10 million miles driven on public roads." The thing that gets me about this sort of claim is that it makes no allowance for the number of different roads driven. It's clearly not 10 million miles of different road. Nor is it the same 1 mile driven 10 million times - which we all surely think is less useful 'experience'. Am I alone in thinking the difference matters - and that the raw claim is thus overstatement likely to mislead Joe/Jo Public (and his/her political representatives).

    "By the end of the month, we’ll cross 7 billion miles driven in our virtual world (that’s 10 million miles every single day)." Well, that's enough miles for an average real-world Joe/Jo Public to kill 78 people, and seriously injure many more. How many virtual deaths, Surely not zero? How many virtual serious injuries; how many virtual crashes (USA: wrecks) occurred with Waymo driving? There must have been some where the other virtual human driver was at fault. Or did virtual evaluation fall short of measuring those numbers? And/or reporting them to us? At least after normalisation for the real-world occurrence of the driving conditions.

    "Today, our vehicles are fully self-driving, around the clock, in a territory within the Metro Phoenix area. Now we’re working to master even more driving capabilities so our vehicles can drive even more places." Am I 'unkind' in thinking that every road driven by Waymo unsupervised within that area has been driven (recorded, computer analysed, and the rest) by human drivers - many times over. In many ways that is good; it's a good way to get started. But it's not the same word "driving" as is commonly understood when applied to humans -- that would be like first sitting and observing while one's driving instructor shows one how to do it on that very same road, several times over.

    "Today, our cars are designed to take the safest route, even if that means adding a few minutes to your trip." Again, 'unkind' thoughts creep into my mind. I know of real-world people who actually avoided all motorways, all UK right turns (USA left turns), all roads not previously experienced. No such policies inspire confidence in the drivers! What does driver-Waymo do with unexpectedly less-safe routes: especially with all that 'experience' of not driving similar roads?

    "Building the world’s most experienced driver is a mission we’ll pursue for millions of miles to come, from 10 to 100 million and beyond." Which, I think introduces a gulf between "most experienced" and "safest".

    "We hope you’ll come along for the ride!" I spot ambiguity!!!

    IMHO, it really is not good practice (nor good ethics) to mix/confuse, with product advertising, the serious considerations that should be underpinning health and safety policy.

    Best regards

    1. Alan Brown Silver badge

      Re: And I will Drive 10 Million Miles

      "In learning to drive (in the UK), I probably drove not more than 3,000 miles "

      When you got your licence in the UK, you were regarded as barely competent to drive(*) and needed a lot more practice to become actually _good_ at it. Unfortunately, a lot of people actually reach the peak of their abilities for the test - which is a bloody good argument for mandatory periodic retesting.

      Robot cars don't need to be perfect, just better than most humans - and that's a very low bar to get over, considering that even the best drivers make a couple of mistakes per minute on average. It doesn't matter if they all drive like old grannies - if the traffic flows smoothly around town, you'll actually get there faster. Impatient monkeys overestimating their abilities and underestimating the laws of physics are the primary cause of road deaths.

      (*) Which is why you faced swingeing levels of insurance and other restrictions. It takes _years_ of experience to understand not only the mechanics of driving but also what makes other drivers tick - something that a lot of humans never master.

  11. Teiwaz

    "IMHO, it really is not good practice (nor good ethics) to mix/confuse, with product advertising, the serious considerations that should be underpinning health and safety policy."

    From a business's perspective (and often from a politician's, and basically that of any organisation or political/religious nutjob with an agenda to push), ethics don't enter into it unless legislated as mandatory, and good business spin = good practice.

    It's why we're all going to hell in a self-driving car - and, sooner or later, to Soylent Green - whether as a climate-change handwringer or a frothing 'god will save us / won't let it happen' maniac.
