Like many, I’m waiting for self-driving cars, but I’m also increasingly concerned about how safe they will be. Now there’s another issue: the technology meant to address those safety concerns looks to have racial bias programmed in. It identifies white faces, but the darker someone’s skin, the harder it is for the machine to recognize them as a person, which for self-driving cars means as a pedestrian. Researchers from Georgia Tech found that these machines consistently failed at recognizing darker skin tones. And it’s not only self-driving cars: Google’s image recognition AI couldn’t recognize black people, and couldn’t tell the difference between them and a dark ape. The researchers called such findings alarming, as I hope you will too. There are apparently radars that can better differentiate skin tones, but they are very expensive, and including them would make the cars themselves very expensive.
It seems to me that, since the machines were originally programmed by humans, and since the algorithms they run on were devised by humans, the time has come to change the algorithms. That should be the responsibility of the researchers who erred in the first place by letting their own view of race show through. So my message to the companies developing AI for self-driving cars is: correct the racial biases the original engineers programmed in before you even think about cost.
I was in the car this morning with a seven-year-old when a text announced the prospect of a play date. “Give me the phone,” the child said, and she proceeded to use the voice feature to answer the text, using language she understood but was still beyond her capacity to spell out and write. Children are now growing up with technology, many of them with Alexa, Echo, and other robotic aides. Researchers at the Personal Robots Group at the MIT Media Lab are now looking at the consequences of children growing up relying on digital assistants. There is, of course, the privacy issue: the more we use these devices, the more connected we need to be, and the more our privacy is compromised. But leaving the privacy issue aside, can these manifestations of AI help or… Continue reading “Kids and Robots”
When problems don’t have a physical face, they are harder to see and easier to dismiss, yet they can have a deeper impact than many we do recognize. A powerful example is investment in research and development, such as research in AI (artificial intelligence). It is something the government used to do, but it is doing less and less, and is slated to do even less in the Trump administration. That doesn’t mean advances are not being made; they are. It means that many of the advances are being made by the so-called Big Five, sometimes called the Frightful Five: Amazon, Apple, Facebook, Google, and Microsoft. But… Continue reading “AI Investments by Government or by Others?”