Slate magazine sent a ballot to journalists, scholars, and advocates asking which tech companies concerned them most. It did not define what counted as "concern" or what counted as a "tech company." It then tallied the results and published a list of the 30 technology companies its respondents were most concerned about. Note that the companies are ranked not by size or name recognition, but by how much concern those polled felt toward them. Perhaps as expected, the top three companies on that list of 30 are Amazon at number one, Facebook at number two, and Alphabet, the parent company of Google, at number three. Exxon Mobil is number 10, Huawei 11, Tesla 14, and Disney 15. But there are surprises too: Airbnb at number 24, or Megvii at number 25, a facial-recognition company which I for one had not heard of. The popular 23andMe is number 18, Elon Musk's SpaceX number 17, and Verizon number 16. Many of the companies are not household names, but as a whole they reflect our general concern about AI, about surveillance, about the loss of privacy, about how big these companies can grow and how pervasive their reach is, or about their insufficient interest in climate change. For me, though, the list is a rather good microcosm of companies that may not, as a rule, concern themselves with the public good.
Two authors well versed in the state of the world and the state of technology publish a yearly list of what they see as the top ten technology policy issues facing us. The list is meant to cover both the challenges before us and the challenges technology could address. With a new decade beginning, this year's list is framed for the 2020s as a whole.
- Defending Democracy
- Privacy in an AI Era
- Data and National Sovereignty
- Digital Safety
- Internet Inequality
- A Tech Cold War
- Ethics for Artificial Intelligence
- Jobs and Income Inequality in an AI Economy
One may disagree with the placement of some of these challenges, such as jobs and income inequality, but it is difficult to argue that the items on the list are unimportant. While many of these challenges are self-explanatory, I needed to review the authors' explanation of the journalism item. To paraphrase: journalism is a profession crucial to the survival of democracy, and its shrinking profits have driven it into decline. The authors hope that technology can foster a revival, one that would not only help protect journalists who have been under attack (particularly overseas, where journalists can too easily be jailed) but also restore the field as a whole.
Because technology has now infiltrated every aspect of our lives, directly or indirectly, the list as a whole has great relevance in determining our future and shaping the answers we need. What is concerning, though, is how little these issues are being acknowledged and addressed by decision makers.
Like many, I'm waiting for self-driving cars, but I'm also increasingly concerned about how safe they will be. Now there's another issue: the detection technology behind that safety appears to carry a racial bias. It identifies white faces readily, but the darker a person's skin, the harder it is for the machine to recognize them as a person, and in the case of self-driving cars, as a pedestrian. Researchers from Georgia Tech found that these systems consistently performed worse at recognizing darker skin tones. And it is not only self-driving cars: Google's image-recognition AI failed to recognize Black people, unable to tell the difference between them and a dark ape. The researchers called such findings alarming, as I hope you will too. There are apparently radars that can better differentiate skin tones, but these are very expensive, and including them would drive up the cost of the cars considerably.
It seems to me that, since the machines were programmed by humans and the algorithms they run on were devised by humans, the time has come to change the algorithms. That should be the responsibility of the researchers who erred in the first place by building their own view of race into the systems. So my message to the companies developing AI for self-driving cars is: correct the racial biases the original engineers programmed in before you even think about cost.
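The disparity the researchers measured can be illustrated with a few lines of code. This is a minimal sketch with made-up numbers, not the study's actual data or method: `recall_by_group` and the sample results are hypothetical, and simply show how comparing detection rates across skin-tone groups exposes the kind of gap described above.

```python
def recall_by_group(predictions):
    """Compute per-group detection rate (recall).

    predictions: list of (group, detected) pairs, where `detected`
    is True when the system correctly recognized a pedestrian.
    """
    totals, hits = {}, {}
    for group, detected in predictions:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if detected else 0)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit results -- illustrative only, not real data.
results = ([("lighter", True)] * 95 + [("lighter", False)] * 5
           + [("darker", True)] * 80 + [("darker", False)] * 20)

print(recall_by_group(results))
```

A gap between the groups in such an audit is exactly the sort of disparity the researchers flagged; the usual remedy is to retrain on a dataset balanced across skin tones until the per-group rates converge.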
I was in the car this morning with a seven-year-old when a text announced the prospect of a play date. "Give me the phone," the child asked, and she proceeded to use the voice feature to answer the text using language she understood but that was still beyond her capacity to spell out and write. Children are now growing up with technology, many of them with Alexa, Echo, and other robotic aides. Researchers at the Personal Robots Group at the MIT Media Lab are now looking at the consequences of children growing up relying on digital assistants. There is of course the privacy issue: the more one uses them, the more one needs to be connected, and the more our privacy is compromised. But leaving aside the privacy issue, can these manifestations of AI help or harm the children who grow up with them?