Ethics, Artificial Intelligence, and Corporations

Ideas and Issues

New Orleans     Facial recognition just seems sketchy.  Back when it was possible to fly, and I did so, it was hard not to notice how customs officials and kiosks were increasingly using facial recognition software.  Not just over there, but over here as well.  On its surface it seemed like a simple trade of speed and efficiency against privacy already forfeited to Global Entry or for a visa.  Surveillance hides behind claims of security, while privacy has been downgraded to a privilege or dismissed as secrecy.  Where are we going with artificial intelligence and profiling, and who is protecting us?  That seems like an appropriate question to push up the list now.

Reading a special section in the Wall Street Journal was scary, not comforting.  A number of experts were asked for their views on all of this in thumbnail sketches.  Their visions for the future were expansive.  Many of their comments were couched in terms of what AI cannot do now, but in 15, 20, or 25 years, look out, it will be a different and unimaginable world.  It was easy to see that they were excited about it in the way that deeply embedded practitioners are always optimistic about the future regardless of the shortfalls and problems of the present.

Most of them seemed to feel it was important to mention that there might be some unresolved ethical problems with AI.  The fact that AI facial recognition still makes huge errors along racial and ethnic lines makes it more like profiling than a gee-whiz advance of our times.  Some countries are already using such software to imprison minorities and others, including in the USA.

Kate Crawford, a senior researcher at Microsoft, raised these questions pointedly, saying, “Attempts to detect or predict people’s criminality or internal emotional state by looking at pictures of their faces will ultimately be seen as unscientific and discriminatory.”  Oh, yeah, we have all seen the 2002 Tom Cruise movie Minority Report, haven’t we?  Take away the precogs floating in the pool and replace them with AI, and there are nightmares waiting.  Crawford also thinks that if building machine learning systems depends on gig and “click workers” laboring in unsafe workplaces, they should not be built.  She argues that if they use too much juice and hurt the climate while delivering only minimal improvements, that should be a no-no.  She believes that if AI “puts more power into the hands of the already very powerful,” it’s a problem.  She says “we should be deeply skeptical.”  She’s nailed it, but the answers don’t follow the questions, and many of us know that her concerns are exactly what will happen, because much of it already has.

Adam Wenchel of Capital One’s Center for Machine Learning, in a somewhat cavalier fashion, pretty much guarantees that there will be an epic AI failure in health care or finance within 10 to 25 years.  He sees the cure as something companies themselves, with some paid helpers, can manage.  Andrew Moore of Carnegie Mellon says, “Organizations have to delineate at a high level how they’re going to approach ethical problems and implement a business process behind it….”

All of these folks seem to think that companies making money from AI can somehow handle the ethical and privacy issues themselves.  Absolutely nothing in our experience with big tech companies, finance companies, insurance companies, and an endless list of other enterprises indicates that we should have one iota of confidence that any of them will pay more than lip service to these issues or to our concerns.  The use of AI by governments, especially autocratic ones, is also real, but without some system of public accountability, likely only available through governments, there is no way that the answers to any of these hard questions will be satisfactory to those of us on the other end of AI and facial recognition.
