It’s Time For Global AI Ethical Standards


Microsoft researchers recently created an AI ethics checklist. The research team believes that something crucial is missing from existing checklists, of which there are at least 84(!) available from a variety of public and private entities, and their checklist sets out to supply the missing piece. Their premise is that existing checklists have been designed without input from the practitioners who must implement them, making them difficult if not impossible to follow. Compounding this problem, they assert, is a lack of support within the dominant organizational culture for ethical efforts (or at least for the actual execution of ethical design).

It’s a laudable effort, and I encourage you to read through the paper. However, despite the authors’ call to avoid “technosolutionism”, their entire approach is steeped in it. Adequately resolving the thorny ethical questions around AI, for the long-term benefit (and safety) of humankind, can’t be accomplished simply by getting buy-in from corporate culture and making sure the engineering teams can properly operationalize the outputs of the design process.

The Risks of AI

AI is dangerous. Tech luminaries like Bill Gates and Elon Musk have raised deep concerns about the huge potential risks of autonomous weapons systems. Stephen Hawking believed that AI posed a fundamental threat to our survival as a species. And if you can watch this video of Sophia the robot at SXSW without feeling a little creeped out, then you’re made of sterner stuff than I am.

AI has also proven that it can be fundamentally unfair in ways that benefit the existing power structure, typically at the expense of minorities.

For instance, a recent MIT study tested three commercially available facial recognition services and found that they could correctly identify a person’s gender 99% of the time… as long as the person was a white man. For women of color, the error rate climbed to nearly 35%. The New York Times notes that one widely used facial recognition dataset is 75% male and 80% white. It would not be surprising to find that this echoes the demographics of the engineering teams that developed the software.

The Big Questions

There are big questions that need to be grappled with here, and soon.

  • How do we, as a society, standardize a cohesive philosophy and practice of AI ethics?
  • How do we enforce compliance with those standards?
  • And most importantly, how do we ensure that the AI ethical standards are consistent with larger societal values systems?

I don’t intend to suggest that there are no thoughtful conversations happening in the tech sector about how to reconcile the interests of humanity with the potential of AI. Witness this thinkpiece from Satya Nadella. What I am challenging is the right of Big Tech to make these decisions by itself. After all, how deeply does Big Tech really care about, say, your right to privacy when your data is so scrumptiously monetizable?

Now, Big Tech is certainly not uniquely incapable of prioritizing fundamental human rights, or of making nuanced decisions that respect life at the potential expense of its own bottom line. Every industry shares that failing. To wit:

A Cautionary Tale

In 1939, Swiss chemist Paul Hermann Müller discovered the insecticidal properties of a chemical that had been languishing unheralded since its initial synthesis some 65 years earlier. His discovery had an enormous impact on preventing the spread of diseases like typhus and malaria, and ultimately won him the Nobel Prize in 1948. The Nobel committee cited the “[preservation of] the life and health of hundreds of thousands” at the award ceremony.

If you hadn’t already guessed, the chemical I’m talking about is our old friend DDT. Although today’s conventional narrative holds that opposition to DDT only ramped up after the publication of Silent Spring in 1962, there were in fact widespread concerns about it right from the beginning. Not only did it kill “bad bugs”, it killed good ones like honeybees too. It killed fish, small game, and birds. It accumulated in human fat over time, and no one was quite sure what that meant. It also showed a troubling tendency to cause tumors and liver failure in laboratory animals.

As the Science History Institute points out, “Even DDT maker Monsanto warned that ‘the danger inherent in the indiscriminate use of DDT as a cure-all is very real.’”

Naturally, then, Monsanto suspended the manufacture of DDT until a full assessment of the risks and benefits had been conducted and society as a whole made an informed decision over whether to allow production to continue. Ha. No, no they did not. After all, Monsanto is a corporation.

Corporations are not immoral. But they are amoral. Despite being made up of people who themselves largely have strong codes of ethics and firm moral foundations, industries simply do not police themselves very well. It is in their very nature to focus exclusively on growth and profitability.

We know how this part of the story ends. In 1972, the Environmental Protection Agency banned the use of DDT in the US, and by 2004 its use was limited globally to disease vector control in developing countries.

Governmental regulations, like the corporations they regulate, aren’t innately good or inherently evil. They are, though, the best means we have found as a society to express and enforce our collective will. In the absence of regulation, all it takes for a product or service to proliferate is someone willing to sell it and someone willing to buy it.

When National Geographic ran a feature in 1945 on the “World Of Tomorrow,” it prominently showcased the miracle of DDT. Along with that, it highlighted how “tubes” and “electronic eyes” would someday catch burglars, iron our clothes, and stack our laundry.

Finding A Way Forward

It’s been a wild ride technologically from there to here. Now it’s time for us to define, as the human family, what our relationship with the descendants of those “electronic eyes” is going to be. AI is both an existential threat and a phenomenal opportunity, and it demands the attention of our global society, national governments, and international bodies. It’s going to take foresight, leadership, political will, and global cooperation to pull it off, and one can easily argue that those qualities are in short supply.

Nature abhors a vacuum, and the 84 different checklists and numerous AI ethics frameworks proposed by tech leaders are attempts both to drive the discussion and to fill the yawning gap created by governmental inertia. The failure of attempts in the UN to enact a ban on autonomous weapons is a painful illustration of how far we still need to go.

Abdicating responsibility for the regulation of AI to the industry that produces it would be a terrible political failure with potentially disastrous consequences. Hundreds of firms making independent decisions about how their AI technology interacts with humans, and self-policing the results, is a frightening proposition. Yet Big Tech has already shown the ability to think constructively about how to design and implement AI ethics frameworks. It’s time for our governments to build on that knowledge and deliver cogent, universal, enforceable standards.