The issue of personal data protection has been addressed in two recent cases. The first concerns the Amazon Echo smart speaker, connected to the Alexa voice application. To improve the service, user conversations can be recorded and listened to by Amazon employees. This gives them, among other things, the (unintended) opportunity to overhear scenes from users’ private lives, including intimate moments or potentially criminal situations (1). The second case concerns the “Rekognition” facial recognition technology, also developed by Amazon. This software may be used by public authorities. At the end of March 2019, researchers published an open letter in which they stressed that “there are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties” (2).

The complexity and importance of the moral issues raised by these two cases raise the question of how Amazon and the other operators concerned can behave with rectitude. For reasons of space, we will discuss this in two stages. In this post, we present the ethical initiatives taken by Amazon and some of its peers to prevent moral problems related to the use of the technologies in question.

The word “rectitude” has a specific moral value. It refers to the “quality of a person (and, by metonymy, of his actions) who does not deviate from the right direction, in the intellectual and moral field,” or to action in accordance with “reason, law, the true norm” (3). Has this value been specifically expressed by Amazon and the other operators concerned with the tools in question?

Let us consider the first case. The discussion around it invokes “caution” and “seriousness.” A university professor interviewed by Bloomberg (4) noted that “whether that’s a privacy concern or not depends on how cautious Amazon and other companies are in what type of information they have manually annotated, and how they present that information to someone.” “Seriousness” is mentioned in these words from an Amazon representative:

“We take the security and privacy of our customers’ personal information seriously. We only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience. For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests, and ensure the service works well for everyone.” (5)

These elements may suggest that Amazon handles with caution the information it receives through the Echo speaker. Another example is the statement that “this is big data handling” (1): the point is not to single out or identify pieces of information and individual users – which would constitute a violation of privacy – but to help the Alexa application “make sense” (a term used in Bloomberg’s article) of the data it collects.

Now consider the second case, that of Rekognition. On 7 February 2019, in response to concerns about this software (6), Amazon proposed five ethical principles for policy makers. They appear on a page entitled “Some Thoughts on Facial Recognition Legislation”:

1. Facial recognition should always be used in accordance with the law, including laws that protect civil rights.

2. When facial recognition technology is used in law enforcement, human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.

3. When facial recognition technology is used by law enforcement for identification, or in a way that could threaten civil liberties, a 99% confidence score threshold is recommended.

4. Law enforcement agencies should be transparent in how they use facial recognition technology.

5. There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings.
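Principle 3 mentions a 99% confidence score threshold. As a purely illustrative sketch – not a statement of how any agency actually operates – here is how such a threshold can be set when calling the AWS Rekognition service through the boto3 Python library; the image file names are hypothetical placeholders:

```python
# Purely illustrative sketch: applying a 99% similarity threshold when
# comparing faces with AWS Rekognition via boto3. The file names
# "probe.jpg" and "gallery.jpg" are hypothetical placeholders.
import boto3

client = boto3.client("rekognition")

with open("probe.jpg", "rb") as source_file, open("gallery.jpg", "rb") as target_file:
    response = client.compare_faces(
        SourceImage={"Bytes": source_file.read()},
        TargetImage={"Bytes": target_file.read()},
        # Only matches at or above 99% similarity are returned, in line
        # with the threshold Amazon recommends for law-enforcement use.
        SimilarityThreshold=99.0,
    )

# Even at this threshold, each candidate match is only a prediction.
for match in response["FaceMatches"]:
    print(f"Candidate match, similarity: {match['Similarity']:.2f}%")
```

Note that the threshold only filters which candidate matches are returned; under Amazon’s second principle, a human reviewer would still have to examine any match before a decision is made.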

Amazon concludes with a meta-principle: technology that is supposed to contribute to the general well-being (through, for example, the fight against trafficking in human beings, the search for missing persons or the securing of a building) cannot be prohibited from the outset merely because it could be misused:

“New technology should not be banned or condemned because of its potential misuse. Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced.”

Google has also formulated principles that apply to its facial recognition research, and it has refrained from offering the technology commercially pending laws governing its use. In a text published on 13 December 2018, one of Google’s executives highlighted the company’s commitment to developing artificial intelligence technologies that contribute to the general well-being. He gave several examples, including the detection of retinopathy in diabetic patients and the preservation of native birds. On the other hand, he said, technologies with “multiple uses” should be treated with caution. Facial recognition falls precisely into this category:

“Google has long been committed to the responsible development of AI. These principles guide our decisions on what types of features to build and research to pursue. As one example, facial recognition technology has benefits in areas like new assistive technologies and tools to help find missing persons, with more promising applications on the horizon. However, like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes. We continue to work with many organizations to identify and address these challenges, and unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs [Application Programming Interfaces] before working through important technology and policy questions.” (7)

The principles referred to at the beginning of this extract were adopted in June 2018 (8). They relate exclusively to artificial intelligence. Here are their headings:

1. Be socially beneficial.

2. Avoid creating or reinforcing unfair bias.

3. Be built and tested for safety.

4. Be accountable to people.

5. Incorporate privacy design principles.

6. Uphold high standards of scientific excellence.

7. Be made available for uses that accord with these principles.

On 13 June 2018, the President of Microsoft had already called for the legal regulation of facial recognition programmes (9). The words he used in the introduction to his remarks clearly anticipate the meta-principle advanced by Amazon, which we quoted earlier:

“All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head. The more powerful the tool, the greater the benefit or damage it can cause.”

Facial recognition technology is one of these tools. The President of Microsoft underlines the diversity of its possible uses (“the potential uses of facial recognition are myriad”) as well as its “relative immaturity.” From these characteristics, he infers that government should regulate those uses (he mentions “a government initiative to regulate the proper use of facial recognition technology”). Indeed, government is in a better position than private companies to protect the general well-being and to pursue public objectives. Corporate social responsibility commitments cannot replace legislative and governmental decisions. Moreover, even where a sense of social responsibility genuinely comes first, there is no guarantee that all companies are equally concerned with the public interest – in this case, the preservation of fundamental rights. Microsoft proposes this general maxim to justify its position:

“As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

The President of Microsoft also uses the “ethical gap” argument, according to which the ethical regulation of a new technology emerges only with a delay, i.e. some time after its advent (10):

“Given the importance and breadth of facial recognition issues, we at Microsoft and throughout the tech sector have a responsibility to ensure that this technology is human-centered and developed in a manner consistent with broadly held societal values. We need to recognize that many of these issues are new and no one has all the answers. We still have work to do to identify all the questions. In short, we all have a lot to learn.”

We could add other considerations, but the empirical evidence presented above should make it possible to assess the rectitude of these different approaches. We will discuss this in the next post.

Alain Anquetil

(1) See “Amazon Workers Are Listening to What You Tell Alexa,” Bloomberg, 11 April 2019. Amazon employees reportedly witnessed a possible sexual assault.

(2) “On Recent Research Auditing Commercial Facial Analysis Technology,” Medium, 26 March 2019.

(3) Sources: respectively (in French) the Dictionnaire historique de la langue française, Le Robert, 2010, and the CNRTL.

(4) See note 1 above.

(5) See “Amazon staff listen to customers’ Alexa recordings, report says,” The Guardian, 11 April 2019.

(6) See “Coalition Letter to Amazon Urging Company Commit Not to Release Face Surveillance Product,” 15 January 2019; “Amazon proposes ethical guidelines on facial recognition software use,” The Sociable, 11 February 2019; and, on the initiative of Amazon shareholders requesting the withdrawal of the Rekognition technology, “Shareholders tell Amazon to stop selling Rekognition facial recognition tech to govt,” The Sociable, 18 January 2019.

(7) “AI for Social Good in Asia Pacific,” 13 December 2018.

(8) “AI at Google: our principles,” 7 June 2018.

(9) “Facial recognition technology: The need for public regulation and corporate responsibility,” 13 June 2018.

(10) See my article (in French) “L’explication des conflits par le « décalage » entre domaines humains,” 20 December 2012. Reference can also be made to B. Kracher and C. L. Corritore, “Is there a special e-commerce ethics?” Business Ethics Quarterly, 14(1), 2004, pp. 71-94.
