European Commission’s Stakeholders’ Consultation on Draft AI Ethics Guidelines

Monday, 4 March 2019

The following is the feedback given by Asanga U. Ranasinghe for the consultation on the Draft AI Ethics Guidelines for Trustworthy AI, prepared by the High-Level Expert Group on Artificial Intelligence.


The Commission’s document was structured as follows:

A. Rationale and Foresight of the Guidelines

B. A Framework for Trustworthy AI

I. Respecting Fundamental Rights, Principles and Values - Ethical Purpose

II. Realising Trustworthy AI

1. Requirements of Trustworthy AI

2. Technical and Non-Technical Methods to Achieve Trustworthy AI


III. Assessing Trustworthy AI

 

Conclusion

Feedback and contributions on the different sections of the document were given as follows.

 

Introduction: Rationale and Foresight of the Guidelines

The preparation for socio-economic changes under Pillar 2 should not be undertaken with a view of Europe alone. What about the rest of the world? Changes in Europe will influence changes all over the world and, in turn, have a feedback effect on Europe.

How are the three pillars[i] linked? For example, how will it be ensured that the investments under Pillar 1, especially private investments, remain within the ethical and legal framework of Pillar 3?

How will the purported benefits of AI offset the environmental footprint of producing and using it? Technology has a huge impact on the environment. The mining of minerals and raw materials required to produce digital equipment takes a heavy toll on the environment. When consumer demand for technology is artificially created through greed-driven commercial principles rather than honest human principles, these natural deposits are depleted at an alarming rate. Further, a very large amount of energy is consumed by the mining operations that extract this material. In addition, the processing of the material and the production operations that create technology not only consume massive amounts of energy but also create harmful conditions for workers.

It is inexpensive to produce technological devices in certain countries due to the lack of standards and the weak implementation of labour laws. Factory workers have to work in conditions that are harmful to their health because of the chemicals with which they come into contact. Long, tedious hours and, in certain cases, even child labour are some of the despicable means employed by tech companies to make fat profits. This can even amount to modern-day slavery. The plastic packaging and energy-dependent logistics of distribution also take a toll on the planet.

Finally, the current use of technology consumes an unprecedented amount of energy, which completely disregards people’s right to a safe and clean environment. While an estimated 1.1 billion people – 14% of the global population – did not have access to electricity[1] according to Energy Access Outlook 2017, the elite consume more energy than necessary through their use of technology. The situation has also affected other species on land, as well as life under water, by destroying and diminishing their habitats. UN Digital Cooperation should make sure that principles which protect, inter alia, SDGs 12, 13, 14 and 15 are taken seriously when addressing digital issues.

How will ethics be used to inspire the trustworthy development of AI in other countries, such as China, which cater to the huge demand that can be expected to be created in Europe and the Occident? ‘Trustworthy AI made in Europe’ must be comprehensive; it must ensure that the processes and systems used elsewhere in the world to develop, deploy and use AI in Europe follow the same ethical purpose and technical robustness.

 

Chapter I: Respecting Fundamental Rights, Principles and Values – Ethical Purpose

The interdependence of human life, other beings and phenomena, within the earthly sphere and in the universe, is subtle and unfathomable, especially to the distracted human mind. For example, human organ donation will decline if AI cars cause fewer accidents. This means lives are saved and lost at the same time (i.e. fewer accidents will save lives, but people who need donor organs to survive might not be so lucky). Who can decide what is right in this instance? It can also be argued that AI, along with 3D printing and other complementary technologies, will present solutions for those who need organs. The only certainty is that human lives will be deeply impacted by AI.

The general use of AI in social media has created distraction in society! Social media has affected people’s productivity. Simon Sinek explains this very well: https://bit.ly/2Uuaok4

When going through material about AI, at times it feels as though the global push for advancing AI is all about the money: the potential of a $15.7 trillion contribution to the global economy by 2030 from AI (https://pwc.to/2hmUvOB). The EC must ensure that this does not contradict the declaration that the EU upholds human-centric values. Therefore, metrics to measure the human-centric impact of AI – such as the expansion of choices available to people living in poverty, benefits to education and health, and the prevention of domestic violence and crime – should be developed.

Related to point 3.4 on page 7, the rights of particular groups, such as children and indigenous peoples, should be considered in addition to what is covered under the umbrella of fundamental rights. Therefore, the relevant conventions – for example, the Convention on the Rights of the Child (CRC), the Convention on the Rights of Persons with Disabilities (CRPD) and the United Nations Declaration on the Rights of Indigenous Peoples – should be considered complementary to the Charter of Fundamental Rights of the EU and the Treaty on EU.

 


 

The human mind can predict and foresee the future only to a limited extent, partly due to greed. What about when AI demands ethics? Should the inherent value of humans mentioned on page 5 be extended to AI in humanoid form (i.e. robots do not need to look a certain way, etc.)? Will it be the same as for humans or different? Artificial Consciousness (AC) – AI systems that may have a subjective experience – is a real threat to human autonomy.

From a philosophical perspective, this might be the inevitable future of the human evolutionary trajectory. AC research labs in France, the USA and Japan should immediately be shut down. Regarding the covert AI systems discussed on pages 11–12, an important question to ponder, even if it is not an immediate one, is whether AI will develop human characteristics and face discrimination based on ethnicity, nationality, sexual preference, ability, age, gender, etc.

New trends in AI, such as tabula rasa learning – learning without human data and guidance – could undermine human autonomy and threaten human life. Intel’s Ambient World envisions a future where the physical and cyber worlds converge. How are the masses affected by the choices of a few? Do citizens all over the world, seven billion of them, demand this sort of future? Are they even aware of these developments going on behind closed doors? Obviously not. How can the EC influence these processes to be in line with the ethical purpose and technical robustness it advocates for AI?

For Informed Consent to be effective, people need basic skills and knowledge, not just limited to literacy. In addition, people will need at least a basic idea and fundamental knowledge of technology; technical literacy. Young people and the generations born into an era of advanced technology might navigate with relative ease, if we envision an inclusive future, where everyone has access to technology. But, what about elderly people during the transition period to more technically-advanced societies? We are in that transition period right now.

Long and cryptic legal agreements and user terms might be necessary to protect technology producers, but this too undermines human autonomy, since consumers have neither the time nor the knowledge to read them word for word before agreeing. Instead they simply accept the terms, driven by their eagerness to be served, blind enthusiasm to try the new technology and trust in the manufacturer or service provider. Richard Thaler’s Nobel Prize-winning research has shown that people can be nudged to stay in a system by enrolling them by default and giving them the choice to ‘opt out’. They will be reluctant to opt out.

Human egos are fragile. What happens the moment someone who has the capacity to manipulate technology feels threatened, embarrassed or taken advantage of by someone else, human or AI? It can safely be assumed that the person will use their technical knowledge to get back at the person or system that made them feel that way. This applies to world leaders as well. Therefore, future wars can be triggered and executed easily, with unfathomable consequences, because more than one party will have highly advanced digital arsenals, including lethal autonomous weapons systems (LAWS).

Human dignity is of paramount importance. In response to a question raised by the former CFO of Yahoo at a recent panel discussion in Davos, the Executive Director of Oxfam mentioned how some women in poultry factories in the US have to wear adult diapers because they do not get breaks. People may have jobs, but human dignity is at risk. The shocking example from the US poultry factories could be repeated with AI, especially in its development and production.

 


 

 

Respect for democracy, justice and the rule of law is crucial for human civilisations to flourish. But AI, bots and algorithms have already been used to rig elections, as was the case in the US in 2016. So, what assurance can the EC give regarding the safety of current and future human civilisations?

Principle of Autonomy: Autonomy is already threatened through social media platforms, where algorithms decide what gets promoted and what people see. Followers and likes are harvested through fake profiles and identities. Social messages that are beneficial to people are not disseminated; instead, gossip is promoted. Why is no relevant global authority able to apply the Principle of Explicability to Facebook? It is acting with impunity. How can citizens be assured that this will not be the case in future with more sophisticated AI technologies?

The dual-use nature of AI is a recipe for disaster. So much destruction is perpetrated by weapons manufacturers, who are mostly in Europe and the Occident, selling their weapons for humans to divide themselves, fight and kill each other. This will no doubt continue, albeit in a more aggressive manner, with LAWS. Using AI in weapons systems with a view to reducing collateral damage is just an excuse to produce them despite the unprecedented destruction they can cause. It would also ‘play god’, bypassing the laws of nature and the philosophical notion of ‘Karma’, which deters people from doing wrong.

 

Chapter II: Realising Trustworthy AI

It is only a matter of time before AI itself is used to test and validate intelligent systems. How robust would this be?

In terms of XAI research, Hannah Arendt’s thoughts in her 1958 work The Human Condition (HC) are useful to consider. As Arendt puts it: “The reason why we are never able to foretell with certainty the outcome and end of any action is simply that action has no end” (HC, 233). This is because action, “though it may proceed from nowhere, so to speak, acts into a medium where every action becomes a chain reaction and where every process is the cause of new processes … the smallest act in the most limited circumstances bears the seed of the same boundlessness, because one deed, and sometimes one word, suffices to change every constellation” (HC, 190).[1]

Non-technical methods of achieving trustworthy AI are important. But we also need to produce a cadre of people who have knowledge of both the technical and non-technical methods of achieving trustworthy AI.

Consulting the work of the IEEE Standards Association might be worthwhile. “IEEE Standards Association (IEEE-SA) is a leading consensus building organisation that nurtures, develops and advances global technologies, through IEEE. We bring together a broad range of individuals and organisations from a wide range of technical and geographic points of origin to facilitate standards development and standards related collaboration. With collaborative thought leaders in more than 160 countries, we promote innovation, enable the creation and expansion of international markets and help protect health and public safety. Collectively, our work drives the functionality, capabilities and interoperability of a wide range of products and services that transform the way people live, work, and communicate.”[2]

Including the Human Development and Capability Association (HDCA)[3] in stakeholder and social dialogue is recommended.

There is a false belief that we understand technology: because we use it, we feel we know it. Take the internet, for example. We use it, but do lay people really know what it is and how it functions? Do they know the difference between the WWW and the internet? Most likely not. Yet people feel safe using it, and the same goes for any other technology that might be harmful. People seem to feel a certain superiority in using technology, and an unconscious reassurance that technology is their friend. This is a great advantage for technology producers.

There are two separate but connected issues. Firstly, nobody is saying that technology is evil. People are worried about the humans behind the technology, who might be evil; these ‘evil humans’ have the ways and means to manipulate the world to make profit and create inequality. Secondly, as technology advances, people worry that AI will develop human-like tendencies, both good and evil. There are already examples of AI proving aggressively competitive.[4] Gmail’s canned replies invert machine learning, so that automated replies “train the users, who function less as creative human beings & more as...neural nets that sift through AI-generated proposals & reject those that fail to conform to some pattern.”[5] This means technology too can become evil by imitating human characteristics, rendering people’s first worry moot. This situation is worse and has many ramifications.

 

Chapter III: Assessing Trustworthy AI

While there are planning and operational tools for practising the ‘Do No Harm’ (DNH) approach, the ‘four divine abodes’ of Buddhism offer a great set of philosophical guidelines. Buddhist texts translate the term brahmaviharas as “divine abodes” and state the four basic ones: metta (loving kindness), karuna (compassion), mudita (empathic joy) and upekkha (equanimity). These four are attitudes towards other beings. They are also favourable relationships. They can also be extended towards an immeasurable scope of beings and so are called immeasurables.[6] Can these be embedded in AI and used for its assessment?

Tech companies are creating a generation of mindless zombies who are glued to their digital devices; this needs to be prevented. Tech companies treat humans as commodities to further their profits. Instead, they should respect common principles. In fact, all stakeholders involved in AI should endorse the Principles for Digital Development[7] and support their implementation.

 

General comments

The advice when it comes to developing, deploying and using AI is: “Don’t play God.” We are not owners of the earth, its resources and, most importantly, its inhabitants – human, animal and plant; on land, in the air and under water. Everyone must act responsibly and respectfully as guardians, not owners, of these, to enable future generations to live peaceful and dignified lives.

From a (not purely) economic perspective, markets have been determined by the supply of, and demand for, what humans need. How will AI impact this?

From a social and evolutionary perspective, concerning the autonomy of humans: survival of the fittest operated on a human scale. How does AI impact this?

From a philosophical perspective, does the philosophical notion of Karma affect AI and robots? i.e. every action has a consequence.

Can AI be prosecuted for violations of conduct involving children, e.g. child abuse or exposure to pornography? What about rape?

The world and society are sending mixed messages: some talk about ethics, principles, etc. and protecting the rights of robots – humanising robots. But in VR, you can easily kill people and robots.

 

Footnotes

[i] Three pillars underpin the Commission’s vision: (i) increasing public and private investments in AI to boost its uptake, (ii) preparing for socio-economic changes, and (iii) ensuring an appropriate ethical and legal framework to strengthen European values.

[1] https://stanford.io/2BfMhyC

[2] https://bit.ly/2GltlSa

[3] https://hd-ca.org/

[4] https://www.sciencealert.com/google-deep-mind-has-learned-to-become-highly-aggressive-in-stressful-situations

[5] https://tinyletter.com/robhorning/letters/reasons-to-believe

[6] http://www.buddhanet.net/mettab5.htm

[7] https://digitalprinciples.org/
