Wednesday, 3 June 2020
Dr. Inga Strumke
If you’re the kind of individual used to having conversations with your phone, or the type to greet Alexa as you enter your home, congratulations: you’ve had an interaction with Artificial Intelligence.
Except, no.
The truth is that you have been interacting with Artificial Intelligence for far longer than you might realise. From every word suggested by that pesky auto-correct feature on your phone, to your self-regulating air conditioner and the incredibly helpful autofill feature on search engines, you have been immersed in the world of AI for longer than you could imagine.
In November 2019, SLASSCOM conducted the AI Asia Summit, where some of the most incredible minds in the field of Artificial Intelligence took the stage, among them Dr. Inga Strumke, a particle physicist in her own right and, most relevantly, Manager – AI and Machine Learning at PwC Norway.
Dr. Strumke took to the podium to discuss topics ranging from problems in the creation of AI to the morals and ethics of the models, and the possible solutions to those issues. Artificial Intelligence is an intricate subject, even for those well versed in its workings, but the following topics should shed a clearer light on the matters she touched on.
The core of her talk centred on ‘responsibility’, a matter that spans issues as frequent and comparatively inconsequential as Google misunderstanding your voice because of an accent, and as large and racially impactful as a criminal justice AI that could jail the wrong person.
Her first port of call was the subject of ‘Bias’.
Bias in the construction of AI was one of the most noted subjects Dr. Strumke brought forward in her speech. It is important to understand that in almost all fields, from business to personal endeavours, AI algorithms are created with good intent; but while that intent is understood by those creating the model and those implementing it, the same is not true of the algorithm itself. She grounded her reasoning in the case of Alfred Nobel, whose story is well known to all, concluding that “AI is dynamite.”
Dr. Strumke put forward an example from recruitment, where an algorithm might lean towards choosing a candidate along traditional lines of reasoning rather than through a unique perspective, thus possibly omitting a stand-out candidate from the line-up. Expanding on bias within AI models, she explained the gender bias embedded in languages, which surfaces as a problematic correlation in Google Translate, as well as the more serious case of an algorithm used in the USA to determine the priority of healthcare given to those needing it, which turned out to be erroneous due to racial bias.
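To make the mechanism concrete, here is a minimal sketch, entirely synthetic and not drawn from Dr. Strumke’s talk, of how a recruitment model can inherit historical bias. The protected attribute is never shown to the model, yet a correlated proxy feature (think postcode or club membership) carries the bias through anyway; the data, feature names, and scikit-learn usage here are illustrative assumptions.

```python
# A hypothetical sketch of bias inherited from historical hiring data.
# All data is synthetic; the "proxy" stands in for any innocent-looking
# feature that happens to correlate with group membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)           # identically distributed in both groups

# "Historical" decisions that favoured group A regardless of skill:
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# A proxy correlated with group; 'group' itself is never a feature:
proxy = group + rng.normal(0, 0.3, n)

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Two equally skilled candidates, differing only in the proxy:
candidates = np.array([[1.0, 0.0],    # resembles group A
                       [1.0, 1.0]])   # resembles group B
print(model.predict_proba(candidates)[:, 1])  # group B scores lower
```

The model has never seen the protected attribute, yet it reproduces the historical preference, which is exactly the trap of “good intent” that the algorithm itself cannot share.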
From a business bottom line, if it doesn’t work, it’s bad business. The idea behind ‘Responsible AI’ is to find the fine line of balance by seeing the big picture, with concerns ranging from societal impact and long-term goals to the scalability of the solution and, of course, protection against attacks. Collaboration between experts from various areas is an absolute must, because every AI model deals with issues from multiple fields. The creation of an AI model is in many respects a complicated dance, which is why communication between the problem owners and the programmers is of high priority, as Dr. Strumke stressed.
While bias is a relatively rigid concept, reasonably well understood within the context of each AI model, the concept of ‘fairness’ is incredibly subjective and proves to be a frequent and serious problem in model creation. Dr. Strumke cited the COMPAS algorithm used in America as an example: a model created to determine which defendants awaiting a criminal hearing needed to be held in jail before their hearing and which did not.
Even though no ethnic data on individuals was given to it, racial bias was present, carried in through historical patterns in the information already in the system. And with the model’s main responsibility being to keep the overall error rate as low as possible, this itself proved to be a problem, as the algorithm would jail anyone scoring above its determined, and possibly problematic, threshold.
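The point is easy to see in numbers. Below is a hedged, fully invented sketch, not ProPublica’s actual analysis or the real COMPAS data, showing how one fixed threshold over group-correlated scores can keep the overall error rate respectable while the false positives, people flagged for jail who would never have reoffended, land far more heavily on one group.

```python
# Invented numbers illustrating a COMPAS-style disparity: one threshold
# over group-correlated risk scores gives very different false positive
# rates per group, despite identical base rates of reoffending.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)            # two demographic groups
reoffend = rng.random(n) < 0.35          # true outcome, same base rate

# Risk scores that absorb group-correlated historical signal:
score = 0.4 * reoffend + 0.15 * group + 0.5 * rng.random(n)
jailed = score > 0.45                    # the model's fixed threshold

for g in (0, 1):
    harmless = (group == g) & ~reoffend  # would not have reoffended...
    print(f"group {g}: false positive rate = {jailed[harmless].mean():.2f}")
```

With these made-up numbers, roughly 10% of harmless people in one group are jailed against roughly 40% in the other, even though neither group reoffends more often, which is the kind of disparity a single “minimise total error” objective happily hides.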
Dr. Strumke’s final subject centred on ‘sustainability’, which dealt with problematic correlations within models, and which she illustrated with the subject of pregnancy and mothers who smoke. An algorithm could recognise that smoking among pregnant women can cause a child to be born earlier, note that a child born earlier can still be born healthy, and therefore reach the obviously erroneous conclusion that smoking during pregnancy results in successful births. These are the issues of correlation that affect sustainability.
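That chain of reasoning is a textbook spurious correlation, and a few lines of simulation make it visible. The sketch below uses invented probabilities (my assumptions, not figures from the talk): smoking and a rarer but far more harmful condition both cause early births, so once you look only at early births, the smokers’ babies appear healthier, even though smoking is harmful overall.

```python
# A small simulation (probabilities invented for illustration) of how
# conditioning on an intermediate variable, early birth, makes smoking
# look protective even though it is harmful overall.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
smoking = rng.random(n) < 0.30
defect = rng.random(n) < 0.05            # rarer, much more harmful cause

# Both smoking and the defect can cause an early birth:
early = rng.random(n) < 0.05 + 0.25 * smoking + 0.60 * defect

# The defect harms the child far more than smoking does:
unhealthy = rng.random(n) < 0.02 + 0.05 * smoking + 0.50 * defect

# Among early births only, smokers' babies look healthier, because an
# early birth to a non-smoker more often signals the harmful defect:
print(f"early births, smokers:     {unhealthy[early & smoking].mean():.3f}")
print(f"early births, non-smokers: {unhealthy[early & ~smoking].mean():.3f}")
print(f"all births, smokers:       {unhealthy[smoking].mean():.3f}")
print(f"all births, non-smokers:   {unhealthy[~smoking].mean():.3f}")
```

Run it and the first pair of numbers reverses the second: within early births, smoking appears beneficial; across all births, it is plainly harmful. A model trained on the conditioned slice would learn the wrong lesson.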
Her solution? Simplicity. Work with the simplest model that does the job, one which can be explained easily.
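As a hypothetical illustration of that advice, here is what “the simplest model that can be explained” can look like in practice: a depth-two decision tree whose entire decision logic prints in a few human-readable lines. The dataset and feature names below are placeholders, not anything from the talk.

```python
# A tiny, fully explainable model: the whole tree can be printed and
# read aloud to the problem owner. Data and feature names are invented.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The complete decision logic, in a few readable lines:
print(export_text(tree, feature_names=["f0", "f1", "f2", "f3"]))
```

A model like this will rarely match a deep network’s raw accuracy, but every decision it makes can be inspected and challenged, which is precisely the trade-off the argument for simplicity is about.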
While programmers and those dabbling in the realms of AI find it difficult to keep all of the OECD AI Principles in mind, some of which include benefiting people, human rights, democratic values, transparency, and accountability, good intentions remain of the highest importance, alongside the absolute necessity of taking ‘responsibility’ for the creation and implementation of Artificial Intelligence models.
With events such as the AI Asia Summit being conducted by SLASSCOM, the world is a step closer to understanding the subject of Artificial Intelligence. If the Wachowskis’ Matrix trilogy is anything to go by, Artificial Intelligence is a subject that must be treated with the utmost respect, and as Uncle Ben said: “With great power comes great responsibility.”