What, why and how of artificial intelligence

Thursday, 29 October 2020 00:10

 AI is an interdisciplinary area of science and engineering that focuses on building smart systems capable of performing actions that would normally require human intelligence, and on the underlying theory behind them


By Dr. G.M.R.I. Godaliyadda


What is Artificial Intelligence (AI), why is the world so captivated by it, and how can you develop AI systems and become a part of this exciting field? These are reasonable questions, asked by many because of the hype associated with AI. 

The path to getting there is the answer to the third, and in my opinion most important, question for budding students, and it is closely tied to what motivated researchers to develop this technology in the first place. Once you understand the underlying logic that drives this field, it becomes easier for anyone wishing to enter it to gain an edge in today’s highly competitive environment.  

Before that, it is important to fully understand what AI is. AI is an interdisciplinary area of science and engineering that focuses on building smart systems capable of performing actions that would normally require human intelligence, and on the underlying theory behind them. 

The latter part of this definition is as important as the former, yet it is often excluded. The theory behind any development process is as important as, and at times more important than, the process itself: the theory is the heart of the algorithm that powers the AI system.

The “why” is directly linked to the “what”: a system that can emulate human intelligence has limitless possibilities and is, in essence, multidisciplinary. Hence, the interest is not limited to one area, and the applications of AI are wide-ranging. 

Generally, these application areas are divided according to the tasks performed. A key area that has gathered tremendous interest recently is Computer Vision: AI systems that make sense of visual inputs, such as images and video feeds from cameras, medical scanners, radar and sonar, in a humanlike manner. 

This is largely due to the ability of Convolutional Neural Networks (CNNs) to analyse images and video directly, a relatively recent advancement in AI. Smart surveillance, which monitors human activity for security and defence, has gathered tremendous interest in the current climate. 

Such systems can detect patterns of human activity and identify abnormal behaviour, as well as detect violations of social distancing protocols in real time using CCTV footage. In biomedicine, AI-based analysis of medical images has facilitated the detection and diagnosis of diseases and the assessment of the viability and effectiveness of treatments. 

The ability of computer vision-based systems to make sense of the 3D world around us has enabled the rapid development of the autonomous navigation sector, which is currently booming. These are just a snapshot of the numerous uses AI has found in the real world. Its ability to emulate human behaviour, judgment, motion and perception gives rise to endless possibilities.

Now let us move on to the how. As stated earlier, in my opinion the answer lies in the motivations behind how researchers have developed AI solutions so far. To expand on this, let us dig into the motivation behind modern Deep Learning (DL) architectures, a recent form of AI system. It revolves around the concept that the human mind attempts to break a given task down into subtasks. 

For example, our method of identifying a person walking towards us from a distance is a process of cascaded tasks. We start with coarser, exterior features such as height and the size of the body frame, then move towards finer features such as details of the person’s hair, clothes and skin colour. In a very rudimentary sense, this is the kind of layered thinking behind the task of classification and identification in a Deep Neural Network.
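
To make this concrete, here is a minimal sketch in Python (assuming the numpy library is available; the weights here are random rather than learned, purely to show the structure) of how each layer of a small network builds on the output of the one before it:

import numpy as np

# A tiny feed-forward network processed in stages, mirroring the
# "coarse features first, finer features later" cascade described above.
# In a real system the weights would be learned during training.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

x = rng.random(64)                  # e.g. a flattened 8x8 image patch

W1 = rng.standard_normal((32, 64))  # layer 1: broad, coarse features
W2 = rng.standard_normal((16, 32))  # layer 2: finer combinations of layer-1 features
W3 = rng.standard_normal((10, 16))  # layer 3: evidence for each of 10 candidate classes

h1 = relu(W1 @ x)
h2 = relu(W2 @ h1)
scores = W3 @ h2                    # one score per candidate identity

print("predicted class:", int(np.argmax(scores)))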

The classification task mentioned above is actually realised in two phases in AI systems. First, we memorise labels, such as the names of people, types of animals or types of vehicles, when we first encounter them. When we then encounter many versions of the same animal with slight variations, we try to come up with a unifying model to describe that group of animals. 

In effect, we create a “boundary in space” to distinguish it from all other animals. By repeating this process for every animal, an animal classification system can be formed. This initial phase of identifying the unifying models is called the training phase, and it allows the actual identification to take place in the testing phase. 
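
As a rough sketch of these two phases (assuming Python with the scikit-learn library, and synthetic points standing in for animal features), the boundary is learnt from labelled examples and then used on unseen ones:

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two synthetic "animal groups" as 2D feature points (say, size vs. leg length).
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Training phase: learn a boundary in feature space from labelled examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Testing phase: identify previously unseen examples using that boundary.
print("test accuracy:", model.score(X_test, y_test))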

If this grouping is performed in an unsupervised manner, it is called Clustering. The AI is thrown into the metaphorical deep end to fend for itself from the beginning, so it self-learns. It sees its surroundings and, like us, tries to make sense of them by grouping things that are alike. 
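
A minimal clustering sketch (again assuming scikit-learn, with k-means as one common choice of algorithm) shows the difference: the labels are never given to the learner.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# The same kind of feature points as before, but the labels are discarded.
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# K-means groups the points purely by similarity, making sense of the data
# without ever being told what the groups are.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("clusters assigned to the first five points:", kmeans.labels_[:5])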

As humans, when we cluster and classify objects, we do not consider every minuscule detail of the object. We extract a few features that are characteristic of the group and represent the entire class of objects through them. 

For example, you only need a few salient features of an animal to distinguish its species from all others. This is a form of reduced representation, or dimension reduction, a task that is pivotal to all forms of AI systems. 

For example, many dimension reduction techniques take inputs such as images, which are considered high dimensional due to the large number of pixels they contain. These techniques strip away the redundant dimensions and form representations in a low dimensional space (a compressed form) that become the basis for the identification task at hand. This is a clear example of emulating humans to improve the efficiency of a given task.
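
As an illustration (a sketch assuming scikit-learn and its bundled 8x8 digit images; Principal Component Analysis is just one well-known dimension reduction technique), each 64-pixel image can be compressed into far fewer dimensions while retaining most of the variation:

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Each 8x8 digit image is a 64-dimensional point (one dimension per pixel).
X, _ = load_digits(return_X_y=True)
print("original dimensions:", X.shape[1])

# PCA keeps only the directions that capture most of the variation,
# giving the kind of compressed, salient-feature representation described above.
pca = PCA(n_components=10).fit(X)
X_reduced = pca.transform(X)
print("reduced dimensions:", X_reduced.shape[1])
print("variance retained: %.0f%%" % (100 * pca.explained_variance_ratio_.sum()))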

Once you have the correct mindset to dive into the world of AI, you can go into the nitty-gritty details. Take a problem in the real world that excites you. Remember to pick a simple problem at first, such as identification of the 10 numerical digits, because a simple problem enables you to grasp the fundamentals without getting side-tracked by details. 

Define its objective and desired outputs. Then collect data that relates to your problem, making sure to keep room for both training and testing (or validation) within the collected data. Decide on a method to measure your algorithm’s success. Brainstorm with your friends on how to improve your logic to make the algorithm behave in a more humanlike manner. 

When you are ready for implementation, learn a basic programming language. A general-purpose language such as Python will help you breathe life into your algorithm, because an algorithm, in essence, is just an idea. Once the idea is implemented, you will see new issues that were not apparent at the conceptual stage. Use them to refine your logic and repeat. 
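
Putting these steps together, here is one possible starting point for the digit problem (a sketch assuming scikit-learn, whose bundled digit images stand in for your collected data; a nearest-neighbour classifier is just one simple choice among many):

from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Objective: identify the 10 numerical digits from small images.
X, y = load_digits(return_X_y=True)

# Keep room for both training and testing, as recommended above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A simple starting point: classify each digit by its most similar training examples.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Measure of success: the fraction of unseen digits identified correctly.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))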

Once a simple problem has been tackled to a reasonable level, move on to something more complicated, for example a transition from printed digit identification to handwritten letter identification. Ensure that the transition is smooth, so that the changes you make are trackable rather than abrupt and discontinuous. 

Every time you take on a new problem, think about how you can insert your own thought process into the code to make it more humanlike in judgment, perception and behaviour. AI system development becomes a natural process once you get this framework right.

(The writer is a Senior Lecturer of the Faculty of Engineering at the University of Peradeniya.)