What do the pope, x-rays and the game of Go have in common?

30 Apr 2020

By Professors Thomas McWalter and Jörg Kienitz

Artificial Intelligence (AI) has been in the headlines recently, with the pope joining Microsoft and IBM in a quest to outline principles for powerful emerging technologies and their regulation. Broadly, this kind of AI refers to a wide range of technologies, interdisciplinary approaches and applications, including big data, cloud services and machines capable of performing tasks that typically require human intelligence.

Advances in computing power, machine learning and predictive algorithms are creating paradigm shifts in many industries. For example, when an algorithm outperformed six radiologists in reading mammograms and accurately diagnosing breast cancer, it raised questions about the role of machine learning in medicine and whether it will replace, or enhance, the work done by doctors.

Similarly, when Google’s AI software AlphaGo beat the world’s top Go master in what is described as humankind’s most complicated board game, The New York Times declared “it isn’t looking good for humanity” when an algorithm can outperform a human in a highly complex task.

Both these examples point to narrow uses of AI – specific types of machine learning that are hugely effective. The medical example illustrates supervised learning, where a computer is programmed to solve a particular problem by looking for patterns in labelled data sets – in this case, x-rays labelled with the presence or absence of breast cancer. When given a new x-ray, the computer applies an algorithm based on what it has learnt from all the previous x-rays to make a diagnosis. Reinforcement learning, by contrast, is a kind of self-optimisation: a computer is given a set of rules, such as how to play Go, and through playing millions of games it learns how to apply these rules and how to improve.
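
To make the supervised case concrete, here is a minimal sketch in Python (assuming scikit-learn is installed; its bundled breast-cancer dataset of tabular tumour measurements – not actual x-ray images – stands in for the labelled scans). A simple classifier is fitted to labelled examples and then asked to diagnose cases it has never seen.

# Minimal supervised-learning sketch: learn a diagnosis from labelled examples.
# The bundled dataset stands in for labelled x-rays; it contains tabular measurements.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # features and labels (malignant/benign)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)                            # "learning" = fitting to labelled data

print("accuracy on unseen cases:", model.score(X_test, y_test))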

Machine learning is a phenomenal tool. To fully harness its potential, it is essential to understand what machine learning is – and isn’t – and to demystify some of the hype and fear around what it can and can’t be used for. We have anthropomorphised computers; we speak about them in terms of intelligence and learning. But in essence, a machine computes – it does not learn. Its algorithms are designed to mimic learning: they minimise the errors of a complicated function that maps inputs to outcomes, and we interpret that as solving a problem, but the machine doesn’t know what problem it is solving or that it is playing a game. The intelligence rests with the humans who design the algorithms and configure them for specific tasks.
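
As a toy illustration of that point – a minimal sketch in Python with NumPy, with all numbers chosen arbitrarily – the loop below “learns” only in the sense that it keeps adjusting two parameters to shrink the error of a function mapping inputs to outputs. It has no notion of what the numbers represent.

# "Learning" as error minimisation: fit y ≈ w*x + b by nudging w and b downhill.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)                  # inputs
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 200)    # outputs the machine never "understands"

w, b = 0.0, 0.0                                  # parameters of the input-to-output map
for _ in range(2000):
    pred = w * x + b
    err = pred - y                               # how wrong the current map is
    w -= 0.1 * np.mean(err * x)                  # gradient step: reduce the squared error
    b -= 0.1 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")           # close to 3.0 and 0.5, found by shrinking error alone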

Now, more than ever, we need intelligent and very well-educated people who can apply these techniques in the correct context and interpret the results. When an algorithm fails, the consequences can be catastrophic – an obvious example is a fatal accident caused by a self-driving car – so we need to build in fault tolerance. Data integrity is also an important issue: what we put in affects what we get out. Education is critical in making sure we get these elements right. And of course, there are broader ethical issues to consider around data collection, such as what data can be used, where it is sourced, and whether different data sets can be combined.

Machine learning is particularly valuable in the financial sector. Many applications are already in use in banking, insurance and asset management. Financial institutions use pattern recognition very successfully for fraud detection. It is also valuable for looking at trends in data sets and finding patterns that humans may not be able to identify directly, for example, in profiling people who apply for credit. There are even robo-advisory applications for individual asset allocation. In financial modelling, machine learning can be applied to pricing, calibration and hedging.
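
As a rough sketch of that kind of pattern recognition (Python with scikit-learn; the transaction data here is synthetic and the setup purely illustrative), an anomaly detector can be fitted to ordinary historical transactions and then used to flag new ones that do not fit the usual pattern.

# Toy fraud-screening sketch: flag transactions that do not fit the usual pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic history: columns are (amount, hour of day) for ordinary card transactions.
normal = np.column_stack([rng.gamma(2.0, 50.0, 1000), rng.normal(14.0, 3.0, 1000)])

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

new = np.array([[80.0, 15.0],      # looks like the usual pattern
                [9000.0, 3.0]])    # large amount in the middle of the night – unusual
print(detector.predict(new))       # 1 = looks normal, -1 = flagged as anomalous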

For example, valuing derivatives contracts depends on many complex factors and variables – interest rates, exchange rates, equity values – all of which fluctuate all the time. Financial mathematicians use models for this, but they are complicated and not easy to solve in closed form. We may be able to build and apply a model to one contract, but banks have hundreds of contracts, and risk management and regulatory frameworks need to be updated all the time. Machine learning – specifically deep learning with neural networks – provides a powerful shortcut. We can use classical numerical methods to generate model prices and then use these as labelled data sets – as in the x-ray example. An algorithm trained on this data can then generate the output for multiple contracts.
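
A small sketch of that workflow, under simplifying assumptions: the classical Black-Scholes formula for a European call stands in for the classical method and labels the training data, and a small neural network (scikit-learn’s MLPRegressor, with purely illustrative parameter ranges) learns the mapping from contract inputs to price, after which it can value a batch of contracts in a single fast pass.

# Sketch: use a classical model to label data, then train a network to learn the pricing map.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def black_scholes_call(S, K, T, r, sigma):
    # Classical closed-form price of a European call: the "teacher" producing the labels.
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(42)
n = 20_000
S = rng.uniform(50, 150, n)        # spot price
K = rng.uniform(50, 150, n)        # strike
T = rng.uniform(0.1, 2.0, n)       # time to expiry in years
r = rng.uniform(0.0, 0.05, n)      # interest rate
sigma = rng.uniform(0.1, 0.5, n)   # volatility

X = np.column_stack([S, K, T, r, sigma])
y = black_scholes_call(S, K, T, r, sigma)    # labelled data from the classical method

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0))
net.fit(X, y)                                # the network learns the pricing map

contracts = np.array([[100.0, 95.0, 1.0, 0.02, 0.2],
                      [100.0, 120.0, 0.5, 0.01, 0.3]])
print("network prices:", net.predict(contracts))              # should roughly match the model
print("model prices:  ", black_scholes_call(*contracts.T))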

Industries and organisations that are pulling ahead are figuring out where to replace standard methods and complex, time-consuming computations with machine learning. They are also using it for more complex modelling approaches, adding further variables that cannot usually be factored into standard methodologies. The most obvious benefit is speed: machines can compute millions of times faster than humans. These techniques also have the potential to be far more accurate and to allow us to make better-informed decisions.

But the human element is critical. The accuracy of potentially life-changing outcomes will depend on where we choose to use these techniques, how we build the algorithms, how we choose and manage data and, finally, on how we interpret and act upon the results.

Professor Thomas McWalter is an applied mathematician and an Adjunct Associate Professor at the African Institute of Financial Markets and Risk Management (AIFMRM) at UCT, where he lectures computational finance.

Professor Jörg Kienitz lectures at the University of Wuppertal and is an Adjunct Associate Professor at AIFMRM. His research interests include numerical methods in finance and machine learning applied to financial problems and derivative instruments.

AIFMRM hosted a machine learning for option pricing masterclass in Johannesburg on 5–6 March 2020, taught by Professor Jörg Kienitz and Dr Nikolai Nowaczyk.