Mathematics, AI, LLMs and Digital Banking with Emotional Intelligence

The Flemish mathematician Simon Stevin (1548–1620) introduced the concept of decimal numbers to Europe and declared that the universal adoption of decimal coinage, weights and measures would only be a matter of time. His Œuvres disseminated his teaching, along with trailblazing work on quadratic and algebraic equations, real numbers, and statics. Although he usually published in Dutch, the 1634 edition of his collected mathematical works in French spread his ideas across Europe, and a copy is currently on display in the NSW State Library. It sits alongside Adam Smith’s “Wealth of Nations”, the capitalist classic first published in 1776 and regarded as the first classic of modern economic thought, well over 150 years before the work of John Maynard Keynes.

Their individual and collective display is thought-provoking for anyone interested in the role of mathematics in the economy today. Smith and Stevin could never have imagined that their works would set off a pathway whereby decimal coinage morphed into digital coinage and algebra, in the form of AI, took over the production not only of content but of many forms of value and the foundations of the economy.

The characteristics of intelligence

According to Yann LeCun, Chief AI Scientist at Meta, to be regarded as intelligent, a system must have the ability to:

  1. Understand the physical world
  2. Remember, i.e. have persistent memory
  3. Reason
  4. Plan

When tested against these capabilities, AI, and LLMs in particular, have a long way to go before they can be regarded as intelligent. While LLMs can pass the bar exam, they still can’t drive a car or clear up after dinner, because of what they lack in these four regards. That doesn’t mean driving a car requires more intelligence than passing the bar exam, but rather that we have some way to go before the models have enough bandwidth to navigate the world we live in.

To understand why we have some way to go before we can build intelligence into LLMs, or enable them to build their own, we must reflect on how LLMs work. Doing so in relation to a common human task, for which we might want to put LLMs to work, is useful.

LLMs are often thought of as useful for creating chatbots; talking to us. When we talk we are communicating, usually with the intention of enabling the other person to know something we know, or of learning something they know. As we do so, we phrase things to elicit a response from the other person or persons. This goes beyond imparting or acquiring knowledge and often entails drawing out an emotional response, such as wanting the person to like us, or to understand how displeased and upset we are with them; this is the emotional challenge of AI.

LLMs work by taking a word or phrase as an input, applying a probability distribution over the known responses to that word or phrase, and choosing from a range of previous responses based on that distribution. “How are you?” “I’m well, how are you?” and so on.
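As a toy illustration of that sampling step, here is a minimal sketch; the phrases, weights and the `respond` function are invented for illustration and do not belong to any real model:

```python
import random

# Toy "model": a probability distribution over known replies to a
# prompt, as if learnt from many previous conversations. All phrases
# and weights here are made up for illustration.
MODEL = {
    "how are you?": [
        ("I'm well, how are you?", 0.6),
        ("Fine, thanks.", 0.3),
        ("Not bad at all.", 0.1),
    ],
}

def respond(prompt: str) -> str:
    """Sample a reply from the distribution over previous responses."""
    replies, weights = zip(*MODEL[prompt.lower()])
    return random.choices(replies, weights=weights, k=1)[0]

print(respond("How are you?"))
```

Note that nothing here understands the question; the system only re-weights what it has seen before.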

These systems are trained by taking a piece of digital content, perhaps text or an image, corrupting it to degrade the quality, then giving the result to the encoders to predict the original text or image. The output is compared to the original and the system is tuned for performance. Given enough data, as is the case for Meta within social media, the system gets pretty good at behaving like a human; hence the large number of bots behind social media profiles and dating-site accounts duping people into scams.
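The corrupt-then-predict loop can be sketched in a few lines. Here the character masking, the placeholder `predict` function and the character-level `loss` are crude stand-ins for a real tokeniser, encoder and objective:

```python
import random

random.seed(0)

def corrupt(text: str, rate: float = 0.3) -> str:
    """Degrade the input by masking a fraction of its characters."""
    return "".join("_" if random.random() < rate else c for c in text)

def predict(noisy: str) -> str:
    """Placeholder model: fill every mask with a common letter."""
    return noisy.replace("_", "e")

def loss(predicted: str, original: str) -> float:
    """Fraction of characters predicted wrongly; training tunes the
    model to push this number towards zero."""
    return sum(p != o for p, o in zip(predicted, original)) / len(original)

original = "the quick brown fox jumps over the lazy dog"
noisy = corrupt(original)
print(round(loss(predict(noisy), original), 2))
```

A real system replaces `predict` with a neural network and adjusts its parameters to shrink the loss over a vast corpus; the shape of the loop is the same.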

What the LLMs are not doing is planning ahead or forecasting. They are merely taking a large set of previous responses to an input and selecting an output based on probability. This is fairly easy to do with text and still images but becomes far more complex with video. When video is introduced, the machine is tasked with predicting what happens next, something that requires intelligence, which LLMs and Large Video Models currently possess little of.

This is why you can get CodeGPT to write a banking app, but you can’t get CodeGPT to write a next-generation banking app. The LLMs can trawl through old code; they can’t see the future. For that you need vision and a design paradigm like On-Ramp, run by creatives.

This lack of intelligence is exemplified in driverless cars. When we transpose the video challenge to a motorway, the AI has to take all the moving and static objects, predict where they are about to be, and then formulate a response against some ideal state of driving. “Where have I seen all of this before, and what happens next, based on some distribution of probability?”

That might be easy on a country road when there is one car on the road and the task is simply to navigate corners and stop signs. It gets a little more complex when a kangaroo jumps out of the bushes at dusk on an otherwise lonely road. 

93.5% of the time, the kangaroo crosses the road at 64.2 km/h: no problem. 3% of the time it stops and watches the car go past; 3.5% of the time, it’s meat pie. The scenario becomes increasingly difficult as we add other supposedly sentient, a.k.a. random, beings. Step it up a notch to a busy freeway with drunks and arguing siblings in the car and it’s all a bit messy. Less like a meat pie and more like a ’70s B-grade horror movie.
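Taking the paragraph above literally, the kangaroo is just another probability distribution for the car’s model to sample from. A sketch: the outcome labels and percentages come straight from the text and are not road-safety data:

```python
import random

# The kangaroo's behaviour as a distribution over outcomes,
# using the percentages quoted above.
OUTCOMES = [
    ("crosses at 64.2 km/h", 0.935),
    ("stops and watches the car", 0.030),
    ("meat pie", 0.035),
]

# The three percentages cover every case: they sum to 1.
assert abs(sum(p for _, p in OUTCOMES) - 1.0) < 1e-9

def what_happens_next() -> str:
    """Sample the kangaroo's next move, LLM-style."""
    labels, weights = zip(*OUTCOMES)
    return random.choices(labels, weights=weights, k=1)[0]

print(what_happens_next())
```

Each extra random being on the road multiplies the joint outcomes the system must cover, which is why the freeway scenario blows up so quickly.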

LLMs can’t predict; they simply apply probability. This is why animals, who exist without language, interpret the world far better than AI and LLMs do today: a large number of nuances go into determining the future, and animals decode a broad range of visual, audio and other cues that they have learnt over generations. Being intelligent, understanding the world and predicting the future require a high-bandwidth model of the world, which you cannot build with language alone. One way around this, to add more intelligence and stop the system from collapsing, is to introduce abstract versions of the world that contain this broader range of parameters within the model, upon which to train the machines. Self-generative AI has advanced this capability by introducing self-supervised learning, or what are known as bidirectional systems: generating large numbers of future states and bringing these back into the models. It is this capacity to self-generate more data that has led to the massive recent increase in the accuracy and power of AI.
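The self-generation loop described above can be caricatured in a few lines: the system proposes candidate future states, scores them, and feeds the best back into its own training data. Every name, number and scoring rule here is an invented illustration, not a real training recipe:

```python
import random

random.seed(1)

training_data = [0.2, 0.4, 0.6]          # observed world states

def generate(state: float) -> float:
    """Propose a plausible future state near a known one."""
    return state + random.uniform(-0.1, 0.1)

def score(state: float) -> float:
    """Prefer generated states close to some target condition."""
    return -abs(state - 0.5)

for _ in range(3):                       # a few self-training rounds
    candidates = [generate(s) for s in training_data]
    best = max(candidates, key=score)
    training_data.append(best)           # self-generated data re-enters training

print(len(training_data))  # three observed states plus three self-generated ones
```

The point of the sketch is the feedback arrow: each round, the model’s own output becomes tomorrow’s training data, which is how self-supervised systems expand beyond what was originally observed.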

What’s this got to do with Digital Banking?

Take the kangaroo and the car story and transpose it to a student leaving high school and imagining their perfect life, whereby, normally, they get a job, a partner, a home, some adventure and babies, and die a wise old owl. Easy, right? Nope! It’s really hard to predict. So they head off to a financial planner and ask them to build a plan and provide a set of financial driving skills and instructions around it: a self-driving money system, if you like. As with the self-driving car, it’s hard to account for the drunks, arguing siblings and partners. Yet we do have sufficient historical visual, behavioural and textual data to build a money safety and growth model around those behaviours, such that they can be reasonably accommodated and, when they occur, responded to. Along the way the nuanced, illogical human is considered, with all their biases and needs, such as the urge to forgo future gain for short-term pleasure.

Odyssey does this at scale by turning an abstract representation of reality into a money model. Based on the best-known journey system for overcoming challenges and building skills, namely the game, Odyssey is used to initialise the system and test it for the demographic at play. Once accuracy of response has been tested, the LLMs can be primed to create the hyper-personalised nudge system that helps the customer build habits, stay the course and thrive with their money. All with emotional intelligence at the centre of the artificial intelligence.

 

Building Emotional Intelligence into Banking

Before Artificial Intelligence, understanding the deeply nuanced human need to feel better