The Unbearable Lightness of Seeing

What you see is what you get. There is more to it than meets the eye. Seeing is believing… These sayings are not just hollow phrases: they hold a hidden truth. Our five senses offer us a rich experience of a reality that does not necessarily correspond with the actual or full reality. Our senses are sensors that feed our brain with information, which it merges into an image of the environment that surrounds us, or 'the world as we know it'. This image is inevitably limited to the information our senses provide; any additional sense would feed our brain with more information, resulting in an even richer image and offering new insights and experiences. At the same time, our senses can quite easily be tricked, as with optical illusions or VR glasses, or are simply limited, as with high-pitched tones we are unable to hear. The fact that our hearing does not pick up these sounds does not mean they do not exist or are not part of reality.

Scene from ‘The Unbearable Lightness of Being’

Knowing how our brains work in conjunction with our senses, and how they sometimes fail to process the available data correctly, it is striking how often people are convinced that reality as they perceive it represents 'the truth', as if there were only one version of the truth, namely their own. This overestimation of the human mind inevitably leads to disputes and conflicts. Being conscious of our inherent limitations should make us aware that we are not fool-proof and teach us to put our own convictions into perspective. Social media, and the internet in general, enable the rapid spread of ideas and news that may fool our minds: stories that do not fully correspond with the facts or that deliberately leave facts out. Given our limited ability to interpret data correctly, some vigilance and reluctance in adopting such ideas is essential.

Fast forward to how today's artificial intelligence systems process massive amounts of data captured by a wide variety of sources and sensors. To get some sense, mind the word, out of this unstructured data, algorithms interpret volumes of data that far exceed human capabilities. At the same time, these algorithms are designed by humans and are therefore inherently limited to what lies within the realm of our imagination. By analogy, we should be critical of the results produced by artificial intelligence and not consider them by default to represent the truth. The algorithm that produces the results was created with a certain purpose in mind, and its output reflects the objectives its creator hoped to achieve when developing it.

If we do not apply the same reluctance towards AI as towards our own senses, we risk being misled on an even larger scale, with even more severe and possibly disastrous consequences. To avoid such an outcome, findings produced by AI should go through several iterations of thorough human evaluation, not by a single person but preferably by a group of people with diverse backgrounds and expertise, to guarantee a broad view and a well-balanced evaluation.

An illustration of this risk is the complexity of military surveillance, in which satellites, sensor-equipped drones and other high-tech equipment produce data with the purpose of offering better insight into a military situation. The interpretation of this data by a team of like-minded people may result in a biased opinion that is out of touch with reality. A multidisciplinary team of people with different expertise is more likely to produce a balanced interpretation of the data. This is where democracy comes into play: a body of people with different backgrounds and opinions, in contrast to dictatorships, where only the like-minded are licensed to participate in the decision process.

Another telling example is in the financial industry. To comply with regulatory obligations, banks have installed risk-based anti-money laundering (AML) systems to identify suspicious financial transactions. Unfortunately, these algorithms are not 100% accurate and produce a significant number of false positives: transactions that are incorrectly flagged as suspicious. Further optimization of these algorithms may lead to fewer false positives, but at the same time may increase the risk of false negatives, letting actually suspicious transactions slip through. Even with the most advanced AI in place for distinguishing the good from the bad, human evaluation cannot be excluded from this process, and AML systems cannot entirely replace a committee of experts in this complicated matter.
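To make the trade-off concrete, here is a minimal Python sketch of how moving a risk-score threshold shifts errors between the two types. The transaction scores, labels and thresholds are invented purely for illustration and do not reflect any real AML system or Xperian method.

```python
# Illustrative sketch only: a toy risk-score threshold showing how
# tightening it trades false positives for false negatives.
# All scores and labels below are invented for demonstration.

# (risk_score, is_actually_suspicious) for a handful of hypothetical transactions
transactions = [
    (0.95, True), (0.80, True), (0.62, True), (0.40, True),
    (0.70, False), (0.55, False), (0.35, False), (0.20, False), (0.10, False),
]

def evaluate(threshold):
    """Flag everything at or above the threshold and count both error types."""
    false_positives = sum(1 for score, suspicious in transactions
                          if score >= threshold and not suspicious)
    false_negatives = sum(1 for score, suspicious in transactions
                          if score < threshold and suspicious)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.75):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold:.2f}: "
          f"{fp} false positives (wrongly flagged), "
          f"{fn} false negatives (missed suspicious transactions)")
```

Raising the threshold in this toy example reduces the number of wrongly flagged transactions but lets more genuinely suspicious ones slip through, which is exactly why human review remains part of the process.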

Xperian supports banks and financial institutions in developing effective methods and systems, taking into account the limitations of both human and artificial intelligence.

Jan Verbruggen – Managing Partner Xperian
