The explosion of data and the growth of algorithms are having an impact in claims, just as they are everywhere else, says Karim Derrick of Kennedys
Since 2004, more or less when Amazon started renting out to anybody the infrastructure it had created to run its massive e-commerce operation, our ability to store data has grown exponentially. We are awash with data.
As we move through the world, we leave in our wake a rapidly growing sea of data, tracking our every move, click and transaction, even our driving, preferences, health and desires. The list is growing. And as this store of data has grown, so algorithms to process this data have become widespread.
Silently, often invisibly, data capture and processing have started not merely to reflect our lives but to shape them, bringing new social orders into existence.
The ways algorithms affect our lives are various. Google search outcomes are of course determined by algorithms. When we type in a search term, the engine returns a list of pages, yet it is likely that we will only really consider the top few items. The choices made by the engine are rarely challenged, and yet we have all become almost completely dependent on it as our source of knowledge. As Microsoft’s principal researcher Tarleton Gillespie puts it: “[T]hat we are now turning to algorithms to identify what we need to know is as momentous as having relied on credentialed experts, the scientific method, common sense, or the word of God.”
In the workplace, liberalism has come to be associated with the quantification of performance as much as with market economies. Measurement creates competition and shapes our perceived value of things, services and each other.
University and school rankings have come to influence our choice of institution right across the world. While to many these rankings can appear objective, the fact is that particular measurements have been chosen over others that might have been taken and, in being chosen, they describe a particular type of institution. Not only that, but as the rankings become all-pervasive and increasingly international, institutions themselves change to improve their scores. They homogenise.
At the microlevel as well, our lives are increasingly influenced by machines. Our virtual grocery shopping basket recommends purchases based on our past behaviours. Adverts on websites adapt themselves to derived preferences, fears and desires. Our physical movement through the world is increasingly orchestrated by navigation algorithms.
In parallel, our idea of ourselves is itself increasingly quantified and increasingly subjected to the power of algorithms. The explosion of social media in our personal and professional lives has given rise to new measures of personal value. The size of our network, the quantity of our posts and the number of our ‘likes’ have all become measures of our worth, determining who we socialise with, the jobs that we get and, increasingly, our perception of the world.
In claims, the explosion of data and the growth of algorithms are having an impact. At Kennedys, we have built tools to automate first notification of claims via text message or web page, to analyse medical reports via text analytics, and to extract injuries and prognoses. And at the intersection of insurance and legal services, we have worked to standardise processes, in turn to systemise them and, as a result, to commoditise them.
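Kennedys’ production tools are of course far richer than anything that fits in a few lines, but a minimal sketch of the underlying idea, using hypothetical injury terms and patterns rather than any real Kennedys code, might look like this:

```python
import re

# Illustrative only: a toy extractor that pulls injury mentions and a
# prognosis period out of a medical report. The term list and pattern
# below are assumptions for the example, not a real clinical vocabulary.
INJURY_TERMS = ["whiplash", "soft tissue injury", "fracture", "concussion"]

# Matches phrases like "prognosis of 6 months" or "recovery within 12 weeks".
PROGNOSIS_RE = re.compile(
    r"(?:prognosis of|recovery (?:within|in))\s+(\d+)\s+(weeks?|months?)",
    re.IGNORECASE,
)

def extract_findings(report: str) -> dict:
    """Return injury terms found in the report and any prognosis period."""
    text = report.lower()
    injuries = [term for term in INJURY_TERMS if term in text]
    match = PROGNOSIS_RE.search(report)
    prognosis = f"{match.group(1)} {match.group(2)}" if match else None
    return {"injuries": injuries, "prognosis": prognosis}

report = (
    "The claimant sustained whiplash and a soft tissue injury to the "
    "shoulder, with an expected recovery within 6 months."
)
print(extract_findings(report))
# → {'injuries': ['whiplash', 'soft tissue injury'], 'prognosis': '6 months'}
```

Real systems replace the keyword list with trained language models, but the shape of the task, turning free-text reports into structured claims data, is the same.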
Ironically, however, the rise of algorithms may itself be seen as an emerging risk. The standardisation of process, whether in claims or in legal work, may in the end reduce diversity of approach. And it is diversity that ensures a variety of cognitive approaches and, in turn, guarantees our ability to solve problems.
There is much talk at the moment of ‘dumbing down’, of the loss of professional control. The professions arose out of the need to maintain standards and accountability to society. The availability of data, and the ability of algorithms to replace the role of experts in a field, have allowed people to question professional authority like never before.
And yet, while all around us the authority of teachers, doctors and lawyers is under sustained attack, the authority of algorithms goes largely unchallenged. We are at once unable to avoid them and unable to understand them, and as a result they are mostly allowed to operate with impunity. And therein lies the risk.
Is it possible that, as the direction of our lives is increasingly orchestrated by proprietary algorithms, there will come a point when society demands that they be brought to account, just as society did of the professions when they first came about?
If that were to happen, and if in doing so we came to recognise that we had allowed their rise to continue at a quantifiable cost to us both as individuals and as societies, might we then conclude that we should be compensated for our losses by the technology companies that developed and maintained them?