Anyone who has seen an episode of Black Mirror will recognise technology’s potential to transform our world. For good or for bad? That is a matter of opinion. But measured against the Western world’s expectation of freedom of decision, the world frequently portrayed in Black Mirror’s episodes is dystopian. The show illustrates technological development accelerating away from the control of the general population and into the hands of a higher bureaucracy, which in turn gains instrumental leverage over shaping the behaviour of its subjects. And which is worse: subjects who are oblivious, or subjects who know they are being controlled?
An Orwellian societal structure isn’t far from reality. As disturbing as scenes from Black Mirror are, we should turn our eye to the present world, where technology enhances our everyday function, and question whether we are as free as we may think. With social media use now ubiquitous, are we continually narrowing the degree to which we have freedom of opinion? The next time you see your friends, ask them for permission to peruse their Facebook or Instagram feeds, or even their YouTube homepage. What you will likely see are sets of content filtered specifically to each friend’s browsing habits. If one friend has a preference for sport, their newsfeed will display a spectrum of sport-related videos; if another likes music, then music-related content will predominate.
Whilst filtered recommendations for our browsing activity aren’t necessarily bad – after all, it isn’t dissimilar to each person decorating their room just the way they like it – the rise of big data in the coming technological industrial revolution foreshadows a more enduring consequence. Just as a memory foam mattress remembers the imprint your body made in order to improve your quality of sleep, big data will remember our personalities to improve our operating efficiency. Voice-activated assistants already display an uncanny ability to operate seemingly independently of control, and Google’s assistant can already mimic a human to order takeaway. In the space of rapidly developing artificial intelligence designed to learn our preferences, down to the schedules of our daily calendars, we have to wonder whether the memory of an electronic chip will one day eclipse the memory of a human brain.
Whilst the apocalyptic scenes of the Terminator films project artificial intelligence as a controlling consciousness, our daily online habits aren’t as innocuous as they may seem. The penetration Google achieves with its product range already illustrates the behaviour- and cognition-shaping effect our use of web browsers can have. Ever wondered why consecutive search terms seem so complementary during a nonchalant afternoon browsing session? This effect can be light-heartedly described as a ‘YouTube binge’, where one cat video opens a slippery slope into three hours of animal-inspired comedy. YouTube recognises that cats are what you want to see, and maintains a succession of related videos until the limits of self-control are reached and you finally close the procrastination window.
Not so light-hearted, however, is the virality with which sensitive opinions can spread. Within the political domain in particular, this has the destructive effect of mobilising populations behind an incorrect, or skewed, fact. Certain elections have already evidenced the opinion cocoons that social media can create. These cocoons, instigated by each person’s online browsing habits, form a vicious cycle: persistently ‘liking’ posts informs data bots of trends in your personality, causing them to push further posts of a similar nature onto your feed of ‘recommended’ content, preventing you from developing other opinions. Whilst in an open-table discussion a spectrum of opinions could be heard and summative conclusions – arbitrated from multiple perspectives – derived, search engines simply propagate one perspective back to us. Our own.
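The vicious cycle can be sketched in a few lines of code. This is a toy model, not any real platform’s algorithm: the class name, topics, and weighting scheme below are invented for illustration. Each ‘like’ increases the weight of a topic, and the feed always serves whichever topic currently dominates – so a handful of clicks is enough to seal the cocoon.

```python
from collections import Counter

class ToyFeed:
    """A hypothetical recommender that reinforces whatever the user 'likes'."""

    def __init__(self, topics):
        # Start with equal interest in every topic.
        self.weights = Counter({t: 1 for t in topics})

    def like(self, topic):
        # Each 'like' nudges future recommendations towards that topic.
        self.weights[topic] += 1

    def recommend(self):
        # Always serve the currently dominant topic: the opinion cocoon.
        return self.weights.most_common(1)[0][0]

feed = ToyFeed(["sport", "music", "politics"])
for _ in range(3):
    feed.like("sport")       # a few clicks on sport content...
print(feed.recommend())      # ...and the feed now leans towards "sport"
```

A real system blends many signals, but the reinforcing dynamic is the same: what you engage with is what you are shown more of.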
Whilst this is not to say that we will all forever revel in the cocoon of our own opinions, it does highlight a danger that big data poses. As it tracks our online activity, we should be wary of the freedom we are sacrificing for the purported ‘efficiency’ that big data promises. The deeper our usage of the Internet, the more clearly our online personalities are sculpted, and this may be capitalised on by private companies or governmental bodies.
The Chinese government’s adoption of a social credit system is a representation of big data’s power concentrated into the hands of an elite minority. Whereas a credit score usually estimates the probability that we have the financial capacity to repay our loans (hence granting us access to credit instruments such as mortgages), the Chinese government has expanded the application of a financial credit score to a wider socio-economic scale. Leveraging platforms such as Ant Financial, an operating arm of commercial giant Alibaba, an effective social credit scheme has been implemented as a means of discouraging socially unacceptable behaviour and encouraging better conduct.
In some cases, instead of merely discouraging unwanted behaviour, the system has imposed outright prohibitions: one journalist was barred from booking a flight because of an outstanding court fine. In cities such as Suzhou, a point system rates every resident between 0 and 200, with 100 as the neutral score. Benevolent acts such as donating blood gain extra points, whilst disobeying laws results in deductions. The consequent impact, in granting or denying civilians access to services and products depending on their social credit score, illustrates a controlling application of big data worthy of a thought, if not a worry.
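The Suzhou-style scheme reduces to simple arithmetic. The sketch below assumes only what the text states – a 0 to 200 scale with 100 as neutral, points gained for benevolent acts and deducted for violations, and services gated on the score. The specific point values and the booking threshold are invented for illustration.

```python
def apply_event(score, delta, lo=0, hi=200):
    """Clamp a Suzhou-style score (0-200, 100 neutral) after an event."""
    return max(lo, min(hi, score + delta))

# Point values here are hypothetical; the scheme only specifies that
# acts like donating blood add points and law-breaking deducts them.
score = 100                        # every resident starts at neutral
score = apply_event(score, +5)     # e.g. donating blood
score = apply_event(score, -20)    # e.g. a legal violation
# score is now 85

def can_book_flight(score, threshold=90):
    # Hypothetical gate: services denied below a chosen threshold.
    return score >= threshold
```

The unsettling part is not the arithmetic but who sets the deltas and the thresholds, and that the subject has no say in either.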