Artificial. Intelligence.
We are bombarded with these words by every news and media outlet, by magazines and research papers, by TV and the Internet: in fact, we consume so much information on AI that it might seem we are already living in the neon cyberpunk future people fantasized about in the 60s and 70s.
Despite the disappointing lack of neon everywhere, we are already in a new era pioneered and headlined by AI. Look around you - chances are you’ll see some kind of device that’s ‘intelligent’, even if only at the level of a two-year-old child.
What’s there to be afraid of then?
With the abundance of information on the Internet, it was only a matter of time before content filtering and suggestion systems were put in place. They are now an integral part of most online services and popular apps.
Content suggestion algorithms work by collecting as much information about you as possible to work out which content you are most likely to engage with. It all sounds nice in theory, but handing the power to choose which content you consume and engage with to a single entity is incredibly dangerous.
The profit-driven model of most online publications and social media apps pushes the algorithms to suggest only the content you are most likely to engage with based on your behaviour. The surface simplicity of this approach is, in reality, far from innocent.
Confirmation bias makes us prone to interact only with content that reinforces the beliefs we already hold. Content suggestion algorithms act as a magnifying mirror, creating echo chambers and validating dangerous or false beliefs by suggesting the same type of content over and over.
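The feedback loop is simple enough to sketch in a few lines. This is a deliberately toy model - not any real platform’s algorithm - but it shows how an engagement-maximising feed collapses onto whatever the user already clicks:

```python
import random

# Toy model (illustrative only): a feed that always recommends
# the topic the user has engaged with the most so far.
def recommend(engagement_counts):
    """Return the topic with the highest engagement count."""
    return max(engagement_counts, key=engagement_counts.get)

def simulate_feed(rounds=50, seed=0):
    random.seed(seed)
    counts = {"politics": 1, "sports": 1, "science": 1}
    for _ in range(rounds):
        topic = recommend(counts)
        # The user tends to click what they already agree with, so
        # showing it again raises its count - a feedback loop that
        # starves every other topic of exposure.
        if random.random() < 0.7:
            counts[topic] += 1
    return counts
```

After a few dozen rounds, one topic dominates while the others never get shown again - an echo chamber in miniature.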
But this is just the tip of the iceberg. We have all seen how easy it is to completely flip the public discourse around heated topics by silencing some platforms and amplifying others.
As one of the last frontiers of free speech, the Internet has been under attack from government- and corporate-imposed regulations that attempt to influence what information people consume.
Using content suggestion systems on a larger scale - say, blocking entire websites for certain groups of people - seems inevitable. Information is too valuable a resource to be allowed to flow freely, and it certainly no longer does.
A digital personal assistant has been a lifelong dream of many. The speed of modern life has made time the most valuable resource, and delegating mundane tasks to your digital assistant definitely saves precious hours.
With the emergence of smart speakers and smartphone assistants, AI is now closer to each of us than ever.
The nature of ‘smart’ speakers and smart home systems keeps them aware of everything you say in order to catch the trigger words. Try to describe a smart speaker without saying the words ‘AI’ and ‘digital assistant’ - you will end up describing one of the darkest concepts of an Orwellian dystopia: twenty-four-seven monitoring of your activity inside your own home, with no chance of privacy.
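The core of the problem can be sketched as pseudocode. This is a heavily simplified, invented illustration - real assistants run on-device acoustic models rather than text matching - but the structural point holds: to catch the trigger word, the loop must see everything:

```python
# Illustrative sketch of a wake-word loop; all names are invented.
# The key observation: the device processes every utterance it hears,
# whether or not the wake word was spoken.
WAKE_WORD = "computer"

def monitor(transcribed_stream, wake_word=WAKE_WORD):
    """Return (everything heard, what got captured for processing)."""
    heard = []      # every utterance the microphone picked up
    captured = []   # what is actually forwarded after the trigger
    awake = False
    for utterance in transcribed_stream:
        heard.append(utterance)          # always listening
        if awake:
            captured.append(utterance)
            awake = False
        elif wake_word in utterance.lower():
            awake = True
    return heard, captured
```

Only one command gets captured, but the `heard` list - everything said in the room - had to pass through the device to find it.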
As a society, we have witnessed a mind-boggling erosion of privacy that has met no real pushback or controversy. We have all accepted constant supervision as the new norm, trading privacy for the convenience of a digital assistant that can turn the lights off or set a timer.
This softcore version of round-the-clock surveillance under the guise of technological advancement is a truly terrifying prospect. Using AI to spy on the general public is far more effective than storing phone calls or text messages, since AI can understand what you are saying instantly, on the spot.
We accepted and welcomed AI into our homes without first regulating it to protect our privacy. That may have been a fatal mistake, because it is now too late - digital assistants are used by millions of Americans every day.
Facial recognition is an area of AI that is growing and advancing every day. Most of these developments fly under the radar, which gives corporations and governments ample time to implement the latest facial recognition systems in civilian surveillance and other aspects of our everyday lives.
Previously, a security camera would capture a blurry image or video of its surroundings that was of little to no help. Now it detects individuals by analysing their faces and provides all available information on every person present in the picture or video.
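The matching step behind this can be sketched simply. Real systems run a neural network over the camera frame to produce an embedding vector for each face, then compare it against a database by distance; the vectors, names and threshold below are entirely invented for illustration:

```python
import math

# Toy sketch of the matching step in a face recognition pipeline.
# In reality the embeddings come from a neural network; these
# three-dimensional vectors and identities are made up.
DATABASE = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

def euclidean(a, b):
    """Distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, database=DATABASE, threshold=0.6):
    """Return the closest known identity, or None if nobody is near enough."""
    name, dist = min(((n, euclidean(embedding, e)) for n, e in database.items()),
                     key=lambda pair: pair[1])
    return name if dist < threshold else None
```

The moment your embedding lands in such a database, every camera that can compute one becomes a lookup query against your identity.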
Now your every move can be documented, and your face is stored forever in the archives, only to be pulled up as soon as you are unfortunate enough to be captured by a camera. Governments now hold unprecedented power over you. From mass prosecution and incarceration of citizens who engage in ‘anti-government’ activities to dreaded social credit systems - facial recognition has got you covered.
Civilian surveillance is now at an all-time high, as those with access to facial recognition software can see not only where you have been, but what your intentions were.
One of the latest developments is the use of facial recognition algorithms in banking. Giving out loans and mortgages is, for the most part, risk assessment. The bank needs to know how trustworthy you are and how likely you are to pay back in full.
That’s where AI comes into play - it analyses your face during your mortgage interview to see if you are lying and what emotional state you are in, to determine how trustworthy you are. These systems are on their way to replacing human interaction entirely and becoming the sole decision makers.
Your mortgage interview is not the only place where AI makes decisions about you that have an impact on your life.
AI taking your job is now the least of your concerns. Smart human resources management will decide not only which job to give you, but how long you will hold the position and how you should be treated.
The abundance of resumes forces human resources departments to introduce smart filtering systems. And in the drive to reduce risk as much as possible, in a never-ending race to increase profits, corporations want to know as much about you as possible in the shortest amount of time.
AI analysis systems give corporations just that. Instead of taking a chance on workers who might be a good fit, it is now possible to make decisions about a workforce with supposedly near-perfect accuracy.
Automated systems have long been used to sort resumes and pick up on prospective employees’ weaknesses and strengths. With the introduction of AI, your experience and education are now far from the most important factors in the hiring process.
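The older, pre-AI kind of filtering is easy to illustrate, and its crudeness is the point. The keywords and scoring below are invented - real applicant tracking systems are far more elaborate - but even this toy shows how a resume can be rejected before any human reads it:

```python
# Toy resume screen (purely illustrative - not any real ATS product).
# Keywords and weights here are made-up assumptions.
REQUIRED = {"python", "sql"}
BONUS = {"leadership", "aws"}

def score_resume(text, required=REQUIRED, bonus=BONUS):
    """Reject if any required keyword is missing; otherwise count bonuses.
    Note everything this ignores: anything that is not a keyword."""
    words = set(text.lower().split())
    if not required <= words:
        return None          # filtered out before a human ever sees it
    return len(bonus & words)

def shortlist(resumes):
    """Rank the surviving candidates by bonus-keyword count."""
    scores = {name: score_resume(text) for name, text in resumes.items()}
    return sorted((n for n, s in scores.items() if s is not None),
                  key=lambda n: -scores[n])
```

A candidate who phrases their experience without the magic words simply never makes the list - and the AI systems described next take this same gatekeeping far deeper.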
AI hiring systems now extract information that employers could not get before. They analyse your entire Internet presence to understand you from the inside out and assess how trustworthy you are, how long you are likely to stay in the position and how you can be manipulated.
Anything you have ever posted online can now be used against you, and your own future, which for most people is closely tied with their jobs, is in the virtual hands of AI.
We might hope that employers will go the old-fashioned way and hire people based on their own unique experience, but that hope is faint - AI is much faster, can analyse more data and, at least in theory, does not introduce biases into the hiring process.
We have largely moved our lives online. Posting images and videos of ourselves, we all become potential victims of a new kind of fraud, one that is almost impossible to remedy.
Deep fakes have grown into a terrifying yet impressive technology that can manipulate any video or voice recording to produce something new. While it’s fun to watch movie characters whose faces have been replaced with someone else’s, it raises an important question - how can we tell whether a video or audio recording has been manipulated?
And the answer is that we have no answer. As things stand, we have no reliable means of recognising deep fakes. They are a ticking time bomb ready to go off at the most inconvenient time. A carefully planted fake voice recording or a short manipulated video clip can spark or fuel a global conflict.
Imagine a fake black-box recording or a fake video incriminating a prominent political figure being uploaded to a social media platform at a sensitive time. Something small can cause disproportionately large turmoil, even armed conflict.
Yes.
We have already allowed AI too far into our lives. From your phone to the camera outside - everything is listening, watching and analysing.
All that is left to do is try to reap as many benefits from this new era of technology as possible. AI-based systems can be used in a positive and productive way if applied with caution. Regulation and policy are the two most important aspects of any smart system: ensuring not only efficiency but also safety is critical.
Staying informed and up to date with the global and local developments in AI and their applications is something that everyone can - and should - do. Information is power not only for those who control it, but also for those who consume it. We as civilians are affected by AI the most, and we have to take responsibility to stay in the right media space and take action whenever possible.
We have entered the Digital Age and our technological advancements as a humankind are growing by the day, rapidly expanding into what up until recently was considered just a vision of a distant future.
It is hard to say where AI will take us. We may still have time to turn things around and reverse the crisis. However, it may become worse - much worse - before it gets better. So let’s hope that the lowest low will not be enough to dismantle our society and change us at our core.