What are the disadvantages of artificial intelligence?
Artificial intelligence is already having a profound effect on society, an impact that promises to become even greater as the technology becomes more sophisticated. But not all of it is guaranteed to be positive.
We've put together a list of our 7 disadvantages of artificial intelligence, which we all should be watching out for.
1. Job losses

With growing fears that automation and AI will change the way we work and force people into unemployment, questions are being raised about which jobs will be replaced by machines. Some experts expect major occupational shifts by 2030, estimating that between 75 million and 375 million workers (3 to 14 percent of the global workforce) will need to switch jobs and learn new skills. The sheer width of that range, from optimistic to very pessimistic, shows that experts from the technology and business sectors do not share a common view of the future of our labor market. In short: it is very hard to say how many jobs will actually be lost.
The transition to a more automated world will be a major challenge for many countries, as ensuring that workers have the skills and support needed to move into new jobs is anything but easy. This is especially true because the impact of automation is most pronounced for low-skilled work, such as administrative tasks, construction or logistical services. The diffusion of robotics and AI thus reduces the number of jobs available to the less educated and puts downward pressure on lower-waged work. This disadvantage of artificial intelligence could lead to growing income polarization and mass unemployment. Economic insecurity - as we know from the past - can be a huge threat to our democracies, eroding trust in political institutions and fueling discontent with the system at large. The way AI changes how we work could therefore push voters toward populist parties and create the conditions for a contemptuous stance towards representative liberal democracy.
2. Lack of transparency
AI can be faulty in many ways, which is why transparency is extremely important. The input data can be riddled with errors or poorly cleansed. Or perhaps the data scientists and engineers who trained the model inadvertently selected biased data sets in the first place. With so many things that could go wrong, the real problem is the lack of visibility: not knowing why the AI is performing poorly, or sometimes not even that it is performing poorly. In typical application development, quality assurance and testing processes and tools can quickly spot any bugs.
But AI is not just code; the underlying models cannot simply be examined to see where the bugs are. Some machine learning algorithms are unexplainable, kept secret (because this is in the business interests of their producers), or both. This leaves us with a limited understanding of the biases and faults AI can introduce. In the United States, courts have started using algorithms to estimate a defendant's risk of committing another crime and to inform decisions about bail, sentencing and parole. The problem is that there is little oversight and transparency regarding how these tools work.
Without proper safeguards, and with no federal laws that set standards or require inspection, these tools risk eroding the rule of law and diminishing individual rights. In the case of defendant Eric Loomis, for example, the trial judge handed down a long sentence because of the "high risk" score Loomis received after answering a series of questions that were then entered into Compas, a risk-assessment tool. Compas is a black box: neither the judge nor anyone else knew how it arrived at the conclusion that Loomis was "high risk" to society. For all we know, Compas may base its decisions on factors we consider unfair - it may be racist, ageist or sexist without us knowing.
3. Biased and discriminatory algorithms
This leads us to our next topic. "Bias" is not just a social or cultural problem; it is equally found in the technical sphere. Design flaws, or faulty and imbalanced data being fed into algorithms, can lead to biased software and technical artifacts. AI thus reproduces the race, gender and age biases that already exist in society and deepens social and economic inequalities. You have probably read about Amazon's experimental hiring tool from a few years ago. It used artificial intelligence to rank candidates from one to five stars - much like shoppers rate products on Amazon. It discriminated against women: Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period, effectively preferring male candidates and penalizing resumes that included the word "women".
In addition to biased data, homogeneous, non-representative developer teams also pose an issue. With their low diversity, they weave their cultural blind spots and unconscious biases into the DNA of technology. Companies that lack diversity therefore risk developing products that exclude their customers. Four years ago, a study found that some facial recognition programs misclassified less than 1 percent of light-skinned men but more than one-third of dark-skinned women. The producers claimed the programs were accurate, but the data set used to assess performance was more than 77 percent male and more than 83 percent white.
4. Profiling and loss of privacy

AI can be used to build frighteningly precise profiles of people. Algorithms are designed to find patterns, and when their ability to gather personal data was tested in a contest, it became clear that they could predict a user's likely future location from their past location history. The predictions were even more accurate when the location data of friends and social contacts was also used. Sometimes this disadvantage of artificial intelligence is downplayed. You might think you do not care who knows your movements; after all, you have nothing to hide. First of all, chances are that this is not completely true: even if you do nothing wrong or illegal, you may not want your personal information widely available. After all, you would not move into a house with transparent walls. So is it really the case that you do not care about sharing your device's location history? How about your teenage daughter's? Would you really be comfortable if someone published her location data, including predictions? Surely not. Information is power, and information we give up is power over us.
5. Disinformation

A rise in disinformation is a disadvantage of artificial intelligence that we are already witnessing. In 2020, the activist group Extinction Rebellion created a deepfake of a fictional speech by Belgian prime minister Sophie Wilmès. The group took an authentic video address by Wilmès and used AI to manipulate her words. The result: disinformation. In the fake video, Wilmès can be seen talking about Covid-19, claiming that the pandemic is directly linked to the "exploitation and destruction by humans of our natural environment". Unfortunately, this has not been the only case. Deepfakes will increasingly be used for targeted disinformation campaigns, threatening our democratic processes and causing societal polarization. Adding to the problem are online bots, which can generate fake texts - from news articles pushing deceptive narratives to tweets. The AI language tool GPT-3 recently composed tweets such as "They can't talk about temperature increases because they're no longer happening", aiming to create skepticism about climate change. In recent years, with Trump constantly denouncing the media as fake, such technologies could mean, as the Atlantic put it, the "collapse of reality". With deepfakes and online bots spreading disinformation, society could face blurring lines between reality and fiction, destabilizing trust in our political institutions.
6. Environmental impact
AI can have a positive environmental impact, for example by enabling smart grids to match electricity supply to demand, or by enabling smart, low-carbon cities. However, one of the disadvantages of artificial intelligence is that it can also cause significant environmental damage through intensive energy use. A 2019 study found that a particular type of AI (deep learning in natural language processing) has a huge carbon footprint due to the energy the hardware requires. Experts estimate that training a single AI model can produce around 300,000 kg of CO2 emissions - roughly equivalent to 125 round-trip flights between New York and Beijing, or five times the lifetime emissions of an average (American) car. And training the models is, of course, not the only source of emissions. The carbon impact of the infrastructure around Big Tech's deployment of AI is also significant: data centers need to be built, and the materials used need to be mined and transported.
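The equivalences above can be sanity-checked with simple arithmetic. The per-flight and per-car figures in the sketch below are illustrative assumptions back-solved from the article's own numbers (not measured data), just to show how the comparison works:

```python
# Back-of-the-envelope check of the CO2 equivalences above.
# NOTE: the per-flight and per-car figures are assumed values implied
# by the article's numbers, not independently sourced measurements.

TRAINING_EMISSIONS_KG = 300_000   # one large NLP model (2019 study)
FLIGHT_RT_NYC_BEIJING_KG = 2_400  # assumed: one round trip, per passenger
CAR_LIFETIME_KG = 60_000          # assumed: average US car, incl. fuel

flights = TRAINING_EMISSIONS_KG / FLIGHT_RT_NYC_BEIJING_KG
cars = TRAINING_EMISSIONS_KG / CAR_LIFETIME_KG

print(f"~{flights:.0f} round-trip flights")  # ~125 round-trip flights
print(f"~{cars:.0f} car lifetimes")          # ~5 car lifetimes
```

Whatever the exact per-flight figure, the point stands: a single training run sits several orders of magnitude above everyday individual emissions.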
7. Domination by Big Tech companies
AI is dominated by Big Tech companies. Since 2007, Google has bought at least 30 AI companies working on everything from image recognition to more human-sounding computer voices, building a huge monopoly on AI tech. But Google is not the only gatekeeper. In 2016, Google, Apple, Facebook, Microsoft and Amazon, together with the Chinese megaplayers, spent up to $30 billion out of an estimated global total of $39 billion on AI-related research, development and acquisitions. Companies snapping up AI startups globally is dangerous because it gives them an oversized role in determining the direction AI technology takes. With dominance in search, social media, online retail and app stores, these companies have near-monopolies on user data and are becoming the primary AI suppliers to everyone else in the industry. Such a concentration of power is dangerous, as it risks huge tech companies dictating terms to democratically elected governments.
Photo credits: Alena Darmel, Anastasia Shuraeva, Darina Belonogova, Keira Burton, Ron Lach, Tim Douglas, Vazhnik /Pexels.com