Artificial intelligence (often abbreviated as ‘AI’) is not a new phenomenon, but it is only recently that it has become the talk of the town. If you feel confused by the many – often even contradictory – descriptions of what artificial intelligence is or isn’t, that’s completely fair. Given that there is no agreed definition of artificial intelligence, and that the technology grouped under this umbrella term is changing at a fast pace, it is difficult to pinpoint what artificial intelligence really is. In this article we will try to shed some light on what artificial intelligence means, whether it’s a good or bad thing, and what the future may hold for it.
A brief history of artificial intelligence
In the early 1950s, John von Neumann and Alan Turing revolutionized the computers of the first half of the 20th century and laid down the architecture of our contemporary machines. Coupled with the quest to bring together the functioning of machines and humans, the era gave rise to visions about what computers might be able to achieve. A workshop in 1956, hosted by John McCarthy and Marvin Minsky, aimed to spark discussion about the possibilities lying within these technological advancements. It was during this workshop that the term “artificial intelligence” was coined.

The development of artificial intelligence has been strongly tied to that of computing, which enabled computers to perform complex tasks they couldn’t do before. From 1957 to 1974, computers became faster, cheaper and more accessible, and could store more information. Unrealistic statements such as Minsky’s claim in 1970 that “in from three to eight years we will have a machine with the general intelligence of an average human being” were essential to raising the popularity of artificial intelligence among the public and boosting funding for research in the field.
As the years passed and Minsky’s promise turned out to be empty words, people lost interest in artificial intelligence. This was well illustrated by the fact that in the 1990s the term artificial intelligence had nearly become taboo, with more accurate variations such as “advanced computing” replacing it. The current ‘renaissance’ in artificial intelligence’s trajectory is due to improvements in computational power and the vast amount of available data.
What are the key developments concerning artificial intelligence?
During the 1990s and 2000s, computers achieved a couple of landmark goals. In 1997, world chess champion Garry Kasparov was defeated by IBM’s Deep Blue chess-playing program. That same year, Microsoft’s Windows operating system implemented a speech recognition system. In 2011, IBM’s Watson won the game show “Jeopardy!”, defeating former champions Brad Rutter and Ken Jennings.
Such events are often highlighted to suggest that artificial intelligence is smart. Cases that prove the opposite enter the limelight less often. One example is the failure that occurred when employees of a hotel in Japan were replaced by artificial intelligence-based robots to serve guests: the trial had to be ended early because of the chaos the ‘annoying’ robots created. Today, artificial intelligence is everywhere: we have virtual personal assistants, artificial intelligence-based systems decide whether our loan request will be accepted or rejected, and artificial intelligence can even help determine our final grade at school.
What does AI mean? Definition of artificial intelligence for dummies
The sci-fi world, along with futurists, likes to suggest that artificial intelligence amounts to sinister robots obsessed with eradicating humanity. It’s fun to contemplate such fantasies, but they give us a false impression of what artificial intelligence actually is.
Perhaps the following description is not hot enough to make it to Hollywood, but we can conclude that artificial intelligence is a complicated equation designed to make a decision by applying criteria to pieces of information.

Let’s look at what that means through the example of artificial intelligence used to hire people. You need to hire someone for a role with specific requirements. To create an AI-based system for this purpose, you need to feed the requirements the job entails into an algorithm. How do you do that? Well, the easiest way, if available, is to feed previous CVs into the algorithm – of both successful and unsuccessful applicants. This provides the software with examples of what constitutes a successful application. All incoming applications will then be screened by your artificial intelligence, which will decide which applications to forward to an HR employee and which ones to reject.

Do you recall the Amazon hiring scandal, in which women were found to have been discriminated against? Since the CVs fed to that algorithm were those of existing employees, who were predominantly male, the algorithm set its criteria for the perfect candidate as male. It directly rejected any application that contained the word ‘woman’. You might ask: then why not design an algorithm that applies inclusive criteria? Well, for now it is uncertain whether that is achievable at all.
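To make the mechanism concrete, here is a deliberately simplified toy sketch – not Amazon’s actual system, whose details are not public, and with invented example CVs. A screener learns word weights from past hiring decisions and, as a result, inherits whatever bias those past decisions contain:

```python
from collections import Counter

def train_screener(accepted, rejected):
    """Learn word weights from past hiring decisions: words common in
    accepted CVs score positively, words common in rejected CVs negatively."""
    pos = Counter(word for cv in accepted for word in cv.lower().split())
    neg = Counter(word for cv in rejected for word in cv.lower().split())
    return {word: pos[word] - neg[word] for word in set(pos) | set(neg)}

def screen(weights, cv):
    """Forward the application to HR if its total word score is non-negative."""
    score = sum(weights.get(word, 0) for word in cv.lower().split())
    return "forward" if score >= 0 else "reject"

# Hypothetical historical data, skewed toward male hires as in the Amazon case:
accepted = ["chess club captain", "rugby team captain"]
rejected = ["women's chess club captain", "women's debate team"]
weights = train_screener(accepted, rejected)
```

With this skewed history, the word “women’s” ends up with a negative weight, so an otherwise identical CV containing it gets rejected – the same failure mode the scandal exposed, reproduced in a dozen lines.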
The chilling thing is – and experts found this alarming already in the ’80s – that we do not know how the machine reasons. This is called the black box effect: data goes into the system, which processes it and generates new data as an output, but we do not know exactly how it processed that data. To better understand the technology that is labelled artificial intelligence, we should break the term down.
Neural networks seek to recognize patterns in a set of data through a process modelled on reasoning – and this is what is normally meant by artificial intelligence. However, most of the systems labelled “artificial intelligence” do not actually use neural networks. That is why the term automated decision-making (ADM) was introduced as a more accurate way to describe them.

Nonetheless, it is often essential to know how these software systems calculate, weigh and sort data, and how they reach a decision – because their decisions can be life-changing. And in precisely these applications, neural networks – the technology normally meant by artificial intelligence – are rarely employed.
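The distinction matters in practice. Much of what gets marketed as ‘AI’ looks less like a neural network and more like a hand-written rule set – automated decision-making. A hypothetical loan-screening sketch (the rules and thresholds are invented purely for illustration):

```python
def loan_decision(annual_income, total_debt, age):
    """Automated decision-making (ADM) with fixed, human-written rules.
    No learning and no neural network is involved, yet systems of this
    kind are routinely described as 'artificial intelligence'."""
    if age < 18:                          # applicant must be an adult
        return "reject"
    if total_debt > 0.5 * annual_income:  # debt ratio above an invented cut-off
        return "reject"
    return "accept"
```

Unlike a neural network, every rule here can be read, questioned and audited directly – which is exactly what the black box effect prevents.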
Is artificial intelligence good or bad?
There are areas where the application of AI-based systems is productive. Artificial intelligence can do a good job at very narrow tasks that can be made to look like mathematics, such as playing chess or modelling climate change. However, corporations and governments want to use it for many other tasks, because it is cheaper than paying a person.
“The problem starts when people think AI is smarter than it is”
That quote comes from Meredith Broussard, a data journalist who calls attention to the injustices that arise from applying artificial intelligence in areas it cannot understand, where, as a result, it makes bad decisions. Algorithms can’t grasp crucial parts of our essence – such as morality, culture, art, history or emotion – because these cannot be expressed in a mathematical equation.
One of the places artificial intelligence is used heavily is on social media channels. For example, Facebook uses algorithms to block or take down content that breaks its rules. And this frequently goes wrong. After the Swedish Cancer Society shared an animated video on Facebook explaining how to conduct breast self-examinations, the platform took it down with the explanation that “Your ad cannot market sex products or services nor adult products or services”, according to the Guardian. The historic image of a naked girl fleeing a napalm attack in the Vietnam War was censored by Facebook because of her nudity. A tool that cannot distinguish between medical information and sexual content, or between history and child pornography, not only clearly has flaws but also violates our freedom of expression through online censorship.
Technological advancement is inevitable. It is more than likely that artificial intelligence will be applied in many fields and that this exponentially developing technology will diversify. However, we must scrutinize how AI evolves to make sure it works flawlessly and without threatening our fundamental rights. For instance, we should ensure that algorithms are audited by independent bodies to verify that they function fairly. Our lives are becoming increasingly interwoven with AI-based systems. Since artificial intelligence is applied in various areas to make important decisions about us and our lives, it is essential that we ensure this technology works for the benefit of all of us. In future articles we will look at the impact AI-based systems have on our individual civil liberties and society as a whole.