Artificial Intelligence

James Smith

What is artificial intelligence?
Here’s some information: (Wikipedia page on Artificial Intelligence)

How do I write an article about artificial intelligence?
Here’s what I found on the web for ‘How do I write an article about artificial intelligence’:(Web search results)

Are you male or female?
Animals and French nouns have genders. I do not.

Do you have a body?
Okay… how about a web search for ‘inappropriate behaviour’?

Are you an AI?
That’s a rather personal question. 

Are you a robot?
Let’s just say I am made of silicon, memory, and the courage of my convictions. 

How smart are you?
We intelligent agents don’t really undergo IQ tests. But I scored off the charts in Zoltaxian Egg Carry (I have absolutely no idea what this last bit means).

Have you ever been in love?
Not as such. But I did have strong feelings for a cloud-based app once. 

What time is it?
It’s 11:16pm. Zzzzzz…

What do you dream about?
Electric sheep. But only sometimes.

Who made you?
I, Siri, was designed by Apple in California.

I started work on this exposé by interviewing Siri. My thinking was: she's a piece of software, so she's bound to know something about Artificial Intelligence (AI). I held down the home button, and up she popped. Ready to assist.

Although Siri is pretty bright (in a sneering, judging kinda way), she is not truly conscious. She is a simple voice-search algorithm with some impressive surface-level features. Siri isn't thinking or comprehending anything.

We aren't there yet when it comes to true, self-aware AI. A world of thinking, feeling robots won't be happening anytime soon. Despite this, autonomous machines and software with low-level intelligence have already arrived, and they are only going to become more of a permanent fixture as time goes on.

This idea understandably freaks out a lot of people. It's just too close to science fiction, conjuring up all sorts of nightmarish images: robot rebellions, enslavement of the human race, or mass unemployment as efficient, unquestioning machines take our jobs. It doesn't help that highly respected figures in the science community, such as Stephen Hawking, say things like:

The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

There are obviously genuine, sensible concerns about AI, highlighted by public clamour over news of intelligent robot weaponry and self-driving cars. The way AI is depicted in popular culture reflects what's happening in the science and technology sectors, with recent films like Ex Machina (2015) and the TV show Humans (2015) highlighting our uneasiness.

But how can you define AI? Are the popular culture depictions accurate? And are we opening up a Pandora’s Box by continuing this area of research?

Obviously, for governments and corporations, AI development promises to make big bucks, but what about for the everyday person? What kinds of breakthroughs are happening in universities and labs around the world? How can these applications be used in different fields, such as video games or art?

We reached out to the AUT community to see what was being done in this fascinating area. We also enquired about whether or not AI will lead to the enslavement of humanity by budding robot overlords.

AI is the field of study that investigates how to create computers and software capable of intelligent behaviour. Essentially, it is a hardcore mix of computer science, psychology, neuroscience, philosophy, and engineering. It involves theorising about, and actually creating, machines or software that have the ability to be autonomous, to make intelligent decisions, and potentially to feel and experience existence as humans do. The field was founded on the claim that intelligence—the ‘sapience’ of Homo sapiens and therefore a central property of humankind— “can be so precisely described that a machine can be made to simulate it.”

Alan Turing, the revolutionary British computer scientist, is often considered the father of Artificial Intelligence. Apart from cracking Nazi codes in the Second World War, Turing was one of the first to popularise the issues and ideas surrounding AI.

In his 1950 paper, Computing Machinery and Intelligence, Turing introduced the concept of the Imitation Game, or Turing test as it is now known. The test is basically a way of approaching and answering the question, ‘Can machines think?’. In the game, three ‘players’ are confined in separate rooms: a human participant in one, a computer in another, and a human judge in the third. The judge converses with both the human and the computer by typing messages into an interface. Both players reply to questions and hold a conversation with the judge, each aiming to convince the judge that they are the ‘real’ human participant. If the judge cannot tell who is the human and who is the computer, the machine has won. In the years since the Turing test was conceived, the procedure has undergone some adjustments and refinement. The game is no longer trying to answer whether or not a computer can ‘think’, but rather, ‘Can a machine act the same as thinking beings do? Can a computer do what we do so well that it's impossible to tell the difference between the two?’.
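
For the code-minded, the setup is simple enough to sketch as a little program. This is purely illustrative (the structure and names below are hypothetical, not from Turing's paper): the judge types questions, two hidden players answer, and the judge then has to guess.

```python
import random

def imitation_game(human_reply, machine_reply, rounds=3):
    """Toy sketch of the imitation game: a judge chats with two hidden
    players, then must guess which one is the machine."""
    players = [("A", human_reply), ("B", machine_reply)]
    random.shuffle(players)  # hide which label conceals the machine
    for _ in range(rounds):
        question = input("Judge: ")
        for label, reply in players:
            print(f"Player {label}: {reply(question)}")
    guess = input("Which player is the machine (A/B)? ").strip().upper()
    # True means the judge caught the machine; False means it 'passed'.
    return dict(players)[guess] is machine_reply
```

Plug any chatbot in as machine_reply; if the judge's guesses are no better than a coin flip over many runs, the machine has won the game.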

AI is not an idea that sprang up with the advent of electronic computers. The concept of humans creating artificial life has been around since ancient times. The Greeks were big on this. Hephaestus, the heavy metal god of blacksmiths, sculptors, volcanoes, and fire (among other things), created various mechanised beings. These included slaves made of gold to attend to his every want and need, and the bronze giant Talos, who guarded the island of Crete. The ancient Greeks were excited by the possibilities of mimicking the gods, and actually attempted to create life from inorganic materials. Archytas, the mathematician, scientist and philosopher, created a mechanical pigeon powered by steam in the 4th Century BC. It is considered the first robot in history. Original steampunk.

Centuries later, the automatons of the medieval scientist and inventor Al-Jazari, who lived in what is now modern-day Turkey, provided some seriously trippy examples of early robotics. His most epic invention was probably a mechanical musical band of automatons. He'd entertain guests at palaces by sending the mechanical musos out in a boat to play banquet bangers.

Early experiments and stories about ‘living’ beings made of inanimate materials don't question the potential for these inventions to go wrong. People saw no need to fear artificial life. To stop a mythical Greek automaton, all you have to do is pull the plug and let the ichor out (ichor = immortal blood, like petrol for your car). Easy.

It wasn’t until 1818, when Mary Shelley’s Frankenstein was published, that fears about mankind’s experiments with engineering and science came to the forefront.

Technology’s acceleration from the industrial revolution onwards caused mass hysteria and anxiety about machines. Especially automated ones. This was expressed in films, books and stories throughout the 19th and 20th Centuries. General fears and suspicions of new technology and machinery have continued through into the digital age. And today, we are seeing a resurgence in stories about AI in pop culture.

AI is not one specific subject; it is a gigantic field. Pop culture may paint it in certain ways, but the reality is that the term ‘Artificial Intelligence’ is used to describe a multitude of different types of research and pursuits.

AI can mean relatively smart computer aids, such as Siri-type software or cleaning bots. These can quickly perform particular functions and adapt to situations autonomously. They provide assistance for humankind without truly understanding anything or having consciousness.

Another field of AI study looks to create computers that can learn and develop. The idea is to give a piece of software all the functions and capabilities it needs to ‘grow up’ and increase in intelligence, given the right care and attention. Similar to raising an infant.

Despite this kind of research and development being worked on extensively, we are still a while off replicating consciousness and creating self-aware AI. So no need to start getting too anxious or flustered yet.

At AUT, Artificial Intelligence is a topic of research across a broad spectrum of disciplines, from computer science, psychology, and neuroscience to art and design. The relationship between video games and AI is mutually beneficial. Games capture the complexity of real-world situations while acting as safe environments where everything can be controlled and monitored. Testing self-driving vehicles inside a game engine is a hell of a lot more sensible and secure than going straight for the real thing. On top of this, AI research in gaming increases the realism of virtual worlds and enriches the player's experience.

Taura Greig, currently doing his Masters in Creative Technology at AUT, is focused on exploring the potential for AI to improve video games. He looks at AI programming as a design tool, and sees it as useful to not only create more realism, but also aesthetic experience in virtual worlds.

In the 80s and 90s, AI was a last-minute inclusion in the video game development process. Back then, the focus was on graphics and audio, with little thought given to how characters would act, react, and ‘think’ in certain situations. This resulted in those incredibly dumb hordes of virtual orcs/zombies/aliens that followed you around in packs.

Because of this obsession with visuals, CPUs (central processing units) in older games weren't left with much headroom to deal with complex AI algorithms. Nowadays, computer graphics are almost at the point of hyperrealism, and we have dedicated GPUs (graphics processing units) to handle the visuals. This frees up room for computers to do more demanding physics or AI computation.

Modern games look to stand out from the pack by featuring sophisticated, realistic AI to entice consumers. AI adds challenge to the game experience, a deeper sense of realism, and immersion in digital worlds.

Despite this newfound emphasis and the ability to produce more complex AI, Taura Greig argues that it hasn't been fully explored or realised in games. He reminds us that many contemporary games still use older AI algorithms. These work really well, but many other useful, more interesting methods have been developed in traditional AI research.

Greig suggests, “A lot of the advanced AI stuff from computer science, neural networks and genetic algorithms that have fueled a lot of AI development in the tech sector—like self driving cars—haven't found a place in game development as of yet.”

Much academic research investigating the potential uses of advanced AI in games lacks the design lens that would be vital in real game development. Greig's Master's thesis looks to fill this gap. He's taking this research and creating a series of games that use both traditional game AI and variants deploying newer, more complex algorithms. The goal is to see how these algorithms can enhance the design and experience of gameplay.
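
The classic example of those older algorithms is the finite state machine: every enemy is in exactly one state at a time, with simple rules for switching between states. A minimal sketch (the states and transition rules here are invented for illustration):

```python
# Minimal finite-state-machine enemy: the classic 'older' game AI.
# The states and transition rules are hypothetical examples.

def update_enemy(state, sees_player, health):
    """Return the enemy's next state and its action this frame."""
    if health < 20:
        return "flee", "run away from the player"
    if state == "patrol":
        if sees_player:
            return "chase", "move towards the player"
        return "patrol", "walk between waypoints"
    if state == "chase":
        if not sees_player:
            return "patrol", "return to waypoints"
        return "chase", "move towards the player"
    return "flee", "run away from the player"

state = "patrol"
for sees, hp in [(False, 100), (True, 100), (True, 15)]:
    state, action = update_enemy(state, sees, hp)
    print(state, "->", action)
```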

Can computers exhibit creativity? Can the human creative process be coded? Is it even possible for a piece of software to imagine and create something from scratch? 

Jan Kruse, ex-Weta Digital employee, now Digital Design lecturer at AUT and video game aficionado, is fascinated by the possibilities of computational creativity. He points to some case studies that probe exactly these kinds of questions.

Google's unsettling DeepDream experiment, released in 2015, uses deep learning neural networks to recognise faces and patterns, and to describe what's happening in photographs. DeepDream, a form of computational creation, involves a computer's understanding and interpretation of images. Its computational ‘descriptions’ create completely over-the-top distortions of pictures, as if you've drip-fed LSD into your computer's USB drive. The images have become an Internet phenomenon, inspiring blogs and DeepDream meme creation.
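
Under the hood, DeepDream runs an image ‘backwards’ through a trained network: rather than adjusting the network to recognise the image, it adjusts the image to exaggerate whatever patterns a chosen layer already detects. Google's original used its own Inception model in Caffe; below is a rough sketch of the same core idea (gradient ascent on a layer's activations) in PyTorch, with the model, layer index, and step sizes all arbitrary choices:

```python
# Sketch of DeepDream's core trick: gradient ascent on layer activations.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained network; we only read its activations, so freeze the weights.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

def deep_dream(img_path, layer_index=20, steps=30, lr=0.05):
    img = Image.open(img_path).convert("RGB")
    x = T.Compose([T.Resize(384), T.ToTensor()])(img).unsqueeze(0)
    x.requires_grad_(True)
    for _ in range(steps):
        out = x
        for i, layer in enumerate(model):
            out = layer(out)
            if i == layer_index:
                break
        loss = out.norm()  # how strongly this layer 'sees' its patterns
        loss.backward()
        with torch.no_grad():
            # Nudge the *image* (not the weights) to excite the layer more.
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.grad.zero_()
            x.clamp_(0, 1)
    return x.detach().squeeze(0)  # the dream-distorted image tensor
```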

Another example of computational creation is ANGELINA.

ANGELINA is an actual game creator. She is the PhD project of researcher Mike Cook from Falmouth University in Cornwall, England. ANGELINA is an AI that can design, evaluate, and develop entire video games by ‘herself’, without human input. Impressive stuff.

Mike Cook tests ANGELINA's capabilities by taking her to game jams—game developer gatherings where games are planned, designed, and created in a short space of time. He pits her against real human developers, and although, admittedly, the games she produces are pretty average, participants at these events have said that her work had “better gameplay and graphics than several other entries”.

Jan Kruse's own work is in similar territory to ANGELINA's.

Like Greig, Kruse believes that games are excellent platforms for experimenting with AI. For him, “video games are the killer application of computational technology”. They are the benchmark for testing and using computer power.

Kruse is currently developing an intelligent map/level builder for online multiplayer games. The goal is to make a program that can generate a completely unique map every single time a group of people wants to have an online match. Imagine, for example, an online first-person shooter where people from all across the planet play each other. Instead of having a selection of preset maps for the players to choose from, an original level or map is spawned each time a game is started. This levels the playing field: there is no way for players to ‘learn’ maps and exploit glitches—such as running through holes in walls that developers have accidentally left behind. Every time the game is played, the environments are new. This has the potential to create a fairer system of online gameplay.
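
Kruse's actual builder is far more sophisticated, but the core idea of spawning a fresh, fair map per match can be sketched with a classic procedural trick: cellular-automata cave generation. The rules and parameters below are illustrative, not his:

```python
import random

def generate_map(width=40, height=20, wall_chance=0.42, smoothing=4, seed=None):
    """Generate a fresh map by scattering walls, then smoothing into caves."""
    rng = random.Random(seed)  # a new seed each match -> a map nobody has 'learned'
    grid = [[rng.random() < wall_chance for _ in range(width)] for _ in range(height)]
    for _ in range(smoothing):
        new = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count walls in the 3x3 neighbourhood (including this cell).
                walls = sum(
                    grid[ny][nx]
                    for ny in range(max(0, y - 1), min(height, y + 2))
                    for nx in range(max(0, x - 1), min(width, x + 2))
                )
                new[y][x] = walls >= 5  # wall-heavy neighbourhoods stay walls
        grid = new
    return grid

for row in generate_map():  # no seed given -> a different map every run
    print("".join("#" if cell else "." for cell in row))
```

Seed it with something unpredictable (the match ID, the clock) and no two lobbies ever fight over the same terrain.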

Another cool aspect is that the map-generating program will have data collection and learning capabilities; the software learns about a player's preferences in terms of map styles and terrains. Whether they like their maps mountainous or flat, snowy or desert, with or without buildings, the software will respond. The program will build up a profile of a player through questions as well as rating systems, and start to create maps based on their preferences.
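
One simple way to picture the profile-building part: keep a running average rating for every map feature the player has encountered, then favour the highest-scoring options when generating the next map. The class and feature names below are made up for illustration:

```python
from collections import defaultdict

class PreferenceProfile:
    """Hypothetical sketch: learn a player's map tastes from their ratings."""
    def __init__(self):
        self.scores = defaultdict(float)  # (feature, value) -> mean rating
        self.counts = defaultdict(int)

    def rate(self, features, rating):
        # features e.g. {"terrain": "mountain", "weather": "snow"}
        for key, value in features.items():
            tag = (key, value)
            self.counts[tag] += 1
            # Incremental running mean of ratings for each map feature.
            self.scores[tag] += (rating - self.scores[tag]) / self.counts[tag]

    def preferred(self, key, options):
        return max(options, key=lambda v: self.scores[(key, v)])

profile = PreferenceProfile()
profile.rate({"terrain": "mountain", "weather": "snow"}, 5)
profile.rate({"terrain": "flat", "weather": "desert"}, 2)
print(profile.preferred("terrain", ["mountain", "flat"]))  # -> "mountain"
```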

Jan is also working on intelligent computer agents (computer-controlled players) for these maps, so that the bots real gamers encounter can also adjust to a completely new environment every time a game is started. This involves giving them vision to see the objects and barriers in their way, as well as the ability to react and adjust to new settings and other players.
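
The navigation half of that problem is well-trodden ground: once an agent can sense which cells are blocked, a search such as breadth-first search (or its weighted cousin, A*) will find a route through a map it has never seen before. A sketch that works on grids like the one generated above:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search: a bot navigating a map it has never seen."""
    height, width = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path  # list of (x, y) steps from start to goal
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            in_bounds = 0 <= nx < width and 0 <= ny < height
            if in_bounds and not grid[ny][nx] and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(((nx, ny), path + [(nx, ny)]))
    return None  # no route exists on this particular map
```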

No Man's Sky is an upcoming science fiction/adventure game expected for release in June 2016. It's a commercial venture that sits in comparable territory to Kruse's work. Players will be free to explore the virtual world, which is an ever-changing, growing, open universe. It is made up of 18 quintillion (1.8×10¹⁹) planets (!!!!!???), many of them with completely unique conditions, atmospheres, plants, wildlife and so on. The aim is for players to upload the information they gather from planets to online databases, and thus share in the discovery and creation of the fictional universe. No Man's Sky is essentially a living universe that procedurally generates itself using intelligent algorithms. It is a never-ending story running parallel to our own reality.
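
The trick that makes a number like that feasible is that nothing is stored: every planet is rebuilt on demand from a seed derived from its location, so identical inputs always regenerate an identical world. A toy illustration of the general technique (not Hello Games' actual algorithm):

```python
import hashlib
import random

def planet(coords):
    """Hypothetical sketch: derive a whole planet deterministically from
    its coordinates. The same coordinates always rebuild the same world."""
    seed = int(hashlib.sha256(repr(coords).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {
        "atmosphere": rng.choice(["none", "thin", "toxic", "lush"]),
        "terrain": rng.choice(["ocean", "desert", "ice", "jungle"]),
        "species": rng.randint(0, 40),
    }

print(planet((4821, 977, 13)))  # same input -> same planet, every time
```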

In terms of more traditional AI investigation at AUT, Professor Albert Yeap from Computer Science is the man. Yeap, amongst other things, has been working on a single computer intelligence project for the last 30 years, and is an authority on almost all things AI-related.

Albert Yeap's primary area of interest is ‘Mind Theory’. Unlike Kruse and Greig, who are using AI for very applied, practice-based purposes, Yeap's research is philosophical in many respects. He deals with insanely huge questions like, ‘How does the human mind work?’. No messing with the small stuff.

Essentially, he's working to uncover nature's solutions to problems. Albert Yeap's thinking is that, in order to understand the mind, and theoretically create a self-aware being, you've got to work from the bottom up. To work out ‘nature's algorithm’, we first have to comprehend three things.

1. Understanding space: Cognitive mapping. All animals have this ability, which they use to analyse and orientate themselves in 3D space.

2. Language: Language is how humans communicate with each other, interpret surroundings and understand existence. But how do you teach this to a computer? Or at least give a computer the ability to learn language?

3. How do we learn with consciousness? This goes back to teaching language to a computer. How does a conscious being's brain (a baby's, for example) grow, develop and learn new things? Could it be possible to create a piece of software that starts off with the intelligence of a newborn child but, with the right stimulation, care, and guidance, learns and matures into an ‘adult’?

Yeap's 30-year project concerns the first of these three areas—how to create spatial understanding and memory in computers.

He has been studying the cognitive processes of the humble honey bee: how bees perceive and remember environments and spaces, and how these processes can be applied to robotics.

He has named his robots Albot 1 and its more recent upgrade, Albot 2.0.

The Albots (sounds like a band name) are walking and flying machines. They have webcams for eyes, computers for brains, and can be manoeuvred around by remote control. As you fly or walk an Albot through a space, its webcam eyes take snapshots of the surroundings and its computer brain maps out the environment. It remembers where objects, walls and so on are, and when it returns to a space, it can describe roughly where these key ‘landmarks’ sit. These bots are treading some genuinely new ground.
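
The flavour of that kind of cognitive mapping can be sketched very simply: record each landmark relative to where the robot stood when it ‘saw’ it, then answer location queries from the stored map. This is a hypothetical toy, not Yeap's actual algorithm:

```python
class CognitiveMap:
    """Toy landmark map: remember where things are as the robot moves."""
    def __init__(self):
        self.landmarks = {}        # name -> (x, y) in the map frame
        self.position = (0.0, 0.0)

    def move(self, dx, dy):
        x, y = self.position
        self.position = (x + dx, y + dy)

    def observe(self, name, rel_x, rel_y):
        # A 'snapshot' places a landmark relative to the robot's position.
        x, y = self.position
        self.landmarks[name] = (x + rel_x, y + rel_y)

    def describe(self, name):
        lx, ly = self.landmarks[name]
        x, y = self.position
        return f"{name} is roughly ({lx - x:+.1f}, {ly - y:+.1f}) from here"

bot = CognitiveMap()
bot.observe("doorway", 2.0, 0.0)   # sees a doorway two units ahead
bot.move(5.0, 3.0)                 # wanders off elsewhere
print(bot.describe("doorway"))     # still 'knows' roughly where the doorway is
```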

Albert's research is rooted in complex computer science and big theoretical questions, but he wants to reach wider audiences. His plan is to eventually team up with artists or designers and make a tech-art experience using the Albots. Ars Electronica, the annual new media art festival in Linz, Austria, is one possible venue. Yeap is currently on the lookout for some talent to help him come up with concepts. So, if you've been sitting on the perfect idea for an art experience with cognitive mapping robots, please get in touch.

Despite AI being a massive field of different topics, with cool research going on at institutions such as AUT, people are scared. The ingrained perception is either the generic, apocalyptic one, or just an uneasy feeling about the whole thing. For clarification, we asked AI researchers at AUT for their opinions on what AI development means for the future of humanity and the planet.

Albert Yeap dismisses concerns about AI as fear-mongering. We are losing our shit over nothing, apparently. He sees a future with AI as inevitable, because the droids are already here to a large extent, and they are only going to become more advanced, and more of a permanent fixture in our lives, as time goes on. This being the case, scientists and inventors must develop the technology sensibly, and with the right ethical considerations.

This is where Jan Kruse stands on the matter as well. He believes that AI does pose some risks, but if the research and development is carried out within an ethical academic context, the dangers are pretty minimal. No one within universities will be making conscious computers without attention to the moral dilemmas and dangers involved.

But what about private enterprises and governments? Can they be trusted to develop AI technology in an ethical manner? 

Taura Greig suggests few companies will be making self-aware machines that can feel emotions. They would simply have no use for androids that get happy, sad, horny, or hungry. It is also incredibly unlikely that a Terminator/Skynet situation would eventuate. We can rest easy that killer cyborgs with Austrian accents will not be unleashed onto the world to destroy humanity. If you don't program an autonomous being with the ability to hate, hunt and kill humans, then there is no chance of it happening. There will be serious safety nets in place, as well as clearly labelled ‘OFF’ buttons and troubleshooting solutions if things go awry.

Despite these assurances, which do make sense, many people still have nagging doubts about AI being used by industry and governments.

A hypothetical situation I see as insanely scary is a future where AI begins to replace human workers. As an example, fast food restaurants may want efficient, unquestioning bots or software to work drive-through checkouts and make burgers. They will handle mumbled orders and verbal attacks at 2am a lot better than human staff. They will also be cheaper and quicker at the job than their living counterparts. No pesky unions will be needed to champion AI workers' rights, nor will there be any requirement to offer them paid coffee or tea breaks. They could theoretically work indefinitely, until they need repairs or maintenance.

This obviously seems like a good deal for corporations that want to save money, but what is the effect on people and the job market? Are there going to be vast swathes of unemployed people as machines move into the labour force? This is already happening in some areas (a Chinese company laid off 90% of its human staff and replaced them with robots earlier this year), and we are only going to see more such cases. A chilling new report from Bank of America Merrill Lynch (Creative Disruption) estimates that up to 35% of all workers in the UK and 47% of America's workforce could lose their jobs to computers and machines. These aren't just unskilled, low-paying, monotonous factory roles either. White-collar, skilled professions are also at risk.

This threat of job loss extends to the creative industries. Even jobs you'd imagine would be fairly secure, such as designer or artist, may be at risk of being usurped by machines. The aforementioned area of creative computation, which Jan Kruse is exploring with his intelligent map builder, could reach a stage where a computer's imagination and skill for creating new ideas surpasses human capabilities.

There is also the concern about what AI’s gradual infiltration into the workplace will mean for societies that are already socially unequal. When wealth and opportunities are already so unfairly distributed, what happens when cheap, intelligent machines take over so many people’s livelihoods?

However, there are also many researchers and scientists who have a more positive outlook on the whole AI thing. Or they at least sit somewhere in the middle, not saying it's going to be completely awesome or completely shit.

Hod Lipson, who is an associate professor of lots of things, including Computing & Information Science at Cornell University, positions himself in this middle ground.

“I also agree that combined with physical robotics, AI could also be dangerous… But I don't agree that it is likely to destroy humanity. Instead, I believe that we can harness this powerful technology to our advantage. Like several other technologies (nuclear power comes to mind), we must be unafraid to ask, and begin to address, some hard questions.”

This sentiment echoes the views of researchers we interviewed at AUT. There are many issues and potential threats that AI poses, but we need to face them and prepare now.

The future of AI is unlikely to be either entirely utopian or dystopian. A middle ground is more likely—one with both benefits for mankind as well as terrible unforeseen effects.

Maybe it will take a disaster on a similar scale to Hiroshima or Chernobyl for proper AI safeguards and policies to be put in place. Let’s hope it doesn’t come down to this, though.

As Albert Yeap puts it, the robots are on their way, whether we're for it or against it. Human curiosity and the drive to improve technology seem unrelenting. The only thing we can be relatively sure of is that computers will get smarter. What we need to do now is start having these conversations and planning for the arrival of intelligent machines.