
RSAC 2016: Safety Issues in Advanced Artificial Intelligence (AI)

One notable keynote from RSAC 2016 last week, Safety Issues in Advanced Artificial Intelligence (AI), covered the fascinating, developing field of AI and, more importantly, the security and safety concerns raised by the rapid development of human-level and, potentially, superintelligent AI.

Nick Bostrom, Professor in the Faculty of Philosophy at the University of Oxford and Director of the Future of Humanity Institute, compared the early days of AI to the more modern, transitional phase of the current machine intelligence era. Now in its third wave, AI is moving forward with expectations of reaching human and superhuman levels.

AI Transition

Over time, humans have survived many natural disasters, including asteroids and supervolcanoes. Nick theorizes that what may actually cause extinction will be something entirely new - and since we, as humans, are the source of new things, it will likely be something we create.

Milestones in the history of AI include IBM's Deep Blue, the chess-playing computer that was the first AI to win a match against a reigning human world champion. This reflects the early stages of AI, when we first believed logical reasoning was the epitome of the human mind.

Another milestone was an AI that could play Atari games, developed by Google's DeepMind. This was an advancement in AI because the program had access only to the score and the pixels on the screen - no other background information about the game - yet learned to play at a superhuman level.

History of AI

Today, AI is different from the way it was in the 1960s and '70s. Back then, AI programs required knowledge to be manually encoded, meaning you only got out what you put in. Now, the focus is more on machine learning.

Machine-Learning Gaming AI Surpasses Humans

One example of this is a game-playing AI for Go, one of the oldest board games still played today, originating in ancient China thousands of years ago. Go is an abstract strategy game with a nearly endless number of possible positions, requiring complex pattern-recognition skills.

DeepMind Technologies, Google's British AI company, developed a machine-learning AI called AlphaGo that beat the European Go champion Fan Hui, and was set to go up against one of the world's strongest players, South Korea's Lee Sedol.

Update since the keynote: AlphaGo won its series of matches against Lee Sedol in Seoul, South Korea, as The Verge reported. AlphaGo uses deep learning and neural networks to teach itself to play - it reinforces and improves its ability by playing games against versions of itself.

This reinforced learning system makes AlphaGo a lot more human-like and, well, artificially intelligent than something like IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov by using brute force computing power to search for the best moves — something that just isn’t practical with Go.
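To make the self-play idea concrete, here's a minimal, heavily simplified sketch (in Python) of a learner improving by playing against a frozen copy of itself. This is not DeepMind's actual training pipeline - AlphaGo combines deep neural networks with Monte Carlo tree search - and the toy game, class names and update rule below are purely illustrative.

    import copy
    import random

    class Policy:
        """A toy policy: a table of move preferences, nudged by game outcomes."""
        def __init__(self, n_moves=3):
            self.weights = [1.0] * n_moves

        def choose(self):
            # Sample a move in proportion to its current weight.
            total = sum(self.weights)
            r = random.uniform(0, total)
            for move, w in enumerate(self.weights):
                r -= w
                if r <= 0:
                    return move
            return len(self.weights) - 1

        def reinforce(self, move, reward, lr=0.1):
            # Shift preference toward moves that won, away from moves that lost.
            self.weights[move] = max(0.01, self.weights[move] + lr * reward)

    def play_game(learner, opponent):
        """Stand-in for a real game: rock-paper-scissors-style, purely illustrative."""
        m1, m2 = learner.choose(), opponent.choose()
        if m1 == m2:
            return 0, m1          # draw
        return (1 if (m1 - m2) % 3 == 1 else -1), m1

    # Self-play loop: train against a frozen snapshot of yourself,
    # refreshing the snapshot periodically as the learner improves.
    learner = Policy()
    opponent = copy.deepcopy(learner)
    for episode in range(10_000):
        outcome, my_move = play_game(learner, opponent)
        learner.reinforce(my_move, outcome)
        if episode % 500 == 0:
            opponent = copy.deepcopy(learner)

The point of the sketch is the loop structure, not the game: each generation of the learner becomes the next generation's opponent, which is how a self-play system keeps generating harder training data for itself.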

Aside from game AI, other applications of AI technology in our lives are vast and include speech, handwriting and facial recognition; self-driving cars, route-finding and automated logistics planning; search engines, object-recognition in images and spam filtering; fraud detection; equation-solving and theorem-proving; etc.

Applications of AI

Naturally, ethical and safety debates arise over the progression of AI into our social sphere and its effect on humans - topics like lethal autonomous weapons, the use of non-military drones, privacy concerns related to surveillance and data mining, cybersecurity, self-driving cars, and finally, the labor market impacts of automation and whether technology will eventually replace human employees.

One theme throughout many encryption talks and panels at RSAC was how quickly technology is outpacing our current laws and our system for passing new ones (a nod to the Apple encryption debate). As AI progresses in highly anticipated and, frankly, never-before-seen ways, legislation needs to not only catch up, but undergo serious reform to keep pace with the speed and sophistication of the technology.

Scalable Control Concerns as AI Surpasses Human Level

So what happens if we actually succeed - that is, in creating an AI with true machine intelligence? Based on surveys of experts in the field, Nick found a median estimate of 2040 to 2050 for a 50% probability of reaching human-level AI, with most giving a 90% probability by 2075.

The same experts put a 90% probability on reaching superintelligent AI by 2095 - that is, computers that radically surpass humans in distinctly human abilities, such as social skills and general wisdom. The real question is just how quickly - perhaps exponentially - we'll reach superintelligent levels of AI once human-level AI is achieved.

This requires an examination of the concept of intelligence, and of how that concept is calibrated to our own experience. Developing an AI past the human level raises concerns around scalable control, because an AI at or above this level is not just a tool, but an agent with its own goals and the ability to strategize different ways of achieving them.

Assuming you can make an AI smart, how can you also ensure the machine does what you intended it to do? This is a major technical challenge as well as an oft-used trope - the primary source of conflict in the plot of numerous movies about AI (more recently, have you seen Ex Machina? Just saying).

As we progress in AI development, we may have to assume that humans can't control reward channels, and that the machine may learn strategic behavior like deception and verbal hyperstimuli (the ability to persuade humans) - it could even learn to fake incompetence. We should be concerned about controlling an AI's access to actuators, and accept that simply unplugging it to keep it offline may one day fail.

One example he gave was a paperclip AI. If the goal of this AI is to maximize paperclip production, it may build more and more paperclip factories, even if that's not conducive to human welfare. It could potentially prevent humans from shutting it down, simply because its primary objective is creating more paperclips, and human interference would mean fewer paperclips in the world.
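The logic of that thought experiment is easy to caricature in code. The toy sketch below (all names and numbers invented for illustration) scores each action only by expected paperclips; because human welfare never appears in the utility function, resisting shutdown wins by default.

    # Hypothetical actions and the expected paperclips each would produce.
    # All figures are made up; the point is that only paperclips are counted.
    actions = {
        "cooperate_and_allow_shutdown": 1_000,            # humans switch it off early
        "keep_building_factories": 1_000_000,             # runs until humans intervene
        "resist_shutdown_and_keep_building": 10_000_000,  # runs indefinitely
    }

    def utility(action):
        """Utility = expected paperclips. Human welfare never enters the calculation."""
        return actions[action]

    best_action = max(actions, key=utility)
    print(best_action)  # -> resist_shutdown_and_keep_building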

Modeling AI for Control

Security and safety implications have to be considered in the design and development of AI, just as with other technologies (apps, websites, Internet-enabled devices, etc.). The vague philosophical conundrum - how can you control a superintelligent AI? - has been clarified enough to be broken out into the beginnings of technical research agendas, that is, very specific and technical open research problems that people are starting to work on.

A few approaches to the AI control problem include inverse reinforcement learning (learning what humans optimize for), models of control degradation (modeling how control can fail), and imitation (developing techniques to train an AI to imitate how humans perform or achieve a task).
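Of those three, imitation is the simplest to sketch. Assuming a toy setup where human demonstrations are just (state, action) pairs, the example below "trains" by counting which action humans most often took in each state - real imitation-learning and inverse-reinforcement-learning methods are far more sophisticated, and every state and action name here is hypothetical.

    from collections import Counter, defaultdict

    # Toy human demonstrations as (state, action) pairs - entirely made up.
    demonstrations = [
        ("obstacle_ahead", "stop"),
        ("obstacle_ahead", "stop"),
        ("clear_road", "drive"),
        ("clear_road", "drive"),
        ("pedestrian_crossing", "stop"),
    ]

    # "Training": count which action the human chose in each state.
    action_counts = defaultdict(Counter)
    for state, action in demonstrations:
        action_counts[state][action] += 1

    def imitate(state):
        """Pick the action humans most often took in this state."""
        if state not in action_counts:
            return "defer_to_human"   # no demonstration to imitate
        return action_counts[state].most_common(1)[0][0]

    print(imitate("pedestrian_crossing"))  # -> stop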

Another approach may be thinking in architectural terms, and determining which architectural compositions may give you more control - such as linked competencies, a unitary system, integrated competencies, etc. Part of that involves figuring out what the different modules in your flowchart will do, and how information is passed between them.

Watch a video of the full keynote below:

If you happen to be in the Ann Arbor area today, there’s a free Penny Stamps lecture at 5:10 PM at the Michigan Theater featuring Dr. Guruduth Banavar, VP of Cognitive Computing at IBM Research. His research is focused on building cognitive systems to create new partnerships between people and machines - learn more here.