AI Pioneer Concerned About Direction of Intelligent Computers
Ashley Allen / 1 year ago
Stuart Russell, Professor of Computer Science at the University of California, Berkeley, founder of its Center for Intelligent Systems, and a pioneer in the field of artificial intelligence, drafted a letter to the AI community asking it not to pursue ever more powerful intelligent computers at the expense of human values. Russell wrote, “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial. Our AI systems must do what we want them to do.” The letter has thousands of signatories, among them researchers from Facebook, Google, and Microsoft. The 37th signatory, tech entrepreneur Elon Musk, later funded an initiative to “[keep] artificial intelligence beneficial”.
Russell worries that AI research has become too focused on power and speed, while neglecting what he sees as its true purpose: to serve humanity. He cites the DQN system, an intelligent computer that, with no external input, became better at playing Atari games than humans within a matter of hours. “If your newborn baby did that you would think it was possessed,” Russell said.
Russell argues that, instead of asking AIs to beat high scores and perform tasks ever faster, research should focus on having them learn human behaviour and how to complement it. “For example, your domestic robot sees you crawl out of bed in the morning and grind up some brown round things in a very noisy machine and do some complicated thing with steam and hot water and milk and so on, and then you seem to be happy,” he said. “It should learn that part of the human value function in the morning is having some coffee.”
The solution, Russell feels, is to switch from teaching artificial intelligence simple rationality to “hierarchical decision making,” or “abstract representations of actions,” in order to prepare it in advance for potential human needs. “There are some games where DQN just doesn’t get it, and the games that are difficult are the ones that require thinking many, many steps ahead in the primitive representations of actions,” Russell argues. “Ones where a person would think, ‘Oh, what I need to do now is unlock the door,’ and unlocking the door involves fetching the key, etcetera. If the machine doesn’t have the representation ‘unlock the door’ then it can’t really ever make progress on that task.”
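Russell's point about abstract representations can be illustrated with a toy sketch: an abstract action such as "unlock the door" expands into a sequence of primitive steps, so an agent reasoning at the abstract level makes far fewer decisions. The action names and library here are invented for illustration and are not from DQN or any real system.

```python
# Toy sketch of hierarchical decision making: abstract actions expand
# recursively into primitive steps. All names are illustrative.

def expand(action, library):
    """Recursively expand an abstract action into its primitive steps."""
    if action not in library:          # primitive action: emit as-is
        return [action]
    steps = []
    for sub in library[action]:
        steps.extend(expand(sub, library))
    return steps

# Hypothetical action library: 'unlock the door' decomposes into
# fetching the key and turning it in the lock.
library = {
    "unlock the door": ["fetch the key", "insert key", "turn key"],
    "fetch the key":   ["walk to table", "pick up key"],
}

print(expand("unlock the door", library))
# ['walk to table', 'pick up key', 'insert key', 'turn key']
```

An agent that plans over the single abstract step "unlock the door" looks one move ahead; one planning only over the four primitives must think many steps ahead, which is exactly the regime Russell says DQN struggles with.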
Ultimately, Russell wants to give AI a purpose – one that aligns with human values. “You build a system that’s extremely good at optimizing some utility function, but the utility function isn’t quite right,” he says. “In [Oxford philosopher] Nick Bostrom’s book [Superintelligence], he has this example of paperclips. You say, ‘Make some paperclips.’ And it turns the entire planet into a vast junkyard of paperclips. You build a super-optimizer; what utility function do you give it? Because it’s going to do it.”
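The paperclip example reduces to a simple mechanism: an optimizer faithfully maximizes whatever utility function it is given, even when that function omits things we care about. The following toy sketch, with invented plan data, makes that concrete.

```python
# Toy illustration of a misaligned utility function: the optimizer
# picks whichever plan scores highest, and nothing more. The plans
# and penalty value are invented for this sketch.

def best_plan(plans, utility):
    """Return the plan the optimizer prefers under the given utility."""
    return max(plans, key=utility)

# Each hypothetical plan: paperclips produced, and a side effect.
plans = [
    {"clips": 10,     "planet_destroyed": False},
    {"clips": 10**9,  "planet_destroyed": True},
]

# A utility function that only counts paperclips picks the
# catastrophic plan, because nothing penalizes the side effect.
naive = lambda p: p["clips"]
print(best_plan(plans, naive))   # the planet-destroying plan wins

# Adding the missing term to the utility function changes the choice.
safer = lambda p: p["clips"] - (10**12 if p["planet_destroyed"] else 0)
print(best_plan(plans, safer))   # the modest, safe plan wins
```

The point is not the arithmetic but the asymmetry: the optimizer never questions the objective, so any value left out of the utility function is a value the system will trade away.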
Thank you, Quanta Magazine, for providing us with this information.