February 09, 2018

Morality algorithm lets machines cooperate and compromise better than humans

Using a new type of game-playing algorithm, machines can deploy traits like cooperation and compromise better than humans(Credit: VLADGRIN/Depositphotos)
Over the past year, it's become pretty clear that machines can now beat us in many straightforward zero-sum games. A new study from an international team of computer scientists set out to develop a new type of game-playing algorithm – one that can play games that rely on traits like cooperation and compromise – and the researchers found that machines can already deploy those characteristics better than humans.
Chess, Go and poker are all adversarial games in which two or more players are in conflict with each other. Games like these offer clear milestones for gauging the progress of AI development, allowing humans to be pitted against computers with a tangible winner. But many of the real-world scenarios AI will ultimately operate in require more complex, cooperative, long-term relationships between humans and machines.
"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," says lead author on the new study Jacob Crandall. "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."
The team created an algorithm called S# and tested its performance across a variety of two-player games, either in machine-machine, human-machine or human-human interactions. The games selected, including Prisoner's Dilemma and Shapley's Game, all required different levels of cooperation or compromise for a player to achieve a high payoff. The results were fascinating, showing that in most cases the machine outperformed humans in the games.
"Two humans, if they were honest with each other and loyal, would have done as well as two machines," says Crandall. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."
An interesting technique incorporated into the algorithm was the machine's ability to engage in what the researchers called "cheap talk." These were phrases the machine deployed either in response to a cooperative gesture ("Sweet. We are getting rich!") or as a reaction to another participant lying or cheating ("You will pay for that!"). When the machines deployed cheap talk, the human players were generally unable to tell they were playing against a machine, and in most cases the comments doubled the amount of cooperation.
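The article doesn't describe S#'s internals, but the experimental setup is easy to sketch. Below is a minimal, hypothetical stand-in: an iterated Prisoner's Dilemma agent that cooperates by default, mirrors its opponent, and emits the scripted cheap-talk phrases quoted above. The payoff values and the tit-for-tat policy are illustrative assumptions, not the S# algorithm itself.

```python
# A hypothetical stand-in for the study's setup: an iterated Prisoner's
# Dilemma agent that cooperates first, mirrors its opponent (tit-for-tat),
# and emits the scripted "cheap talk" phrases quoted above. The payoff
# values and the policy are illustrative assumptions, not S# itself.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperated, they defected
    ("D", "C"): (5, 0),  # I defected, they cooperated
    ("D", "D"): (1, 1),  # mutual defection
}

def next_move(their_last):
    """Tit-for-tat: cooperate on the first round, then mirror the opponent."""
    return "C" if their_last in (None, "C") else "D"

def cheap_talk(my_move, their_move):
    """Scripted messages in the spirit of the article's examples."""
    if my_move == "C" and their_move == "C":
        return "Sweet. We are getting rich!"
    if my_move == "C" and their_move == "D":
        return "You will pay for that!"
    return ""

# Demo: three rounds against an opponent that always defects.
their_last = None
for _ in range(3):
    mine, theirs = next_move(their_last), "D"
    print(mine, theirs, PAYOFFS[(mine, theirs)], cheap_talk(mine, theirs))
    their_last = theirs
```

Run against an always-defecting opponent, the punishment message fires once and the agent then settles into mutual defection – the "maintain cooperation once it emerges" behavior only pays off against a partner willing to cooperate.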
The researchers suggest these findings could lay the foundation for better autonomous machines in the future as technologies like driverless cars require machines to interact with both humans and other machines that often don't share the same goals.
"In society, relationships break down all the time," says Crandall. "People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."
The study was published in the journal Nature Communications.
February 08, 2018

Artificial synapses fill the gaps for brainier computer chips

An MIT team has developed a new kind of artificial synapse, enabling more brain-like computer chips(Credit: agsandrew/Depositphotos)
Right now, you're carrying around the most powerful computer in existence – the human brain. This naturally super-efficient machine is far better than anything humans have ever built, so it's not surprising that scientists are trying to reverse-engineer it. Rather than binary bits of information, neuromorphic computers are built with networks of artificial neurons, and now an MIT team has developed a more lifelike synapse to better connect those neurons.
For simplicity's sake, computers process and store information in a binary manner – everything can be broken down into a series of ones and zeroes. This system has served us well for the better part of a century, but having access to a whole new world of analog "grey areas" in between could really give computing power a shot in the arm.
The brain is a perfect model for those kinds of systems. While we've barely scratched the surface of exactly how it works, what we do know is that the brain deals with both analog and digital signals, processes and stores information in the same regions, and performs many operations in parallel. This is thanks to around 100 billion neurons dynamically communicating with each other via some 100 trillion synapses.
While neural networks mimic human thinking on the software side, neuromorphic chips are much more brain-like in their hardware design. Their architecture is made up of artificial neurons that process data and communicate with each other through artificial synapses. IBM's TrueNorth is one of the most powerful neuromorphic systems built so far, and Intel recently unveiled a more modest, research-focused chip it calls Loihi.
In conventional neuromorphic chips, synapses are made of amorphous materials wedged in between the conductive layers of neighboring neurons. Ions flow through this material when a voltage is applied, transferring data between neurons. The problem is they can be unpredictable, with defects in the switching medium sending ions wandering off in different directions.
 "Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way," says Jeehwan Kim, lead researcher on the project. "But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it's hard to control. That's the biggest problem – nonuniformity of the artificial synapse."
To combat the problem, the MIT researchers designed a new medium for an artificial synapse. They started with a wafer of single-crystalline silicon, then grew a layer of silicon germanium over the top. Both materials have a lattice-like pattern, but silicon germanium's pattern is slightly larger, so when the two overlap it forms a kind of funnel shape, keeping ions on the straight and narrow.
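To get a feel for the scale of that mismatch, here's a back-of-the-envelope calculation using Vegard's law, the standard linear interpolation between the pure-element lattice constants. The germanium fractions below are hypothetical; the article doesn't give the paper's exact alloy composition.

```python
# Back-of-the-envelope scale of the lattice mismatch between silicon and a
# silicon-germanium alloy, using Vegard's law (a standard linear
# interpolation between the pure elements). The germanium fractions below
# are hypothetical; the article doesn't give the paper's exact composition.

A_SI = 5.431  # silicon lattice constant, in angstroms
A_GE = 5.658  # germanium lattice constant, in angstroms

def sige_lattice_constant(x):
    """Vegard's law: a(Si_1-x Ge_x) ~ (1 - x) * a_Si + x * a_Ge."""
    return (1 - x) * A_SI + x * A_GE

for x in (0.1, 0.3, 0.5):
    a = sige_lattice_constant(x)
    mismatch = 100 * (a - A_SI) / A_SI
    print(f"Ge fraction {x:.1f}: a = {a:.3f} A, mismatch vs Si = {mismatch:.2f}%")
```

Even at a 50 percent germanium fraction the mismatch is only around 2 percent – small enough for the two lattices to overlap coherently, which is what creates the funnel-like channel described above.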
The team built a neuromorphic chip using this technique, with silicon germanium synapses measuring about 25 nanometers wide. Applying a voltage across them, the researchers found only about a 4 percent variation in the current passing from synapse to synapse. An individual synapse, tested over 700 cycles, also kept a consistent current, with a variation of just 1 percent.
"This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks," says Kim.
The scientists then put the chip through its paces in a simulated test. They used an artificial neural network that functioned as though it were made up of three layers of neurons connected by two layers of the synapses. They fed in tens of thousands of handwriting samples, and found that the system could then recognize 95 percent of new samples it was given. That's not far off the 97 percent accuracy that more established systems can achieve.
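For readers who want to see what that experiment looks like in software, here is a minimal analog: three layers of "neurons" joined by two weight matrices (the "synapses"), trained to recognize handwritten digits. It uses scikit-learn's small digits dataset as a stand-in for the paper's handwriting corpus; the layer sizes are illustrative, not the paper's.

```python
# A software analog of the simulated test: three layers of "neurons" joined
# by two weight matrices (the "synapses"), trained on handwritten digits.
# scikit-learn's small digits dataset stands in for the paper's handwriting
# corpus; the layer sizes here are illustrative, not the paper's.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer gives input/hidden/output neurons and two weight layers.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"Recognition accuracy on unseen samples: {net.score(X_test, y_test):.0%}")
```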
Next, the team plans to develop a physical neuromorphic chip that can handle this task in the real world.
"Ultimately we want a chip as big as a fingernail to replace one big supercomputer," says Kim. "This opens a stepping stone to produce real artificial hardware."
The research was published in the journal Nature Materials.
Source: MIT, NewAtlas


December 28, 2017

2017: The year AI beat us at all our own games


AI beat us at a bunch of games in 2017 and it looks like it's something we'll have to get used to(Credit: phonlamai/Depositphotos)

For much of the 20th century, the game of chess served as a benchmark for artificial intelligence researchers. John McCarthy, who coined the term "artificial intelligence" in the mid-1950s, once referred to chess as the "Drosophila of AI," a reference to how significant early research on the fruit fly was to the field of genetics. In the late 1990s, IBM's Deep Blue embarked upon a series of chess games against Garry Kasparov, the world champion. In 1997, Deep Blue ultimately beat Kasparov, marking the first time a machine had defeated a world champion in match play. By the early-to-mid 2000s, the technology had improved to the point where machines were consistently beating chess grandmasters in almost every game-playing context.
Naturally, AI developers moved on to other, more complex games to test their increasingly sophisticated algorithms. Over the past 12 months, AI crossed a series of new thresholds, finally beating human players in a variety of games, from the ancient board game Go to the bluff-heavy card game Texas Hold'em poker.
Going, going, gone
In the late 1990s, after a machine finally and definitively beat a chess grandmaster, an astrophysicist from Princeton remarked, "It may be a hundred years before a computer beats humans at Go – maybe even longer."
Taking up the challenge, computer scientists turned their attention to this ancient Chinese strategy game, which is both deceptively simple to play, yet extraordinarily complex to master.
It has only been in the last decade that machine learning developments have created truly competitive AI Go players. In 2014, Google's DeepMind division commenced work on a deep learning neural network called AlphaGo. After a couple of years of semi-successful Go challenges, the development team tried something different.
At the very end of 2016, a mysterious online Go player named "Master" appeared on the popular Asian game server Tygem. Over the next few days this player dominated games against many world champions on the system. By the 4th of January, the jig was up and it was officially confirmed that "Master" was in fact the latest iteration of DeepMind's AI AlphaGo.
In May of 2017, AlphaGo "Master" took on Ke Jie, the world's highest-ranked Go player. Over three games, the machine comprehensively dominated the world champion, but perhaps most startling was the revelation in October that Google had already produced a more sophisticated iteration of AlphaGo that was even better than "Master."
AlphaGo Zero, revealed in a paper in the journal Nature, was a revolutionary algorithm designed to learn entirely from self-play. The system simply plays against itself, over and over, and learns how to master whatever game it has been programmed to work with. After 21 days of learning, AlphaGo Zero had reached the level of "Master," and by day 40 it had exceeded the skill level of every prior version.
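The "learn entirely from self-play" idea can be shown on a toy scale. The sketch below shrinks it from AlphaGo Zero's deep networks and Monte Carlo tree search down to tabular learning on a tiny game of Nim – take 1 or 2 stones from a pile, whoever takes the last stone wins. The game, learning rate, and exploration rate are all illustrative stand-ins; only the self-play training loop is the point.

```python
# A toy illustration of learning "entirely from self-play," shrunk from
# AlphaGo Zero's deep networks and tree search down to tabular learning on
# a tiny game: Nim with one pile, take 1 or 2 stones, whoever takes the
# last stone wins. Only the self-play training loop is the point here.

import random

Q = {}  # (stones_remaining, action) -> estimated value for the mover
ALPHA, EPSILON = 0.2, 0.1

def choose(stones, greedy=False):
    actions = [a for a in (1, 2) if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(actions)  # occasional exploration
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def self_play(episodes=20000, start=10):
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            action = choose(stones)
            history.append((stones, action))
            stones -= action
        reward = 1.0  # the player who took the last stone won
        for state_action in reversed(history):
            q = Q.get(state_action, 0.0)
            Q[state_action] = q + ALPHA * (reward - q)
            reward = -reward  # alternate: the other player made this move

self_play()
print("Best move with 4 stones left:", choose(4, greedy=True))  # expect 1
```

With no expert games to learn from, the agent still discovers the optimal play (leave the opponent a multiple of three stones) purely by playing against itself.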
By December 2017, DeepMind had revealed an even newer version of the system. Called AlphaZero, this new AI could master a variety of games in just hours. After merely eight hours of self-training, the system could not only beat prior versions of AlphaGo Zero, but could also become a chess grandmaster and a shogi champion.
Mastering the bluff
While Go offers a game rich in complexity, mastering poker has proved an entirely different proposition for AI. To win big at poker, one needs to master the art of deception: bluffing, and recognizing when you are being bluffed, are key abilities in this infamous card game.
After more than a decade of attempts, 2017 saw two separate studies reveal AI systems finally beating big-time poker professionals. Researchers at the University of Alberta unveiled DeepStack, an AI system that could comprehensively dominate human poker players using an artificially intelligent form of "intuition."


A team from Carnegie Mellon University put on a more public spectacle in January of 2017 when its Libratus AI system spent 20 days playing 120,000 hands of No Limit Texas Hold'em against four poker professionals. While the pros spent every evening of the challenge discussing amongst themselves what weaknesses they could exploit in the AI, the machine also improved itself every day, patching holes in its gameplay and improving its strategy.
The human brains were no match for the machine, and after nearly a month of full-time gameplay Libratus was up by US$1.7 million, with every one of the four professionals having lost thousands of fictional dollars. Halfway through the bruising competition, one of the losing professionals told Wired, "I felt like I was playing against someone who was cheating, like it could see my cards. I'm not accusing it of cheating. It was just that good."
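Both DeepStack and Libratus build on a family of techniques called counterfactual regret minimization. Its core primitive, regret matching, fits in a few lines and is sketched below on rock-paper-scissors – a toy stand-in for poker, but the same self-play mechanism is what produces balanced bluffing frequencies at scale.

```python
# DeepStack and Libratus both build on counterfactual regret minimization.
# Its core primitive, regret matching, is sketched here on rock-paper-
# scissors: two self-playing agents whose average strategies drift toward
# the equilibrium (1/3, 1/3, 1/3). Poker is vastly larger, but the same
# idea underlies learning balanced bluffing frequencies.

import random

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # rock, paper, scissors

def strategy(regrets):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]
strategy_sums = [[0.0] * 3, [0.0] * 3]
for _ in range(100000):
    strats = [strategy(r) for r in regrets]
    moves = [random.choices(range(3), weights=s)[0] for s in strats]
    for p in (0, 1):
        me, opp = moves[p], moves[1 - p]
        for a in range(3):  # regret: how much better action a would have done
            regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
        for a in range(3):
            strategy_sums[p][a] += strats[p][a]

total = sum(strategy_sums[0])
print([round(s / total, 3) for s in strategy_sums[0]])  # ~[0.333, 0.333, 0.333]
```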
Elon Musk's AI experiment
In 2015, Elon Musk and a small group of investors founded OpenAI. The venture was designed to explore the development of artificial intelligence systems, with a particular interest in reinforcement learning – systems in which a machine teaches itself how to improve at a particular task.
In August 2017, the OpenAI team set its sights on conquering Dota 2, the central game in a giant eSports tournament called The International. Dota 2 is an extremely popular, and complicated, multiplayer online battle arena game and is serious business in the world of competitive gaming.
The International is a giant eSports competition with millions of dollars in prize money (Credit: zilsonzxc...
After just two weeks of learning, the OpenAI bot was unleashed on the tournament and subsequently beat several of the world's top players. The AI system was only trained on a simpler, one-on-one version of the game, but the OpenAI team is now working on teaching the system how to play "team" games of five-on-five.
Divide and conquer - The Pac-Man challenge
A couple of years ago, Google DeepMind set its AI loose on 49 Atari 2600 games. Provided with the same inputs as any human player, the AI figured out how to play, and win, many of the games. Some games proved harder than others for the AI to master, though, and the classic but notoriously difficult 1980s video game Ms Pac-Man was especially challenging.
In 2017, the deep learning startup Maluuba was acquired by Microsoft. Maluuba's novel machine learning method is called Hybrid Reward Architecture (HRA). Applying this method to Ms Pac-Man, the system created more than 150 individual agents, each tasked with a specific goal – such as finding a specific pellet, or avoiding ghosts.
The "Hybrid Reward Architecture" (HRA) system learned to master the notoriously difficult video game Ms Pac-Man
The HRA method generates a top agent, something akin to a senior manager. This top agent evaluates all the suggestions from the lower agents before making the final decision on each individual move. The approach has been dubbed "divide-and-conquer": a complex task is broken up into smaller parts.
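A minimal sketch of that divide-and-conquer structure appears below: each low-level agent scores every candidate move against its own narrow goal, and the top agent aggregates the scores to pick one move. The two agents, their scoring rules, and plain summation as the aggregation rule are illustrative assumptions, not Maluuba's exact design.

```python
# A minimal sketch of the divide-and-conquer idea described above: each
# low-level agent scores every candidate move against its own narrow goal,
# and a top agent aggregates those scores to pick the move. The two agents,
# their scoring rules, and plain summation as the aggregation rule are
# illustrative assumptions, not Maluuba's exact design.

ACTIONS = ("up", "down", "left", "right")

def pellet_agent(state):
    """Prefers moves that shrink the distance to this agent's pellet."""
    return {a: -d for a, d in state["pellet_distance"].items()}

def ghost_agent(state):
    """Prefers moves that keep the nearest ghost far away."""
    return {a: d for a, d in state["ghost_distance"].items()}

def top_agent(state, agents):
    """Sum every agent's per-move score and take the best move."""
    totals = {a: sum(agent(state)[a] for agent in agents) for a in ACTIONS}
    return max(totals, key=totals.get)

state = {
    "pellet_distance": {"up": 2, "down": 5, "left": 4, "right": 1},
    "ghost_distance": {"up": 1, "down": 6, "left": 3, "right": 1},
}
print(top_agent(state, [pellet_agent, ghost_agent]))  # -> "down"
```

Note how the ghost agent's veto power emerges naturally: "right" has the closest pellet, but the nearby ghost drags its combined score below "down."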
After applying the method to Ms Pac-Man, the AI quickly figured out how to achieve the maximum score of 999,990, something no human or AI had managed before.
AI will soon make the games
What is the next logical step if AI can beat us in almost every game out there?
A researcher from Falmouth University recently revealed a machine learning algorithm that he claims can dream up its own games from scratch for us to play. Called Angelina, the AI system is improving from day to day, and can currently make games using datasets it scrapes from sources ranging from Wikimedia Commons to online newspapers and social media.
So what does all this mean?
Perhaps the most significant, and potentially frightening, development of 2017 has been the dramatic progress of reinforcement learning systems. These programs can efficiently teach themselves how to master new skills. The most recent AlphaZero iteration, for example, can achieve superhuman skills at some games after just a few days of self-directed learning.
A large survey of more than 350 AI researchers suggests it won't be too long before AI can beat us at pretty much everything. The survey predicted that within 10 years AI will drive better than us, that by 2049 it will be able to write a best-selling novel, and that by 2053 it will perform surgery better than humans. In fact, the survey concluded that there is a 50 percent chance that by 2060 AI will essentially be able to do everything we can do, but better.
2017 has undoubtedly been a milestone year in AI beating humans at increasingly complex games, and while this may seem like a trivial achievement, the implications are huge. Many of these AI-development companies are already turning their attention to real-world challenges. Google DeepMind has moved the AlphaGo Zero system away from the game and onto a comprehensive study of protein folding, in the hopes of revealing a treatment for diseases such as Alzheimer's and Parkinson's.
"Ultimately we want to harness algorithmic breakthroughs like this to help solve all sorts of pressing real world problems," says Demis Hassabis, co-founder and CEO of DeepMind. "If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to drive forward human understanding and positively impact all of our lives."
Source: NewAtlas