
Can computers learn to lie in imperfect-information games?

Released by @dmtomas on 18 Jan, 19:52
Computers are normally seen as perfect machines that follow fixed rules to obtain the best "secure" return. But what happens when the information available to them makes a perfect strategy impossible? Will machines learn to lie to compensate for the lack of information?
I trained two different AIs as observers, one using a Q-learning algorithm and one using a neural network, to play a simple min-max game in which players have the option to lie, at the cost of an associated risk of being caught.
The two agents behaved quite differently: the Q-learning algorithm lied in roughly 30% of matches, while the neural network lied in fewer than 3%.
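
For context, here is a minimal sketch of how a tabular Q-learning agent can end up learning to bluff in a one-shot game where lying carries a risk of detection. The game, payoffs, and every name in it (`payoff`, `CATCH_PROB`, and so on) are illustrative assumptions, not the actual implementation behind the results above.

```python
# Minimal sketch: tabular Q-learning on a one-shot bluffing game.
# All names and payoff values are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["truth", "lie"]
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate
CATCH_PROB = 0.4            # chance a lie is detected
LIE_PENALTY, LIE_BONUS = -2.0, 2.0

Q = defaultdict(float)      # Q[(hand, action)] -> value estimate

def payoff(hand, action):
    """Reward for a single match: honest play earns the hand's face
    value; a lie claims a strong hand but risks a penalty if caught."""
    if action == "truth":
        return float(hand)
    return LIE_PENALTY if random.random() < CATCH_PROB else LIE_BONUS

def choose(hand):
    """Epsilon-greedy action selection over the two actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(hand, a)])

for _ in range(50_000):
    hand = random.choice([0, 1])   # private information: weak or strong hand
    action = choose(hand)
    r = payoff(hand, action)
    # One-shot game, so the update has no bootstrapped next-state term.
    Q[(hand, action)] += ALPHA * (r - Q[(hand, action)])

for hand in (0, 1):
    best = max(ACTIONS, key=lambda a: Q[(hand, a)])
    print(f"hand={hand}: truth={Q[(hand, 'truth')]:.2f} "
          f"lie={Q[(hand, 'lie')]:.2f} -> prefers {best}")
```

With these assumed payoffs, the sketch's agent learns to bluff only when its hand is weak: the expected value of lying (0.6 × 2.0 − 0.4 × 2.0 = 0.4) beats the honest payoff of a weak hand (0) but not that of a strong one (1), so lying emerges purely from the reward structure rather than from any explicit instruction to deceive.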