We propose to employ optimistic initialization (OI) to encourage exploration in 2048, and empirically show that it significantly improves learning quality.
This approach optimistically initializes the feature weights to very large values.
Since the weights of visited states tend to decrease during learning, the agent is driven to explore states that are unvisited or rarely visited.
As a result, the network size required to achieve the same performance is significantly reduced.
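The exploration effect can be sketched with a toy TD(0) update over an optimistically initialized value table (a simple stand-in for an n-tuple network's feature weights; the constants, state names, and helper function here are illustrative assumptions, not the actual design):

```python
from collections import defaultdict

# Hypothetical sketch of optimistic initialization (OI) for TD learning.
# Every unseen state starts with a large, optimistic value; the numbers
# below are illustrative, not the paper's settings.
OPTIMISTIC_INIT = 320_000.0  # deliberately larger than any realistic return
ALPHA = 0.1                  # learning rate

V = defaultdict(lambda: OPTIMISTIC_INIT)

def td_update(state, reward, next_state, terminal):
    """One TD(0) step: V(s) <- V(s) + alpha * (r + V(s') - V(s))."""
    target = reward + (0.0 if terminal else V[next_state])
    V[state] += ALPHA * (target - V[state])

# Updating toward a realistic return pulls a visited state's inflated
# value down, so a greedy agent keeps preferring still-optimistic
# (unvisited) states -- the exploration effect described above.
td_update("s0", 4.0, "end", terminal=True)
```

After the update, `V["s0"]` drops well below `OPTIMISTIC_INIT`, while any state never updated keeps its optimistic value.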
With additional enhancements such as expectimax search, multistage learning, and the tile-downgrading technique, our design achieves state-of-the-art performance, namely an average score of 625,377 and a rate of 72% of games reaching 32768-tiles.
In addition, in sufficiently long test runs, 65536-tiles are reached at a rate of 0.02%.