I wrote a script to download hlt files from winning games. I then tried to train a neural net to predict the next move of a particular piece, but I haven't had much success. The best I got with a top-ranked player was accuracy in the mid-80s, but that's only because most of the moves are STILL. On the non-STILL moves, the best I got was around 40%. And when I implement the strategy in actual games, it doesn't play intelligently, which makes me doubt it learned anything useful.
I believe it can be done, but only with a meaningful representation of the board. Here is what I tried:
- Since board sizes vary, I cropped to a fixed NxN window (e.g. 10x10, though I tried other sizes, including larger windows that wrap around the board edges)
- I put the owners, productions and strengths side by side so each frame was really Nx3N
- I predicted the move of one piece at a time, so I centered the window on that piece
- I standardized the owners by using -1 for squares I controlled, 0 for unoccupied squares and 1 for enemy-owned squares
- I scaled the productions and strengths separately (tried min-max scaling and StandardScaler)
- I used scikit-learn's MLPClassifier with various parameters and layer sizes; none performed significantly better than the others
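For anyone wanting to try something similar, here's a rough sketch of the frame-building steps above (wrap-around crop centered on the piece, -1/0/1 owner encoding, min-max scaled production and strength, three channels side by side). All the function and parameter names are mine, not from my actual code:

```python
import numpy as np

def make_frame(owners, productions, strengths, my_id, cy, cx, n=10):
    """Build one N x 3N training frame centered on the piece at (cy, cx).

    owners/productions/strengths are full-board 2D arrays (any size);
    the crop wraps around the edges, since Halite maps are toroidal.
    """
    h, w = owners.shape
    half = n // 2
    # wrap-around crop centered on the piece
    rows = np.arange(cy - half, cy - half + n) % h
    cols = np.arange(cx - half, cx - half + n) % w
    own = owners[np.ix_(rows, cols)]
    prod = productions[np.ix_(rows, cols)].astype(float)
    stg = strengths[np.ix_(rows, cols)].astype(float)

    # owner encoding: -1 = mine, 0 = neutral, 1 = enemy
    own_std = np.where(own == my_id, -1, np.where(own == 0, 0, 1))

    # min-max scale production (0-10 in Halite) and strength (0-255)
    prod /= 10.0
    stg /= 255.0

    # place the three channels side by side -> N x 3N
    return np.hstack([own_std, prod, stg])
```

The piece itself always lands at frame position (N//2, N//2), so the net sees every example from the same reference point.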
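One thing I'd suggest trying, given that the mid-80s score is mostly STILL moves: downsample the STILL examples before fitting, so the reported accuracy isn't inflated by the majority class. A minimal sketch (the data here is random stand-in data, and the layer sizes are just an example, not something I found to work best):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: in practice X holds the flattened N x 3N frames from the
# parsed hlt replays, and y the recorded move (0 = STILL, 1-4 = N/E/S/W).
X = rng.random((1000, 300))
y = rng.choice(5, size=1000, p=[0.8, 0.05, 0.05, 0.05, 0.05])  # STILL-heavy

# Keep only as many STILL examples as there are non-STILL moves
still = np.flatnonzero(y == 0)
moves = np.flatnonzero(y != 0)
keep = rng.choice(still, size=len(moves), replace=False)
idx = np.concatenate([keep, moves])

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=50, random_state=0)
clf.fit(X[idx], y[idx])
```

With a balanced training set, plain accuracy on a held-out set becomes a more honest measure of whether the net learned anything about actual movement.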
Perhaps an RNN would be better suited, since many players' strategies involve sequences of moves. Has anybody tried this, or have any ideas?