I'm trying to implement Q-learning for tic-tac-toe. One of the steps involves enumerating all possible states of the board to build the state-value table. I've written a routine that recursively generates every reachable state starting from the empty board; in doing so it implicitly performs a preorder traversal of the search-space tree. At the end, however, I only get 707 unique states, whereas the general consensus is that the number of legal states is around 5,000.
Note: I'm talking about the number of legal states. I'm aware that the count is close to 19,000 if either player is allowed to keep playing after the game has ended (which is what I mean by illegal states).
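To pin down what I mean by a legal state, here is a small self-contained sketch of the enumeration I think should be the target: every position reachable from the empty board when play stops as soon as one side gets three in a row or the board fills up. (The names three_in_a_row and count_legal_states are just for this example, not from my actual code.)

def three_in_a_row(board):
    # board is a flat 9-tuple: 0 = empty, 1 = X, -1 = O
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def count_legal_states():
    seen = set()
    def expand(board, player):
        if board in seen:        # already counted via another move order
            return
        seen.add(board)
        # terminal positions (win or full board) are counted but not expanded
        if three_in_a_row(board) != 0 or 0 not in board:
            return
        for i in range(9):
            if board[i] == 0:
                expand(board[:i] + (player,) + board[i + 1:], -player)
    expand((0,) * 9, 1)          # player 1 (X) moves first
    return len(seen)

print(count_legal_states())      # I expect this to print 5478

That 5478 is in line with the roughly-5,000 figure that seems to be the consensus, which is why the 707 I'm getting looks so far off.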
Code:
def generate_state_value_table(self, state, turn):
    winner = int(is_game_over(state))  #check if, for the current turn and state, the game has finished and if so who won
    #print "\nWinner is ", winner
    #print "\nBoard at turn: ", turn
    #print_board(state)
    self.add_state(state, winner/2 + 0.5)  #add the current state with the appropriate value to the state table
    open_cells = open_spots(state)  #find the index (from 0 to total no. of cells) of all the empty cells in the board
    #check if there are any empty cells in the board
    if len(open_cells) > 0:
        for cell in open_cells:
            #pdb.set_trace()
            row, col = cell / len(state), cell % len(state)
            new_state = deepcopy(state)  #make a copy of the current state
            #check which player's turn it is
            if turn % 2 == 0:
                new_state[row][col] = 1
            else:
                new_state[row][col] = -1
            #using a try block because recursive depth may be exceeded
            try:
                #check if the new state has not been generated somewhere else in the search tree
                if not self.check_duplicates(new_state):
                    self.generate_state_value_table(new_state, turn+1)
                else:
                    return
            except:
                #print "Recursive depth exceeded"
                exit()
    else:
        return
You can see the full code here if needed.
Edit: I've tidied up the code, both at the link and here, and added more comments to make things clearer. Hope that helps.
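In case it helps to run the snippet without following the link, these are simplified stand-ins for the helpers it calls, reduced to what the snippet actually relies on (StateTable is just a placeholder name for the class that generate_state_value_table lives in; the real implementations are in the linked code). Note that cell / len(state) relies on Python 2 integer division, as the commented-out print statements also suggest.

from copy import deepcopy   # generate_state_value_table above also needs this

def is_game_over(state):
    # return 1 or -1 if that player has three in a row on the 3x3 board, else 0
    n = len(state)
    lines = [row[:] for row in state]                               # rows
    lines += [[state[r][c] for r in range(n)] for c in range(n)]    # columns
    lines.append([state[i][i] for i in range(n)])                   # main diagonal
    lines.append([state[i][n - 1 - i] for i in range(n)])           # anti-diagonal
    for line in lines:
        if line[0] != 0 and all(v == line[0] for v in line):
            return line[0]
    return 0

def open_spots(state):
    # flat indices (0..8) of the empty cells, scanned row by row
    n = len(state)
    return [r * n + c for r in range(n) for c in range(n) if state[r][c] == 0]

def print_board(state):
    for row in state:
        print(row)

class StateTable(object):
    # holds the state-value table that generate_state_value_table fills in
    def __init__(self):
        self.values = {}

    def _key(self, state):
        return tuple(tuple(row) for row in state)   # nested lists are not hashable

    def add_state(self, state, value):
        self.values[self._key(state)] = value

    def check_duplicates(self, state):
        return self._key(state) in self.values

With generate_state_value_table pasted into StateTable as a method, the whole thing can be run starting from the empty board, [[0, 0, 0], [0, 0, 0], [0, 0, 0]].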