I want to use an MDP (Markov decision process) to solve the gambler's problem.
The gambler's problem: a gambler has the opportunity to bet on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as much money as he staked on that flip; if it comes up tails, he loses his stake. The game ends when the gambler either reaches his goal of κ dollars and wins, or runs out of money and loses. On each flip, the gambler must decide how many (integer) dollars to bet. The probability of heads is p and the probability of tails is 1 - p.
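To make the setup concrete, here is a minimal sketch of the one-step dynamics I have in mind; the `step` helper and its default arguments are only illustrative and not part of my actual code below.

import random

# Sketch of the one-step dynamics (illustrative only): states are 0..kappa,
# a stake of a dollars moves me to s + a with probability p and to s - a
# otherwise, the reward is 1 only on the transition into kappa, and both
# 0 and kappa are terminal.
def step(s, a, kappa=100, p=0.25):
    s_next = s + a if random.uniform(0, 1) < p else s - a
    reward = 1 if s_next == kappa else 0
    done = s_next in (0, kappa)
    return s_next, reward, done

In my actual code below I additionally cap the stake at min(s, kappa - s), so I never bet more than I own or more than I need to reach the goal.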
I implemented a model-free Q-learning method with a completely random base policy. However, the code does not work the way I hoped, and I don't know why. Thanks for any suggestions. :)
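For reference, the tabular Q-learning update I am trying to apply after every flip is the standard one; here is a minimal standalone sketch (in my actual code the max runs over the actions 0..s_next):

import numpy as np

# Standard tabular Q-learning update (sketch): Q is a [state, action] array,
# alpha the learning rate, gamma the discount factor.
def q_update(Q, s, a, r, s_next, alpha, gamma):
    target = r + gamma * np.max(Q[s_next])   # bootstrap with the greedy value in s_next
    Q[s, a] += alpha * (target - Q[s, a])    # move Q(s, a) toward that target

My full code follows.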
import numpy as np
import matplotlib.pyplot as plt
import random
# data
kappa = 100   # goal
p = 0.25      # probability of heads (winning)
eps = 0.1     # epsilon (0.1 or 0.005)
gamma = 0.9   # discount factor
alpha = 0.1   # learning rate (0.1, 1, or 10)
n = 1000      # number of training episodes
#Q-learning with totally random base policy
S = [*range(0,kappa+1)]
A = [*range(0,kappa+1)]
R=np.zeros((kappa+1,kappa+1))
for i in A:
    R[kappa, i] = 1   # reward 1 for any transition into the goal state kappa
Q=np.zeros((kappa+1,kappa+1))
optimal_policy=np.zeros(kappa+1)
for sa in range(1, kappa):          # train starting from every non-terminal state
    i = 0
    while i < n:                    # i counts only episodes that reach the goal
        s = sa
        while True:
            # choose a random action; the maximum stake is the coins I own,
            # capped so I cannot bet past the goal
            seged = min(s, kappa - s)
            a = np.random.randint(low=1, high=seged + 1)
            # take the action, observe the next state
            rand = random.uniform(0, 1)
            if rand < p:            # if I win, I get more coins
                s_next = s + a
            else:                   # if I lose, I lose the stake
                s_next = s - a
            # Q-learning update toward the greedy value of the next state
            Q[s, a] = Q[s, a] + alpha * (
                R[s_next, a]
                + gamma * max(Q[s_next, b] for b in range(0, s_next + 1))
                - Q[s, a]
            )
            if s_next == 0:         # ruined: episode ends
                break
            if s_next == kappa:     # goal reached: episode ends
                i = i + 1
                break
            s = s_next
for s in range(1, kappa + 1):
    optimal_policy[s] = np.argmax(Q[s, :])   # greedy action in each state
Q=np.round(Q,2)
print(Q)
print(optimal_policy)
x = np.array(range(0, kappa+1))
y = optimal_policy
plt.xlabel("Amount available (Current State)")
plt.ylabel('Recommended betting amount')
plt.title("Optimal policy: Random base policy (p=" + str(p)+", \u03B1=" + str(alpha)+")")
plt.scatter(x, y)
plt.show()