
Reinforcement Learning Notes (1): Q-learning, Version 2

The full code is at the end of this post.

This time the chicken-leg (goal) reward was changed to 100 points, and the number of steps taken is subtracted from it. Since the shortest path is 6 steps, 6 is added back, so an optimal run still scores the full 100:

r = 100 - action_cnt + 6
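
A quick check of this shaping (my own illustration, not from the original post): an optimal 6-step run scores 100 - 6 + 6 = 100, and every extra step costs exactly one point.

for steps in (6, 10, 20):
    # reward on reaching the goal after `steps` moves, per the formula above
    print(steps, "steps ->", 100 - steps + 6)   # prints 100, 96, 86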

Here is the result after 10 training episodes.

A strong recommendation along the way: vika, the spreadsheet tool I used to make the table.


Because the number of training episodes is small, there is still little data in the Q-table. Training for 500 episodes instead shows that the returns essentially saturate after about 20 episodes; the fluctuation after that comes from the policy choosing a random action 10% of the time.
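
For reference, the learn() method in the code below implements the standard Q-learning update, written here with the same names the code uses:

q_target = r + gamma * max(Q(s', a') over all a')   # if s' is not terminal
q_target = r                                        # if s' is terminal
Q(s, a) <- Q(s, a) + lr * (q_target - Q(s, a))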

Code:
import torch
import pandas as pd
import numpy as np

class maze_env:
    def __init__(self, row=4, column=4):
        self.done = False
        self.row = row
        self.column = column
        self.maze = torch.zeros(self.row, self.column)
        # the target is the bottom-right cell; the agent starts at the top-left
        self.target_x = row - 1
        self.target_y = column - 1
        self.x = 0
        self.y = 0
        self.maze[self.x][self.y] = 1  # mark the agent's current position


    def show_maze(self):
        print(self.maze)

    def step(self, action, action_cnt):
        # Move one cell; x indexes the row, y the column.
        # action_cnt (steps taken so far) is passed in explicitly so the
        # goal reward can subtract it, instead of reading a global.
        r = 0
        self.maze[self.x][self.y] = 0

        if action == 'up' and self.x >= 1:
            self.x -= 1
        if action == 'down' and self.x <= self.row - 2:
            self.x += 1
        if action == 'left' and self.y >= 1:
            self.y -= 1
        if action == 'right' and self.y <= self.column - 2:
            self.y += 1
        self.maze[self.x][self.y] = 1

        if self.x == self.target_x and self.y == self.target_y:
            self.done = True
            # 100 points for the chicken leg, minus the steps taken,
            # plus 6 because the shortest path is 6 steps
            r = 100 - action_cnt + 6

        return (self.x, self.y), r, self.done

    def reset(self):
        self.done = False
        self.maze = torch.zeros(self.row,self.column)
        self.x = 0
        self.y = 0
        self.maze[self.x][self.y] = 1
        return (0, 0)


class Qlearning:
    def __init__(self, actions, learning_rate=0.1, reward_decay=0.9, e_greedy=0.9):
        self.actions = actions            # list of action labels
        self.lr = learning_rate           # learning rate (alpha)
        self.gamma = reward_decay         # discount factor
        self.epsilon = e_greedy           # probability of acting greedily
        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)


    def choose_action(self, observation):
        self.check_state_exist(observation)
        # 90% of the time pick the action with the highest Q-value,
        # 10% of the time pick a random action (epsilon-greedy)
        if np.random.uniform() < self.epsilon:
            # choose best action
            state_action = self.q_table.loc[observation, :]
            # several actions may share the maximum value; pick one of them at random
            action = np.random.choice(state_action[state_action == np.max(state_action)].index)
        else:
            # choose random action
            action = np.random.choice(self.actions)

        return action

    def check_state_exist(self, state):
        # Add an all-zero row the first time a state is seen.
        # (DataFrame.append was removed in pandas 2.0, so use .loc instead.)
        if state not in self.q_table.index:
            self.q_table.loc[str(state)] = [0.0] * len(self.actions)

            #print(self.q_table)

    def learn(self, s, a, r, s_,done):
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        #print('q_predict',q_predict)
        if not done:
            q_target = r + self.gamma * self.q_table.loc[s_, :].max()  # next state is not terminal
        else:
            q_target = r  # next state is terminal
        self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update
        #print(self.q_table)


maze = maze_env()
agent = Qlearning(['up', 'down', 'left', 'right'])
for i in range(500):
    observation = maze.reset()
    action_cnt = 0
    reward = 0
    while True:
        action = agent.choose_action(str(observation))
        action_cnt = action_cnt + 1
        observation_, r, done = maze.step(action, action_cnt)
        reward += r
        agent.learn(str(observation),action,r,str(observation_),done)
        observation = observation_

        if done:
            # print(agent.q_table)
            print("一共移动了",action_cnt)
            print("本局的总收益为", reward)
            break
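
Not in the original post, but a handy sanity check: after training, you can replay the purely greedy policy from the learned Q-table. idxmax picks the highest-valued action, with none of the 10% exploration.

observation = maze.reset()
path = [observation]
while not maze.done and len(path) < 20:          # cap the steps in case the policy loops
    agent.check_state_exist(str(observation))    # just a guard; states on this path exist after 500 episodes
    action = agent.q_table.loc[str(observation)].idxmax()
    observation, _, _ = maze.step(action, len(path))
    path.append(observation)
print("Greedy path:", path)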