Vol. 12 No. 1    CAAI Transactions on Intelligent Systems    Feb. 2017

DOI: 10.11992/tis.201604008
Online publication: http://kns.cnki.net/kcms/detail/23.1538.TP.20170301.1147.002.html

Reinforcement learning for event-triggered multi-agent systems

ZHANG Wenxu, MA Lei, WANG Xiaodong
(School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, China)

Abstract: To address problems in multi-agent reinforcement learning such as the heavy consumption of communication and computation resources, this paper proposes an event-triggered multi-agent reinforcement learning algorithm, focusing on event triggering at the strategy level of multi-agent learning. During the interaction between the agents and the environment, the algorithm designs a trigger function based on the change rate of each agent's observations, so that communication and learning need not take place in real time or at fixed periods; the number of data transmissions and computations within the same time span is thereby reduced. In addition, the computational resource consumption of the algorithm is analyzed and its convergence is proven. Finally, simulation results show that the algorithm reduces the number of communications and policy traversals during learning, easing the consumption of communication and computing resources.

Keywords: event-triggered; multi-agent; reinforcement learning; decentralized Markov decision processes; convergence

CLC number: TP181    Document code: A    Article ID: 1673-4785(2017)01-0082-06

Citation: ZHANG Wenxu, MA Lei, WANG Xiaodong. Reinforcement learning for event-triggered multi-agent systems[J]. CAAI Transactions on Intelligent Systems, 2017, 12(1): 82-87.

Received: 2016-04-05. Published online: 2017-03-01.
Foundation item: Youth Program of the National Natural Science Foundation of China (61304166).
Corresponding author: ZHANG Wenxu. Email: wenxu_zhang@163.com.
In recent years, event-triggered methods have attracted wide attention in multi-agent research [1-3]. Under the event-triggered idea, an agent updates its state intermittently according to a measurement error, which reduces the number of communications and the amount of computation. Reference [4] was the first to apply an event-triggered strategy to the cooperation of multi-agent systems, designing a state-feedback controller based on an event-triggered mechanism. Subsequently, references [5-7] extended event-triggered controllers to nonlinear systems and complex networks.
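To make the triggering idea concrete, a common measurement-error condition (an illustrative formulation, not necessarily the exact condition used in references [4-7]) holds the state broadcast at the last event instant, x(t_k), and fires the next event only when the error since that instant reaches a threshold:

\[
e(t) = x(t_k) - x(t), \qquad
t_{k+1} = \inf\{\, t > t_k : \lVert e(t) \rVert \ge \delta \,\},
\]

where \(\delta > 0\) is the trigger threshold; between events, no transmission or control update takes place.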
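At the learning-strategy level, the same gating can wrap a reinforcement-learning update. The following is a minimal sketch, assuming a single tabular Q-learning agent whose trigger compares the relative change rate of its observation vector against a fixed threshold; the class, the method names, and the threshold value are hypothetical illustrations, not the paper's implementation:

import numpy as np

class EventTriggeredQLearner:
    """Tabular Q-learning gated by the change rate of the observation
    (illustrative sketch; names and threshold are assumptions)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, delta=0.2):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.delta = alpha, gamma, delta
        self.last_obs = None            # observation at the last event

    def triggered(self, obs):
        """Fire an event when the relative observation change exceeds delta."""
        if self.last_obs is None:
            return True                 # always trigger on the first step
        base = max(np.linalg.norm(self.last_obs), 1e-8)
        return np.linalg.norm(obs - self.last_obs) / base >= self.delta

    def step(self, state, action, reward, next_state, obs):
        """Communicate and update Q only at event instants."""
        obs = np.asarray(obs, dtype=float)
        if not self.triggered(obs):
            return False                # no event: skip transmission and update
        self.last_obs = obs
        # standard Q-learning update, executed only when the trigger fires
        td_target = reward + self.gamma * self.Q[next_state].max()
        self.Q[state, action] += self.alpha * (td_target - self.Q[state, action])
        return True                     # event: one transmission, one update

Between events the agent keeps acting on its current policy, so over a run the number of Q updates (and hence transmissions) is bounded by the number of triggering instants rather than by the number of time steps.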