CAAI Transactions on Intelligent Systems (智能系统学报), Vol. 6, No. 4, Aug. 2011
doi: 10.3969/j.issn.1673-4785.2011.04.005

An adaptive algorithm for designing optimal feed-forward neural network architecture

ZHANG Zhaozhao(1,2), QIAO Junfei(1), YANG Gang(1)
(1. College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124, China; 2. Institute of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China)

Abstract: Most algorithms for designing the structure of a feed-forward neural network adopt a greedy search strategy and are therefore prone to becoming trapped in a locally optimal architecture. To address this problem, an adaptive structure-design algorithm for feed-forward neural networks is proposed. During training, the algorithm follows an adaptive search strategy that merges and splits hidden units so as to arrive at an optimal network architecture. In the merge operation, hidden units whose outputs are linearly correlated are combined according to a mutual-information criterion; in the split operation, a mutation coefficient is introduced to help the search escape locally optimal structures. The weight adjustment that follows each merge or split operation is combined with the network's learning on the training samples, which reduces the number of training passes over the samples, increases the learning speed, and improves generalization performance. Results on non-linear function approximation show that the proposed algorithm achieves smaller testing errors with a compact final network structure.
Keywords: feed-forward neural network; architecture design; adaptive search strategy; mutual information

CLC number: TP273    Document code: A    Article ID: 1673-4785(2011)04-0312-06

Foundation items: National Natural Science Foundation of China (60873043); National "863" Program of China (2009AA04Z155); Beijing Natural Science Foundation (4092010); Doctoral Program Foundation of the Ministry of Education (200800050004). Corresponding author: ZHANG Zhaozhao, E-mail: zzhao123@126.com.

Feed-forward neural networks are among the most widely applied neural networks, and the key to applying them successfully is the design of the network structure. If the network is too small, it underfits; if it is too large, it overfits. Either way, the network's generalization ability degrades, and a network that cannot generalize has no practical value. Because the network structure to a large extent directly determines the network's final performance, structure optimization has long been a fundamental concern in the neural network field.

The main approaches to neural network structure optimization are pruning methods, growing methods, and combined growing-pruning methods. Pruning is a top-down design approach: during training, redundant nodes and connections are deleted so as to simplify the network structure. Growing is a bottom-up design approach, and a growing strategy is easier to formulate and implement than a pruning one; as far as designing a compact
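The merge operation described in the abstract — combining hidden units whose outputs carry redundant information, judged by mutual information — can be sketched as follows. This is a minimal illustration only: the histogram-based mutual-information estimator, the bin count, and the hand-picked threshold are all assumptions, since the paper's exact estimator and threshold are not given in this excerpt.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate I(X;Y) between two hidden-unit output sequences
    via 2-D histogram binning (one common estimator; assumed here,
    not taken from the paper)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def merge_candidates(hidden_outputs, threshold):
    """Return index pairs of hidden units whose outputs share mutual
    information above `threshold` — candidates for merging.
    hidden_outputs: array of shape (n_samples, n_hidden)."""
    n = hidden_outputs.shape[1]
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if mutual_information(hidden_outputs[:, i],
                                  hidden_outputs[:, j]) > threshold:
                pairs.append((i, j))
    return pairs

# Toy check: unit 1 is a near-copy of unit 0, unit 2 is independent noise,
# so only the (0, 1) pair should be flagged for merging.
rng = np.random.default_rng(0)
h0 = rng.standard_normal(2000)
H = np.column_stack([h0,
                     h0 + 0.01 * rng.standard_normal(2000),
                     rng.standard_normal(2000)])
print(merge_candidates(H, threshold=1.0))
```

In a full implementation the flagged pair would be replaced by a single unit whose outgoing weights are the sum of the two originals, followed by the combined retraining step the abstract describes.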
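The split operation — dividing a hidden unit in two with the help of a mutation coefficient — might look like the following sketch. The specific split rule shown (copy the incoming weights and share the outgoing weight between the offspring via a coefficient `beta`, a common cell-division scheme) is an assumption for illustration; the paper's exact rule is not reproduced in this excerpt.

```python
import numpy as np

def split_unit(w_in, w_out, beta=0.4):
    """Split one hidden unit into two offspring. Incoming weights are
    copied; the outgoing weight is shared using a mutation coefficient
    `beta`, so the network's mapping is preserved at the instant of the
    split while the offspring can diverge during subsequent training.
    (Illustrative scheme, not the paper's exact rule.)"""
    child1_out = (1.0 + beta) * w_out
    child2_out = -beta * w_out
    return (w_in.copy(), child1_out), (w_in.copy(), child2_out)

# The offspring's outgoing weights sum to the parent's, so the function
# computed by the network is unchanged immediately after the split.
w_in = np.array([0.5, -1.2])
(c1_in, c1_out), (c2_in, c2_out) = split_unit(w_in, w_out=2.0, beta=0.4)
print(c1_out + c2_out)  # sums (up to floating point) to the parent's 2.0
```

Because the split is function-preserving, training error does not jump when a unit is divided; the nonzero mutation coefficient is what lets the two offspring move apart and explore structures the greedy search would otherwise miss.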