[10] …tion in decision support[J]. Decision support systems, 2018, 115: 24–35.
[11] ORTONY A, TURNER T J. What's basic about basic emotions?[J]. Psychological review, 1990, 97(3): 315–331.
[12] EKMAN P, FRIESEN W V, O'SULLIVAN M, et al. Universals and cultural differences in the judgments of facial expressions of emotion[J]. Journal of personality and social psychology, 1987, 53(4): 712–717.
[13] SCHULLER B W. Speech emotion recognition: two decades in a nutshell, benchmarks, and ongoing trends[J]. Communications of the ACM, 2018, 61(5): 90–99.
[14] 乐国安, 董颖红. 情绪的基本结构: 争论、应用及其前瞻[J]. 南开学报(哲学社会科学版), 2013(1): 140–150.
YUE Guoan, DONG Yinghong. On the categorical and dimensional approaches of the theories of the basic structure of emotions[J]. Nankai journal (philosophy, literature and social science edition), 2013(1): 140–150.
[15] 李霞, 卢官明, 闫静杰, 等. 多模态维度情感预测综述[J]. 自动化学报, 2018, 44(12): 2142–2159.
LI Xia, LU Guanming, YAN Jingjie, et al. A survey of dimensional emotion prediction by multimodal cues[J]. Acta automatica sinica, 2018, 44(12): 2142–2159.
[16] FONTAINE J R J, SCHERER K R, ROESCH E B, et al. The world of emotions is not two-dimensional[J]. Psychological science, 2007, 18(12): 1050–1057.
[17] RUSSELL J A. A circumplex model of affect[J]. Journal of personality and social psychology, 1980, 39(6): 1161–1178.
[18] YIK M S M, RUSSELL J A, BARRETT L F. Structure of self-reported current affect: integration and beyond[J]. Journal of personality and social psychology, 1999, 77(3): 600–619.
[19] PLUTCHIK R. The nature of emotions: human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice[J]. American scientist, 2001, 89(4): 344–350.
[20] ZHALEHPOUR S, ONDER O, AKHTAR Z, et al. BAUM-1: a spontaneous audio-visual face database of affective and mental states[J]. IEEE transactions on affective computing, 2017, 8(3): 300–313.
[21] WANG Wenwu. Machine audition: principles, algorithms and systems[M]. New York: Information Science Reference, 2010: 398–423.
[22] WANG Yongjin, GUAN Ling. Recognizing human emotional state from audiovisual signals[J]. IEEE transactions on multimedia, 2008, 10(4): 659–668.
[23] BURKHARDT F, PAESCHKE A, ROLFES M, et al. A database of German emotional speech[C]//INTERSPEECH 2005. Lisbon, Portugal, 2005: 1517–1520.
[24] MARTIN O, KOTSIA I, MACQ B, et al. The eNTERFACE'05 audio-visual emotion database[C]//Proceedings of the 22nd International Conference on Data Engineering Workshops. Atlanta, USA, 2006: 1–8.
[25] LIVINGSTONE S R, RUSSO F A. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): a dynamic, multimodal set of facial and vocal expressions in North American English[J]. PLoS one, 2018, 13(5): e0196391.
[26] STEIDL S. Automatic classification of emotion-related user states in spontaneous children's speech[M]. Erlangen, Germany: University of Erlangen-Nuremberg, 2009: 1–250.
[27] GRIMM M, KROSCHEL K, NARAYANAN S. The Vera am Mittag German audio-visual emotional speech database[C]//Proceedings of 2008 IEEE International Conference on Multimedia and Expo. Hannover, Germany, 2008: 865–868.
[28] BUSSO C, BULUT M, LEE C C, et al. IEMOCAP: interactive emotional dyadic motion capture database[J]. Language resources and evaluation, 2008, 42(4): 335–359.
[29] RINGEVAL F, SONDEREGGER A, SAUER J, et al. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions[C]//Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, China, 2013: 1–8.
[30] METALLINOU A, YANG Zhaojun, LEE C, et al. The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations[J]. Language resources and evaluation, 2016, 50(3): 497–521.
[31] MCKEOWN G, VALSTAR M, COWIE R, et al. The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent[J]. IEEE transactions on affective computing, 2012, 3(1): 5–17.
[32] 饶元, 吴连伟, 王一鸣, 等. 基于语义分析的情感计算技术研究进展[J]. 软件学报, 2018, 29(8): 2397–2426.
RAO Yuan, WU Lianwei, WANG Yiming, et al. Research progress on emotional computation technology based on semantic analysis[J]. Journal of software, 2018, 29(8): 2397–2426.
[33] WANG Yiming, RAO Yuan, WU Lianwei. A review of sentiment semantic analysis technology and progress[C]//Proceedings of 2017 13th International Conference on Computational Intelligence and Security. Hong Kong, China, 2017: 452–455.
[34] MORRIS J D. Observations: SAM: the self-assessment manikin; an efficient cross-cultural measurement of emotional response[J]. Journal of advertising research, 1995, 35(6): 63–68.