Projects in VR
Editors: Lawrence Rosenblum and Michael Macedonia

Public Speaking in Virtual Reality: Facing an Audience of Avatars

Mel Slater, David-Paul Pertaub, and Anthony Steed
University College London

March/April 1999, 0272-1716/99/$10.00 © 1999 IEEE

What happens when someone talks in public to an audience they know to be entirely computer generated, an audience of avatars? If the virtual audience seems attentive, well-behaved, and interested, if they show positive facial expressions with complimentary actions such as clapping and nodding, does the speaker infer correspondingly positive evaluations of performance and show fewer signs of anxiety?
On the other hand, if the audience seems hostile, uninterested, and visibly bored, if they have negative facial expressions and exhibit reactions such as head-shaking, loud yawning, turning away, falling asleep, and walking out, does the speaker infer correspondingly negative evaluations of performance and show more signs of anxiety? We set out to study this question during the summer of 1998, designing a virtual public speaking scenario followed by an experimental study.

In this work we wanted mainly to explore the effectiveness of virtual environments (VEs) in psychotherapy for social phobias. Rather than plunge straight in and design a virtual reality therapy tool, we first tackled the question of whether real people's emotional responses are appropriate to the behavior of the virtual people with whom they may interact. We concentrated on public speaking anxiety as an ideal arena for two reasons. First, it's relatively simple technically compared to more general social interactions: a public speaking scenario involves specific stylized behaviors on the part of the avatars, making it relatively straightforward to implement. Second, the application has high usefulness within the broader context of social phobias, public speaking being a prevalent cause of anxiety among the general population.

Previous research into using VEs in a mental health setting has concentrated on specific phobias such as fear of heights, flying, spiders, and open spaces [1]. A recent study suggested that VR might prove useful in treating public speaking anxiety [2]; it provided evidence that virtual therapy effectively reduced self-reported levels of anxiety. Our research addresses the prior step: before developing an effective tool, we should elicit the factors in a VE that will provoke the desired response in clients.
If people do not report and exhibit signs and symptoms similar to those generated during a real public talk, then VE-based therapy cannot succeed. Moreover, the answer to this more fundamental question has applications in a wider context than therapy: for example, in our starting point for this research, which took place in the context of collaboration in shared VEs [3].

Designing the experiment

The project used DIVE (Distributed Interactive Virtual Environment) as the basis for constructing a working prototype of a virtual public speaking simulation. Developed by the Swedish Institute of Computer Science, DIVE has been used extensively in national and international research projects investigating the possibilities of VEs. In this multiuser VR system, several networked participants can move about in a shared artificial 3D world and see and interact with the objects, processes, and other users present in it. TCL (Tool Command Language) interpreters attached to objects supply dynamic and interactive behaviors to things in the VE.

We constructed, as a Virtual Reality Modeling Language (VRML) model, a virtual seminar room that matched the actual seminar room in which subjects completed their various questionnaires and met with the experimenters. (See the sidebar "Technical Details" on page 9 for a synopsis of the equipment we used in our experiment.) The seminar room was populated with an audience of eight avatars seated in a semicircle facing the speaker, as if for a talk held in the real seminar room. These avatars continuously displayed random autonomous behaviors such as twitches, blinks, and nods, designed to foster the illusion of "life." We programmed the avatars' dynamic behavior with TCL scripts attached to the appropriate body parts in the DIVE database. This let us avoid the situation where only the avatars under the operator's or experimenter's direct control are active while the others stay "frozen" in inanimate poses.
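The scripted idle behaviors can be pictured as a per-avatar scheduler: each avatar keeps its own timer and, when the timer expires, fires a random small action and rearms itself. The following Python sketch illustrates that scheduling idea only; it is not the DIVE/TCL implementation, and all names (`Avatar`, `IDLE_BEHAVIORS`, the 0.5 to 4.0 second interval) are illustrative assumptions.

```python
import random

# Illustrative sketch, not DIVE/TCL: every avatar runs its own idle-behavior
# timer, so no avatar freezes while an operator drives another one.
IDLE_BEHAVIORS = ["blink", "twitch", "nod"]  # hypothetical action set

class Avatar:
    def __init__(self, name, rng):
        self.name = name
        self.rng = rng
        self.next_fire = rng.uniform(0.5, 4.0)  # seconds until next idle action
        self.log = []

    def update(self, dt):
        """Advance the avatar's clock; fire a random idle behavior when due."""
        self.next_fire -= dt
        if self.next_fire <= 0.0:
            self.log.append(self.rng.choice(IDLE_BEHAVIORS))
            self.next_fire = self.rng.uniform(0.5, 4.0)  # rearm the timer

def simulate(avatars, seconds, dt=0.1):
    """Step all avatars forward in fixed time slices."""
    for _ in range(round(seconds / dt)):
        for a in avatars:
            a.update(dt)

rng = random.Random(42)
audience = [Avatar(f"avatar{i}", rng) for i in range(8)]
simulate(audience, seconds=30.0)
```

Because every avatar rearms independently, the audience as a whole never synchronizes, which is what keeps the scene looking alive.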
We simulated eye contact by enabling the avatars to look at the speaker; they could also move their heads to follow the speaker around the room. Facial animation, based on a linear muscle model developed by Parke and Waters [4], allowed the avatars to display six primary facial expressions together with yawns and sleeping faces. Avatars could also stand up, clap, and walk out of the seminar room, cutting across the speaker's line of sight. The avatars could make yawning and clapping
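Having the avatars' heads track the speaker reduces to a standard look-at computation: given the head position and the speaker's position, derive the yaw and pitch that point the head at the speaker. A minimal sketch, under an assumed right-handed convention (x right, y up, z forward), not the actual DIVE code:

```python
import math

def look_at_angles(head_pos, target_pos):
    """Yaw and pitch (radians) orienting a head at head_pos toward target_pos.
    Assumed convention: x right, y up, z forward; yaw about y, pitch about x."""
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    yaw = math.atan2(dx, dz)                    # left/right head turn
    pitch = math.atan2(dy, math.hypot(dx, dz))  # up/down head tilt
    return yaw, pitch

# Speaker directly ahead at the same height: no rotation needed.
yaw, pitch = look_at_angles((0.0, 1.2, 0.0), (0.0, 1.2, 3.0))
```

Re-evaluating these angles each frame as the speaker walks around the room yields the head-following behavior described above.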
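The linear-muscle approach to facial expression can be summarized as a weighted blend: each "muscle" contributes a displacement to the face mesh vertices, and an expression is a set of muscle weights. The sketch below shows only this linear-blend idea with made-up data; it is not Parke and Waters' model, and the `brow_raise` and `smile` displacement fields are hypothetical.

```python
# Illustrative only: a face as a vertex array deformed by linear "muscles".
# Each muscle is a per-vertex displacement field; an expression assigns one
# weight per muscle, and displacements blend linearly.

def apply_expression(rest_vertices, muscles, weights):
    """rest_vertices: list of (x, y, z) tuples; muscles: list of displacement
    lists (each the same length as rest_vertices); weights: one float per muscle."""
    out = []
    for i, (x, y, z) in enumerate(rest_vertices):
        dx = dy = dz = 0.0
        for muscle, w in zip(muscles, weights):
            mx, my, mz = muscle[i]
            dx += w * mx
            dy += w * my
            dz += w * mz
        out.append((x + dx, y + dy, z + dz))
    return out

# Two-vertex toy face with two hypothetical muscles.
rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
brow_raise = [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)]
smile = [(0.0, 0.0, 0.0), (0.1, 0.05, 0.0)]
face = apply_expression(rest, [brow_raise, smile], [1.0, 0.5])
```

Because the model is linear, a library of six primary expressions plus yawns and sleep poses is simply six-plus stored weight vectors applied to one shared mesh.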