1456 Part G Human-Centered and Life-Like Robotics

bees were trained to enter a tunnel to forage at a sucrose feeder placed at its far end (Fig. 62.1a). The bees used visual cues to maintain their ground speed by adjusting their airspeed to maintain a constant rate of optic flow, even against headwinds which were, at their strongest, 50% of a bee's maximum recorded forward velocity.

Vladusich et al. [62.16] studied the effect of adding goal-defining landmarks. Bees were trained to forage in an optic-flow-rich tunnel with a landmark positioned directly above the feeder. They searched much more accurately when both odometric and landmark cues were available than when only odometry was available. When the two cue sources were set in conflict, by shifting the position of the landmark in the tunnel during tests, bees overwhelmingly used landmark cues rather than odometry. This, together with other such experiments, suggests that bees can make use of odometric and landmark cues in a more flexible and dynamic way than previously envisaged.

In earlier studies of bees flying down a tunnel, Srinivasan and Zhang [62.17] placed different patterns on the left and right walls. They found that bees balance the image velocities in the left and right visual fields. This strategy ensures that bees fly down the middle of the tunnel, without bumping into the side walls, enabling them to negotiate narrow passages or to fly between obstacles.
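The centering-and-speed-regulation strategy just described can be captured in a minimal control sketch. The sensor readings, gains, and sign conventions below are illustrative assumptions, not values taken from the experiments:

```python
def centering_control(flow_left, flow_right, target_flow,
                      k_turn=0.5, k_speed=0.2):
    """Bee-inspired corridor controller (illustrative sketch only).

    flow_left, flow_right: image velocities measured in the left and
    right visual fields; target_flow: desired average optic flow.
    Returns (turn_rate, speed_change) commands; here a positive
    turn_rate means turning toward the right.
    """
    # Higher flow on one side means that wall is nearer: steer away
    # from it, toward the side with the lower image velocity.
    turn_rate = k_turn * (flow_left - flow_right)
    # Hold the average flow constant: slow down when it rises above
    # the target (narrow passage), speed up when it falls below.
    speed_change = k_speed * (target_flow - 0.5 * (flow_left + flow_right))
    return turn_rate, speed_change

# Left wall nearer (higher left flow) and passage narrowing:
# the controller turns right and decelerates.
turn, dv = centering_control(flow_left=2.0, flow_right=1.0, target_flow=1.2)
```

Balancing the two lateral flows keeps the agent near the corridor's midline, while regulating their average reproduces the slowing in narrow passages noted above.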
This strategy has been applied to a corridor-following robot (Fig. 62.1c). By holding constant the average image velocity as seen by the two eyes during flight, the bee avoids potential collisions, slowing down when it flies through a narrow passage.

The movement-sensitive mechanisms underlying these various behaviors differ qualitatively, as well as quantitatively, from those that mediate the optomotor response (e.g., turning to track a pattern of moving stripes) that had been the initial target of investigation of the Reichardt laboratory. The lesson for robot control is that flight appears to be coordinated by a number of visuomotor systems acting in concert, and the same lesson can apply to a whole range of tasks which must convert vision to action. Of course, vision is but one of the sensory systems that play a vital role in insect behavior. Webb [62.18] uses her own work on robot design inspired by the auditory control of behavior in crickets to anchor a far-ranging assessment of the extent to which robotics can offer good models of animal behaviors.

62.2.2 Visually Guided Behavior in Frogs and Robots

Lettvin et al. [62.19] treated the frog's visual system from an ethological perspective, analyzing circuitry in relation to the animal's ecological niche to show that different cells in the retina and the visual midbrain region known as the tectum were specialized for detecting predators and prey. However, in much visually guided behavior, the animal does not respond to a single stimulus, but rather to some property of the overall configuration. We thus turn to the question "what does the frog's eye tell the frog?", stressing the embodied nervous system or, perhaps equivalently, an action-oriented view of perception. Consider, for example, the snapping behavior of frogs confronted with one or more fly-like stimuli.
Ingle [62.20] found that it is only in a restricted region around the head of a frog that the presence of a fly-like stimulus elicits a snap, that is, the frog turns so that its midline is pointed at the stimulus and then lunges forward and captures the prey with its tongue. There is a larger zone in which the frog merely orients towards the target, and beyond that zone the stimulus elicits no response at all. When confronted with two flies within the snapping zone, either of which is vigorous enough that alone it could elicit a snapping response, the frog exhibits one of three reactions: it snaps at one of the flies, it does not snap at all, or it snaps in between at the average fly. Didday [62.21] offered a simple model of this choice behavior which may be considered as the prototype for a winner-take-all (WTA) model which receives a variety of inputs and (under ideal circumstances) suppresses the representation of all but one of them; the one that remains is the winner which will play the decisive role in further processing. This was the beginning of Rana computatrix (see Arbib [62.22, 23] for overviews).

Studies on frog brains and behavior inspired the successful use of potential fields for robot navigation strategies. Data on the strategies used by frogs to capture prey while avoiding static obstacles (Collett [62.24]) grounded the model by Arbib and House [62.25], which linked systems for depth perception to the creation of spatial maps of both prey and barriers. In one version of their model, they represented the map of prey by a potential field with long-range attraction and the map of barriers by a potential field with short-range repulsion, and showed that summation of these fields yielded a field that could guide the frog's detour around the barrier to catch its prey. Corbacho and Arbib [62.26] later explored a possible role for learning in this behavior.
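The WTA computation can be illustrated with a generic mutual-inhibition sketch. The update rule, gains, and iteration count below are illustrative assumptions, not Didday's actual equations:

```python
import numpy as np

def winner_take_all(inputs, inhibition=0.2, steps=50):
    """Generic WTA dynamics (a sketch, not a published model).

    Each unit's activity is repeatedly reduced in proportion to the
    summed activity of all the *other* units; under these dynamics
    only the unit with the strongest input keeps nonzero activity.
    """
    a = np.asarray(inputs, dtype=float).copy()
    for _ in range(steps):
        # Subtract from each unit the pooled inhibition of its rivals,
        # clipping activities at zero.
        a = np.maximum(0.0, a - inhibition * (a.sum() - a))
    return a

# Three 'fly' stimuli of different salience: the weaker two are
# suppressed to zero and only the strongest representation survives.
winners = winner_take_all([1.0, 0.6, 0.3])
```

Incidentally, when two inputs are exactly equal these dynamics suppress both activities toward zero, which loosely echoes the frog's occasional failure to snap at either of two equally attractive flies.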
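A toy version of the summed potential fields can make the detour behavior concrete. The exponential repulsion, the gains, and the fixed-length gradient-following step below are illustrative assumptions, not the equations of the Arbib–House model:

```python
import numpy as np

def net_force(pos, prey, barrier, a_gain=1.0, r_gain=2.0, radius=1.0):
    """Sum of a long-range attractive field (toward the prey) and a
    short-range repulsive field (away from the barrier)."""
    to_prey = prey - pos
    d_prey = np.linalg.norm(to_prey) + 1e-9
    attract = a_gain * to_prey / d_prey          # constant-magnitude pull
    from_barrier = pos - barrier
    d_bar = np.linalg.norm(from_barrier) + 1e-9
    # Repulsion decays exponentially with distance, so it only matters
    # close to the barrier.
    repel = (r_gain / radius) * np.exp(-d_bar / radius) * from_barrier / d_bar
    return attract + repel

# Frog at the origin, prey straight ahead, barrier almost in between.
pos = np.array([0.0, 0.0])
prey = np.array([4.0, 0.0])
barrier = np.array([2.0, 0.1])
path = [pos.copy()]
for _ in range(200):
    f = net_force(pos, prey, barrier)
    pos = pos + 0.05 * f / (np.linalg.norm(f) + 1e-9)  # fixed-length step
    path.append(pos.copy())
# The summed field steers the trajectory around the barrier to the prey
# without any explicit planning stage.
```

Purely local descent on the summed field produces the detour, which is the sense in which such models are reactive rather than cognitive.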
The model of Corbacho and Arbib incorporated learning in the weights between the various potential fields to enable adaptation over trials, as observed in the real animals. The success of the models indicated that frogs use reactive strategies to avoid obstacles while moving to a goal, rather than employing a planning or cognitive system. Other work