Flyatar Description
Flyatar connects humans, computers, and flies, allowing them to interact,
observe, and respond to one another. Humans and computers observe the
flies with a video camera and interact with them by moving a physical
object around them and among them. Flies observe the object and may
respond to it by altering their own motion. Computers use the camera
images to calculate the position and orientation of the flies and the
object as they move around on a surface, knowing the pose of the camera
with respect to that surface and the optical distortion of the camera
lens. Humans decide how the object will move with respect to the flies,
either by directly sending position and velocity commands to the object
actuator controller in real-time with a joystick or another human
interface device, or by writing algorithms to command the motion of the
object automatically over time based on the fly pose feedback from the
camera sensor. Entire experiments can be run automatically by programming
a set of trials to execute one after the other until the flies are
exhausted, changing experimental parameters between trials. Each trial is
triggered to start and stop by a set of fly and magnet pose conditions.
All kinematic data and other experimental state information, along with
the camera images, are passed as network messages between computer
processes controlling and analyzing the experiment in real-time. These
messages allow the experiment to be monitored and adjusted online as it
progresses, either locally or remotely over the network, and they can be
saved and played back later for future analysis.
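In practice these are ROS messages (see below), so standard ROS logging tools can write them to disk during an experiment and read them back afterward. The sketch below, with placeholder bag, topic, and message-type names, shows the kind of offline read-back we mean; it is illustrative rather than the exact tooling used.

    import rosbag

    # Messages are recorded to a bag file while the experiment runs (for example
    # with "rosbag record"); later the file can be replayed or read directly.
    # 'experiment.bag', '/fly/pose', and '/magnet/pose' are placeholder names,
    # and the topics are assumed to carry geometry_msgs/PoseStamped messages.
    with rosbag.Bag('experiment.bag') as bag:
        for topic, msg, t in bag.read_messages(topics=['/fly/pose', '/magnet/pose']):
            print(t.to_sec(), topic, msg.pose.position.x, msg.pose.position.y)
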
For the experiments in this paper, we constrain the flies to a circular
flat plate 230 mm in diameter with a heat barrier around the circumference.
We clip their wings so they will not fly away. An actuation stage beneath
the plate moves a magnet, forcing another magnet on top of the plate to
move with it. The actuation stage consists of two linear actuators to
control the magnet position on the plate and a rotational actuator to
control the magnet orientation. We use only the linear actuators for these
experiments: because the top magnet is cylindrical, its rotational symmetry
makes the orientation angle superfluous. A camera suspended above the plate
films both the flies and the upper magnet, and a computer analyzes the
camera images to identify the flies and the magnet and
calculate their pose. We calibrate the camera with checkerboard patterns
before running any experiments to find the distortion of the camera lens
and the pose of the camera with respect to the plate. Then we calibrate
the actuation stage to find the pose of the stage with respect to the
plate and to the camera, allowing us to map vectors from one coordinate
system to another.
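A rough sketch of this calibration, using OpenCV with illustrative checkerboard parameters and file paths (not the actual Flyatar values), is shown below: the intrinsic matrix and lens distortion come from many checkerboard views, and the camera-to-plate pose from one view of a checkerboard lying flat on the plate.

    import glob
    import cv2
    import numpy as np

    # Assumed 9x6 inner-corner checkerboard with 10 mm squares (illustrative).
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 10.0

    obj_points, img_points = [], []
    for fname in glob.glob('calibration_images/*.png'):   # placeholder path
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Lens distortion and intrinsics estimated from all checkerboard views.
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # Camera pose with respect to the plate, from one view taken with the
    # checkerboard lying flat on the plate surface.
    _, rvec, tvec = cv2.solvePnP(obj_points[0], img_points[0], K, dist)
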
The computer identifies the flies and the magnet by their size and shape
and by the actuator controller's knowledge of the location of the magnet
beneath the plate. The computer maintains the identities over time and
estimates and predicts the pose and velocity of the flies and the magnet
using Kalman filters.
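A constant-velocity Kalman filter over the planar position, sketched below with illustrative noise values and update rate, conveys the idea; the filters used in the system also estimate orientation.

    import numpy as np

    dt = 0.02                       # assumed update period (s)
    # State: [x, y, vx, vy]; constant-velocity motion model.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)
    H = np.array([[1, 0, 0, 0],     # the camera measures position only
                  [0, 1, 0, 0]], float)
    Q = np.eye(4) * 1e-3            # illustrative process noise
    R = np.eye(2) * 1e-2            # illustrative measurement noise

    def kalman_step(x, P, z):
        """Predict the next state, then correct it with a position measurement z."""
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P
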
This pose information, combined with the transforms found in the
calibration, maps vectors in the fly and magnet coordinate systems to any
other. This enables the user to tell the computer to move the magnet one
centimeter directly in front of the fly, for example, and it automatically
calculates where that point is located in the actuation stage coordinate
system and commands the stage to move there.
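As a planar sketch with made-up pose values, that coordinate mapping might look like the following: the fly pose gives a fly-to-plate transform, the stage calibration gives a stage-to-plate transform, and composing them carries the target point from the fly frame into stage coordinates.

    import numpy as np

    def se2(x, y, theta):
        """Homogeneous 2-D transform for a planar pose (x, y, heading theta)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    # Illustrative values: fly pose in plate coordinates from the tracker,
    # and stage pose in plate coordinates from the calibration.
    plate_T_fly = se2(40.0, -15.0, np.deg2rad(120.0))      # mm, rad
    plate_T_stage = se2(0.0, 0.0, np.deg2rad(1.5))

    # Target: 10 mm directly ahead of the fly, expressed in the fly frame.
    target_fly = np.array([10.0, 0.0, 1.0])

    # Map fly frame -> plate frame -> stage frame, then command the stage there.
    target_plate = plate_T_fly @ target_fly
    target_stage = np.linalg.inv(plate_T_stage) @ target_plate
    print(target_stage[:2])   # x, y to send to the actuator controller
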
The camera feedback confirms that the magnet arrives at the commanded
location, or gives an error estimate so the computer can make corrections.
The feedback and commands are updated regularly, so the computer can
command the magnet to stay one centimeter in front of the fly even if the
fly is moving. The computer runs ROS, the Willow Garage robot
meta-operating system, which divides the experiment computing tasks into
loosely coupled processes sending messages to each other over the network.
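One such process might look roughly like the rospy node below, which subscribes to the tracked fly pose and publishes a stage position command; the topic names, message types, and the trivial control law are placeholders, not the actual Flyatar interfaces.

    import math
    import rospy
    from geometry_msgs.msg import Pose2D

    def on_fly_pose(fly):
        # Placeholder behavior: command the stage toward a point 10 mm
        # ahead of the fly along its heading, in plate coordinates.
        cmd = Pose2D()
        cmd.x = fly.x + 10.0 * math.cos(fly.theta)
        cmd.y = fly.y + 10.0 * math.sin(fly.theta)
        cmd_pub.publish(cmd)

    rospy.init_node('magnet_controller')
    cmd_pub = rospy.Publisher('/stage/position_command', Pose2D, queue_size=1)
    rospy.Subscriber('/fly/pose', Pose2D, on_fly_pose)
    rospy.spin()   # hand control to rospy; the callback fires on each new pose
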
There are many small processes running at the same time, each with a
specific task. For example, one process acquires images from the camera
and undistorts them, another keeps track of all the coordinate system
transforms as they change over time and makes them available to the rest
of the system, and another communicates with the actuator controller,
sending position and velocity commands and receiving status updates. We
use a state machine to control the experiment at the highest level. Each
state in the state machine describes a task or an action or can be a state
machine itself. Nested state machines enable complex robot behavior and
yet remain very reliable and easy to program and visualize. The first
state in the state machine for these experiments commands the stage
actuator to move the magnet to the center of the plate. The next state
commands the magnet to wait in that location until the fly walks from
outside to inside of a circular in-bounds area surrounding the center of
the plate at some specified radius. When that happens, the trial state
machine executes. The first state in the trial state machine begins saving
all the experimental data to the hard disk and then waits until the magnet
is in the trigger location in the fly's coordinate system. If and when
that happens, the next state randomly chooses an angular velocity value
from a set of possible values, finds the fly's velocity direction and the
distance between the magnet and the fly, and then uses that information to
calculate the magnet velocity magnitude and direction needed to move with
that angular velocity with respect to the fly.
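The core of that calculation is v = omega x r: the commanded speed is the chosen angular velocity times the fly-to-magnet distance, directed perpendicular to the fly-to-magnet line, with the sign of the angular velocity (taken relative to the fly's direction of motion) setting which way the magnet revolves. A small sketch with illustrative positions follows.

    import numpy as np

    def magnet_velocity(fly_xy, magnet_xy, omega):
        """Linear velocity that revolves the magnet around the fly with
        angular velocity omega (rad/s; the sign sets the direction)."""
        r = np.asarray(magnet_xy, float) - np.asarray(fly_xy, float)
        # Speed = omega * distance; direction is perpendicular to the
        # fly-to-magnet vector (rotate r by +90 degrees and scale).
        return omega * np.array([-r[1], r[0]])

    # Illustrative values: fly at the origin, magnet 20 mm away, 1 rad/s.
    v = magnet_velocity((0.0, 0.0), (20.0, 0.0), 1.0)
    print(v, np.linalg.norm(v))   # [0, 20] mm/s, speed 20 mm/s
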
It sends those commands to the actuator controller, and the magnet moves
with constant velocity until it exits the in-bounds area. The trial ends
when either the
fly or the magnet exits the in-bounds area. The trial state machine
completes and the higher level state machine starts back at the beginning.
This cycle continues until the fly is exhausted or until enough data has
been collected.
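As a final illustration, the experiment flow just described maps naturally onto nested state machines; the compressed sketch below uses the ROS smach package with illustrative state names and stubbed-out actions, and is not the actual Flyatar code.

    import smach

    class MoveToCenter(smach.State):
        def __init__(self):
            smach.State.__init__(self, outcomes=['at_center'])
        def execute(self, userdata):
            # command the stage to the plate center and wait for arrival
            return 'at_center'

    class WaitForFly(smach.State):
        def __init__(self):
            smach.State.__init__(self, outcomes=['fly_in_bounds'])
        def execute(self, userdata):
            # block until the fly crosses into the in-bounds circle
            return 'fly_in_bounds'

    class RunTrial(smach.State):
        def __init__(self):
            smach.State.__init__(self, outcomes=['out_of_bounds'])
        def execute(self, userdata):
            # start saving data, wait for the trigger pose, command the
            # constant-velocity move, return when fly or magnet leaves bounds
            return 'out_of_bounds'

    # The trial is itself a state machine nested inside the experiment.
    trial = smach.StateMachine(outcomes=['trial_done'])
    with trial:
        smach.StateMachine.add('RUN_TRIAL', RunTrial(),
                               transitions={'out_of_bounds': 'trial_done'})

    experiment = smach.StateMachine(outcomes=['finished'])
    with experiment:
        smach.StateMachine.add('MOVE_TO_CENTER', MoveToCenter(),
                               transitions={'at_center': 'WAIT_FOR_FLY'})
        smach.StateMachine.add('WAIT_FOR_FLY', WaitForFly(),
                               transitions={'fly_in_bounds': 'TRIAL'})
        smach.StateMachine.add('TRIAL', trial,
                               transitions={'trial_done': 'MOVE_TO_CENTER'})
    # experiment.execute() would then cycle center -> wait -> trial until stopped.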