Visual Show Automation, or VSA for short, is a program for creating, editing and running animatronic shows. It is a product of Brookshire Software, and is modestly priced (but not free). You can find information about it at:


To create a show using VSA, follow these steps:

1. Using an audio editor, create sound files for each character’s dialog, and a separate mix file which contains all of the characters, effects, and background music.

2. Start VSA, and fill in the appropriate settings (under Tools -> Settings…) for your hardware. This will include:


- The name of each actuator (e.g. Cat Mouth)
- The type of controller board (e.g. Pololu Servo)
- The port where the controller is connected (e.g. COM5 – if using USB, you may have to hunt a bit)
- The channel (address) on the controller board where each device is connected
- The +Value, -Value and Default settings for each servo

These are the maximum, minimum and starting values for each servo. Setting them is a little tricky: you need to choose the max and min so that the servos never try to drive past their physical limits (either because the servo has reached the end of its possible rotation, or because the mechanical linkage has gone as far as it can go). Most servos can be commanded past their mechanical limits. When this happens, the electronics in the servo just keep applying maximum power to try to get there. Not only can this strip gears; the servo may also get hot and could burn out. Once you set the +Value and -Value limits for a device, the software will not let you command it beyond them.

The best way to find these limits is to move slowly out from a safe position until the servo stops moving, then come back a little bit until the servo moves just a little. That is your end point.

The default value is simply the position your servo will be set to at the beginning of your show. For example, for a head turn, you might start in the middle of the range, but for a mouth, you might start near an end with the mouth closed.
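The role the +Value/-Value limits play can be sketched in a few lines of code: before any position reaches the controller, it is clamped to the safe range found by the end-point search described above. The channel numbers, pulse-width limits, and defaults below are made-up illustrations, not values from VSA or any real servo.

```python
def clamp(value, low, high):
    """Keep a servo target inside its calibrated safe range."""
    return max(low, min(high, value))

# Hypothetical calibration table: channel -> (-Value, +Value, Default),
# here expressed as pulse widths in microseconds.
LIMITS = {
    0: (700, 2300, 1500),   # head turn: default is centered
    1: (900, 1400, 900),    # mouth: default is closed, at the low end
}

def safe_target(channel, value):
    """Return the commanded position, clamped to the channel's limits."""
    low, high, _default = LIMITS[channel]
    return clamp(value, low, high)

print(safe_target(1, 2500))  # asks for 2500 µs but prints 1400, the +Value limit
```

This is the same guarantee VSA gives you once the limits are entered: no matter what the timeline asks for, the servo is never driven past its calibrated end points.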

3. Generate the mouth commands. Load the first character’s audio file and run WaveMotion Analysis on the appropriate mouth track. This will generate the servo commands for that mouth. Test the result. If it doesn’t look good, tweak the parameters and run it again. Once you are satisfied with the result, repeat this step for each of the remaining characters. Remember, you can hand edit the results to fix up problem spots, so it doesn’t have to be perfect everywhere.
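At its heart, this kind of audio-to-mouth analysis maps the loudness of the dialog to mouth position: louder audio, wider mouth. The sketch below shows the idea using only the Python standard library; the frame size, RMS scaling factor, and 0–127 output range are assumptions for illustration, not VSA's actual algorithm or parameters.

```python
import math
import struct
import wave

def mouth_track(path, frame_ms=33, closed=0, open_=127):
    """Map audio loudness to one mouth position per frame.

    A crude stand-in for wave-to-motion analysis: computes the RMS
    loudness of each ~33 ms slice of a 16-bit WAV file and scales it
    into a servo position between `closed` and `open_`.
    """
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expects 16-bit audio"
        samples_per_frame = max(1, w.getframerate() * frame_ms // 1000)
        positions = []
        while True:
            raw = w.readframes(samples_per_frame)
            if not raw:
                break
            vals = struct.unpack("<%dh" % (len(raw) // 2), raw)
            rms = math.sqrt(sum(v * v for v in vals) / len(vals))
            # Scale RMS loudness into the mouth's travel; 8000 is an
            # assumed "full loudness" level you would tune by eye.
            positions.append(closed + int((open_ - closed) * min(rms / 8000.0, 1.0)))
    return positions
```

Calling `mouth_track("cat_dialog.wav")` (a hypothetical file) would return one position per frame, ready to be placed on the mouth's timeline. The "tweak the parameters and run it again" step corresponds to adjusting things like the loudness scaling here.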

4. Load the full mix audio file. You can now check to see that all the mouth movements look right together.

5. Program the remaining servo tracks. You do this one track at a time, generally by using a joystick with the Execution -> Capture Events command.
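Conceptually, capturing events means sampling the joystick at a fixed rate while the show audio plays, recording one timestamped value per sample. A minimal sketch of that loop, with a callable standing in for real joystick input (the sample rate and the `read_axis` function are assumptions):

```python
import time

def capture_events(read_axis, duration_s, rate_hz=30):
    """Record (seconds, value) pairs from a control at a fixed rate.

    `read_axis` is a stand-in for polling a real joystick axis; in
    practice you would poll the hardware while the audio plays.
    """
    events = []
    interval = 1.0 / rate_hz
    start = time.monotonic()
    while (now := time.monotonic() - start) < duration_s:
        events.append((round(now, 3), read_axis()))
        time.sleep(interval)
    return events
```

Each captured pair becomes one event on the track being programmed, which is why it is natural to record one track per pass.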

6. Program the lighting tracks. This can also be done with a joystick, but it’s generally quicker to place the events manually.
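Lighting events are easy to place by hand because each one is just a time, a channel, and a level. The sketch below shows such a hand-written event list and a helper that finds everything due by a given moment of the show; the event times, channel names, and levels are invented for illustration.

```python
import bisect

# Hypothetical hand-placed lighting cues: (seconds, channel, level 0-255).
LIGHT_EVENTS = [
    (0.0, "spot_1", 255),    # curtain up: spotlight full on
    (4.5, "spot_1", 0),      # blackout before the song
    (5.0, "wash_red", 200),  # red wash for the opening number
]

def events_up_to(events, t):
    """Return all events scheduled at or before time t (seconds).

    Assumes `events` is sorted by time, as a timeline naturally is.
    """
    times = [e[0] for e in events]
    return events[:bisect.bisect_right(times, t)]

print(events_up_to(LIGHT_EVENTS, 4.5))  # the first two cues
```

Because cues like these sit at a handful of exact moments, typing them in directly is usually faster than riding a joystick through the whole show.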