The Interactive Larynx:

The giant larynx model is the main landmark of the NMU Virtual Clinic. Standing over 70 meters tall, the model is so large that, were it a real larynx, an average avatar beside it would be roughly the size of a ladybug.

Below the larynx, a circular room known as the Voice Education Center raises public awareness about various voice disorders and their treatment options. Professional vocalists can also use the center to learn proper vocal techniques and avoid damaging their voices on the job. These presentations were created by students in the SL220 Speech and Voice Science class.

Turning our attention back to the larynx, there are currently two models. The ground-level model, which we will address first, is referred to as Larynx A, while the model at sky level is referred to as Larynx B.

Larynx A currently offers three types of tests that avatars can take. The various terminals are color-coded and divided into five categories. With the exception of the blue terminal, which is for public use, these terminals can only be used by approved avatars.

I'm going to start the blue terminal by touching its green button. It will then turn red and display the name of the test taker, visually informing others that they cannot use the terminal until that person is finished. If an avatar fails to communicate with the testing script for five minutes, the test is canceled and the terminal resets.
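The locking-and-timeout behavior just described can be sketched in pseudocode. The actual in-world scripts are written in LSL by the project's scripter; the Python below is a hypothetical illustration, and all names (`Terminal`, `touch`, `heartbeat`) are invented.

```python
import time

IDLE_TIMEOUT = 5 * 60  # seconds of silence before the test is canceled

class Terminal:
    """Hypothetical model of one color-coded testing terminal."""

    def __init__(self, color):
        self.color = color
        self.taker = None          # name of the current test taker, if any
        self.last_activity = None

    def touch(self, avatar):
        """Green button touched: claim the terminal if it is free."""
        if self.taker is not None:
            return f"In use by {self.taker}"
        self.taker = avatar
        self.last_activity = time.time()
        return f"Terminal is now red and shows: {avatar}"

    def heartbeat(self):
        """Called periodically; resets the terminal after five idle minutes."""
        if self.taker and time.time() - self.last_activity > IDLE_TIMEOUT:
            self.taker = None  # test canceled, button turns green again

term = Terminal("blue")
print(term.touch("Visitor"))   # claims the terminal
print(term.touch("Someone"))   # rejected until the first test finishes
```

The displayed name doubles as the lock: other avatars see who holds the terminal, and the timeout guarantees an abandoned test never blocks it forever.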

After saying hello and giving me directions, the test script now asks whether I want to take the test for studying or grading. I'm going to choose study. It then asks how many randomized questions I'd like to answer. I'll say five.
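The "how many randomized questions" step suggests the terminal draws from a larger question pool. As a hedged sketch, assuming sampling without replacement (the pool contents and function names here are invented):

```python
import random

# Hypothetical question pool; the real terminal's questions are not listed here.
question_pool = [f"question {n}" for n in range(1, 21)]

def pick_questions(count):
    """Draw `count` distinct questions at random, as the test script might."""
    return random.sample(question_pool, k=count)

quiz = pick_questions(5)
print(len(quiz))  # 5 distinct questions, in random order
```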

Because this test is directed toward the public, who may or may not be familiar with laryngeal anatomy, touching this blue button will give an avatar a study notecard.

Non-students may refer to this notecard as they take the test to help them answer each question. These answers only work with the blue terminal. As each question is asked, a large arrow the same color as our terminal, in this case blue, appears in world and points to a specific area. I have several options for locating this arrow. I can either fly around the model like so, zooming in or out as necessary, or I can swivel around the model with my camera. A line of particles is also beamed between my avatar and the arrow, which is a third method of locating the object.

After finding the arrow, I can type my answer into open chat and the test will let me know if I am right or wrong.

The green text that you have been seeing is information that the testing script is relaying to my avatar through private chat. Nobody else can view my questions or answers. This test not only allows multiple answers per question; unique arrow shapes can also be assigned on a per-question basis if necessary. Each terminal keeps track of the number of right and wrong answers per completed test and stores the results for later retrieval.
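The multiple-answers-per-question grading described above can be sketched as follows. This is an illustrative Python stand-in for the LSL test script, and the example questions and accepted answers are invented:

```python
# Each question pairs a prompt with a *set* of accepted answers,
# so "thyroid" and "thyroid cartilage" can both count as correct.
questions = [
    ("What does the blue arrow point to?", {"epiglottis"}),
    ("Name this cartilage.", {"thyroid", "thyroid cartilage"}),
]

def grade(answers):
    """Tally right and wrong answers the way the terminal tracks them."""
    right = wrong = 0
    for (prompt, accepted), given in zip(questions, answers):
        if given.strip().lower() in accepted:
            right += 1
        else:
            wrong += 1
    return right, wrong

print(grade(["Epiglottis", "cricoid"]))  # → (1, 1)
```

Storing a set per question is what makes several spellings or synonyms acceptable without any change to the grading loop.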

Now I'm going to use a new test that is part of an ongoing research study. Because participants have never used Second Life, the presentation of this test differs from that of the blue terminal I just used. The purpose of the experiment is to discover any differences in speed or ease when participants learn and are tested on a new subject in an interactive 3D environment rather than in traditional 2D formats. Unlike the other terminals, this test is controlled by the avatars of the students conducting the experiment. Research subjects use those avatars to move around and take the test, and, to comply with laws requiring subject anonymity, there is no way to identify a participant from the results. Instead, each participant selects a self-generated user-ID number, and the test attaches the final score to that number. The students make sure that no participant takes the test twice, but the self-created user-ID prevents them from knowing which participant took which test or earned which score.
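The anonymity scheme amounts to keying results by the self-generated ID and storing nothing else about the person. A minimal sketch, assuming the script also rejects a reused ID (the duplicate check is my assumption; the transcript says the students enforce single attempts):

```python
# Scores keyed by self-generated ID only; no avatar or participant
# names are ever stored, so a score cannot be traced to a person.
results = {}

def record_score(user_id, score):
    """Attach a final score to a participant's self-chosen ID."""
    if user_id in results:
        # Assumed safeguard: a reused ID is rejected rather than overwritten.
        raise ValueError("This ID has already taken the test")
    results[user_id] = score

record_score("4921", 8)
print(results)  # {'4921': 8} — nothing here identifies the participant
```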

This is Bilo, and he is the person who has scripted everything in this video. Although I wrote the design specification for the original testing system, which Jim later modified to fit with this research study, it is only because of Bilo's programming talent that all of these intricacies come to life and actually work. Right now, Bilo is demonstrating another unique aspect of this research study test: during the study period, all possible questions are displayed at once via brightly colored arrows. As Bilo flies around and touches the arrows, they tell him what anatomical reference they are pointing to. They also turn black to help him remember which arrows have been touched, though he is more than welcome to touch a black arrow as many times as he wants.

When the test starts, a third difference is revealed: instead of making the avatar accurately type the anatomical region an arrow points to, the script outputs four multiple-choice options. Again, this is because the study sample will have little, if any, anatomy experience, and this test is intended to measure short-term learning rather than long-term memorization.

Once the test is complete, the answers are forwarded to the department secretary in an email such as this. We see: a random, user-generated ID number; the date and time the test was taken; the terminal name, region name, and location of the terminal within the region; the total number of questions answered and the overall score; the duration of the testing period; a detailed list of the questions being asked; the multiple choices for each question; the right answer for each question; and, finally, the avatar's answer.
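The fields listed above can be assembled into a plain-text report. The function below is a hypothetical reconstruction of that e-mail layout, not the actual script; every field name and sample value is invented for illustration:

```python
from datetime import datetime

def format_report(user_id, terminal, region, pos, qa, duration_s):
    """Render the result e-mail described in the transcript."""
    lines = [
        f"User ID: {user_id}",
        f"Taken: {datetime.now():%Y-%m-%d %H:%M}",
        f"Terminal: {terminal} | Region: {region} | Location: {pos}",
        f"Questions answered: {len(qa)}",
        f"Score: {sum(1 for q in qa if q['answer'] == q['correct'])}/{len(qa)}",
        f"Duration: {duration_s // 60}m {duration_s % 60}s",
    ]
    # One entry per question: prompt, the four choices, right answer, given answer.
    for i, q in enumerate(qa, 1):
        lines.append(f"Q{i}: {q['question']}")
        lines.append(f"  Choices: {', '.join(q['choices'])}")
        lines.append(f"  Correct: {q['correct']}  Given: {q['answer']}")
    return "\n".join(lines)
```

Keeping the score derivable from the per-question records means the summary line and the detailed list can never disagree.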

The last thing I'll discuss regarding Larynx A is the animated vocal folds. The motion of these membranous flaps is responsible for vocalization; despite the popular terms, there is no literal "cord" in a vocal cord and no "box" in a voice box. This particular animation represents a person sustaining a loud tone. The black arrows demonstrate the Bernoulli effect taking place: a change in air pressure that helps open and close the vocal folds.

Now we can discuss Larynx B. Larynx B contains the same testing systems as Larynx A. However, unlike Larynx A, which doesn't move, Larynx B is fully animated. When a test question refers to an animation, the arrow automatically plays the cycle and the anatomical region turns bright blue.

These animations can also be triggered via this sign. The purpose of Larynx B is to demonstrate how the intrinsic muscles of the larynx pull on the cartilages in a variety of scenarios.

The Breathing Cycle shows how the posterior cricoarytenoid muscle, shown in blue, contracts to pull on the tips of the arytenoid cartilages. This draws the vocal ligaments apart, abducting the folds and allowing airflow.

The Epiglottic Inversion Cycle shows how the intrinsic muscles, as well as several extrinsic muscles that have not been built, lift the larynx up and forward toward the base of the tongue. This movement, along with the force of the water or food being swallowed, causes the epiglottis, the bright white cartilage, to fold down and over the laryngeal opening. This prevents choking.

The Pitch Change Cycle shows the two cricothyroid muscles responsible for high pitch. First, the pars obliqua tilts the cricoid cartilage upward. The pars recta, shown in blue, then rocks the thyroid cartilage downward. These movements lengthen the vocal ligaments, and the added tension raises the pitch. Currently, there is no animation for lowering pitch; that would require the extrinsic muscles of the larynx to pull the cartilages downward instead of upward.

Larynx B and its three animation cycles took nearly one hundred hours to create.