
Johns Hopkins University Uses AI to Study Behavioral Neuroscience Reflected in How Spiders Build Webs

The Appen Data Annotation Platform enabled Johns Hopkins University researchers to precisely and quickly label data to train their models, track spider movements and understand underlying behavioral motivations
December 17, 2021

“What would have taken one person 1,500+ hours over a year or more was done in a few weeks by contributors through the Appen platform (then Figure Eight).”

-Andrew Gordus, Assistant Professor of Biology, Johns Hopkins University


The Project

Behavioral neuroscience researchers have been turning to spiders and their web building to quantify spider behavior and to understand how that behavior predicts different stages of web construction. One focal area has been understanding how spiders organize their behaviors over a long time scale to build an ordered structure with high fidelity.

The classic challenge with studying animal behavior is the inability to ask the animal what its underlying motivations are. With animal architecture, however, the animal leaves behind a record of its behavioral intent. Orb-weaving spiders in particular build a structure that is easy to quantify, which gave Andrew Gordus and his team of behavioral neuroscientists at Johns Hopkins University the opportunity to associate behavior with this record and to establish a baseline of normal spider behavior.

Along with a team of researchers, Gordus set up night-vision cameras paired with artificial intelligence to track and record every movement of all eight legs as the spiders built their webs. This tracking gave the researchers an understanding of the patterns spiders use to build their elegant and complex webs.




The Challenge

To map the spiders' movements during web building, the researchers faced a number of challenges. The first hurdle was recording the spiders in the dark; to overcome this, they used infrared night-vision cameras.

Another challenge was tracking each movement the spiders made accurately and efficiently. While it would have been possible to go frame by frame and record the movement of each leg by hand, this would have taken the researchers a very long time.

The third challenge was training the machine vision models themselves and adapting them to the intended use case. The team discovered that they needed to annotate a significant number of frames - tens of thousands - and annotate them accurately, for the model to perform well on the spider videos.

"Even if you video record it, that's a lot of legs to track, over a long time, across many individuals. It's just too much to go through every frame and annotate the leg points by hand so we trained machine vision software to detect the posture of the spider, frame by frame, so we could document every leg movement during web-building."

-Abel Corver, lead author and a graduate student studying web-making and neurophysiology in the Solomon H. Snyder Department of Neuroscience at Johns Hopkins University

The Solution

Instead of tracking each leg movement individually and by hand, the researchers decided to use machine learning. After being introduced to a few companies that quickly analyze large data sets for training data, the team found Appen's (CrowdFlower at the time) website the most appealing and its examples of past work the most closely aligned with their goals. They trained machine vision software to detect spider posture and document each limb movement during the web-building process.

For this study, the researchers observed six hackled orb weaver spiders over multiple nights, and over the course of the study they had to record millions of leg movements.

To analyze the spiders' movements, they tracked 26 points on the body. The team randomly sampled 100,000 frames from one recording and used the Appen Data Annotation Platform (then Figure Eight) to annotate the dataset, then selected 10,000 high-quality annotations for model training. The annotations were of such high quality that these 10,000 images, only a fraction of the total collected, were more than sufficient to train their model.
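
As a rough illustration of that sampling step, the sketch below pulls a random subset of frames from a recording so they can be sent out for annotation. The video file name, output folder, and use of OpenCV are assumptions made for the example; the paper does not describe the team's exact tooling.

```python
# A minimal sketch (not the team's actual pipeline) of randomly sampling frames
# from one recording so they can be sent out for annotation. The video path and
# output folder are hypothetical placeholders.
import os
import random

import cv2

VIDEO_PATH = "night1.mp4"   # hypothetical recording of one web-building night
OUT_DIR = "frames"          # hypothetical folder for the sampled frames
N_SAMPLES = 100_000         # the study sampled 100,000 frames from one recording

os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Draw a random subset of frame indices without replacement.
indices = sorted(random.sample(range(total_frames), min(N_SAMPLES, total_frames)))

for idx in indices:
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)   # seek to the sampled frame
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{idx:07d}.png"), frame)

cap.release()
```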

The Johns Hopkins University researchers evaluated two convolutional neural network (CNN) tracking frameworks, LEAP and DeepLabCut, using the high-quality training data sourced from the Appen platform.
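
For readers unfamiliar with these tools, the sketch below walks through the standard DeepLabCut project workflow (create a project, build a training set, train, evaluate, and analyze videos). The project name, experimenter, video path, and body-part choices are hypothetical placeholders rather than the study's actual configuration, and the LEAP side of the comparison is not shown.

```python
# A minimal sketch of the standard DeepLabCut workflow, not the study's actual
# configuration. Project name, experimenter, and video path are hypothetical.
import deeplabcut

# Create a project; DeepLabCut writes a config.yaml and returns its path.
config_path = deeplabcut.create_new_project(
    "spider-web", "jhu-lab", ["videos/night1.mp4"], copy_videos=False
)

# In config.yaml you would list the tracked body parts (e.g. 26 points covering
# the body and all eight legs) and import the externally annotated frames
# before building the training set.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)

# Evaluation reports train/test pixel errors on the labeled frames.
deeplabcut.evaluate_network(config_path)

# Run the trained network over full recordings to get per-frame leg positions.
deeplabcut.analyze_videos(config_path, ["videos/night1.mp4"])
```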

“I found the customer success and engineering support teams extremely helpful. They responded to emails nearly immediately and we had constant meetings to make sure that the project was running smoothly. I don't have a specific example of a time they helped, mostly because they were so in tune with our needs that problems didn't arise.”

-Nick Wilkerson, Research Specialist

The Results

According to the paper, both algorithms reached similar performance on the training sample (8.2 and 7.6 mean pixel error for LEAP and DeepLabCut respectively, and 4.5% and 3.3% of errors ≥25 pixels respectively). Read the research paper here to review the details of their methodology.
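
As a quick reference for how those two metrics are defined, the snippet below computes the mean Euclidean pixel error and the fraction of predictions that miss the labeled point by 25 pixels or more, given arrays of predicted and labeled keypoints. The file names and array shapes are hypothetical; the actual numbers come from the frameworks' own evaluation outputs.

```python
# A sketch of the two metrics quoted above: mean Euclidean pixel error and the
# fraction of predictions at least 25 pixels from the labeled point.
# The input files and array shapes are hypothetical.
import numpy as np

# Predicted and labeled keypoints, shape (n_frames, n_points, 2), in pixels.
pred = np.load("predicted_keypoints.npy")   # hypothetical file
true = np.load("labeled_keypoints.npy")     # hypothetical file

errors = np.linalg.norm(pred - true, axis=-1)   # per-point Euclidean distance
mean_pixel_error = errors.mean()
frac_large_errors = (errors >= 25).mean()

print(f"mean pixel error: {mean_pixel_error:.1f}")
print(f"errors >= 25 px:  {frac_large_errors:.1%}")
```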

Ultimately, the machine learning software enabled the research team to discover that spiders follow a general behavioral pattern when building webs. This information allowed the researchers to begin to predict which part of the web a spider was working on based on its leg positioning. While the final structure varies slightly from web to web, the webs follow a general pattern.

“In behavioral neuroscience, these limb tracking algorithms have been a real game changer when it comes to quantifying animal behavior, but the real challenge is training these algorithms in the first place, and for certain behaviors, like the 8 legs of a spider, it's a very challenging problem. The services Appen provides really enables the training of the algorithms to move the research forward, so it will allow more complicated behaviors to be quantified in the future.”

-Andrew Gordus, Behavioral Biologist, Johns Hopkins University

Initially concerned that contributors would have a hard time interpreting the images accurately, the team at JHU was pleased to find that they actually excelled at the task. The Appen Data Annotation Platform was excellent at evaluating contributors and making sure that only high-quality work was included in the final results.

With this information, researchers were also able to better understand spiders and how their brains work. Because the same behavioral patterns are shared across spiders when building webs, the scientists believe that the web-building patterns are encoded in their brains.

Spiders are among just a few animals that can build such complex and yet delicate structures. They’re part of an exclusive club of animal builders such as weaver birds and pufferfish.

This experiment isn’t the end for spider web research. The researchers at Johns Hopkins University are interested in further studying spiders’ web-building process and learning more about their brains.

Future studies will observe how spiders build webs while on mind-altering drugs, conducting similar research with the help of machine learning and high-quality training data. This will help scientists understand which parts of the brain are involved in the web-building process and how web structures change when drugs alter spider behavior. The hope is that these studies can then be used to draw parallels to how human brains are affected by different medications.