Tammy Garves and Phil Hall are Appen’s Senior Vice Presidents, each at the helm of one of the company’s two main divisions. Tammy leads our Content Relevance team. Phil heads up Language Resources. We caught up with them recently to talk about the need for structured data for machine learning. They also touch on industry trends, predictions, and why they like working at Appen.
Appen: So, Tammy, you lead the Content Relevance division for Appen. What is Content Relevance?
TG: We partner with companies working on machine learning or artificial intelligence (AI), helping them take their unstructured data and structure it. Tech companies have millions of “inputs,” or data points, and we help them make sense of that data by structuring it, tagging it, and helping to define what it’s saying. This work requires a large amount of human annotation. AI is only as intelligent as the human data that feeds it. Our evaluators or raters help data tell a better story. I have an amazing team who helps me do all of this. We work together to make sure we’re delivering the best data for our customers.
We work in a number of different industries, including search, eCommerce, and social media. For search companies, we help them make sense of search data by comparing queries with results. For example, say someone searches for hiking boots. If they get the homepage for an outdoor retailer, that’s a fair result. If they get the hiking boot category page, that’s a better result.
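The hiking-boots example above amounts to rating query-result pairs on a graded relevance scale. As a minimal illustrative sketch (the scale, field names, and URLs here are hypothetical, not Appen’s actual schema or tooling), a rater’s judgments might be captured as structured records like this:

```python
# Hypothetical graded relevance scale for query-result judgments.
RELEVANCE_SCALE = {
    0: "off-topic",  # result has nothing to do with the query
    1: "fair",       # related, e.g. an outdoor retailer's homepage
    2: "better",     # more specific, e.g. the hiking-boot category page
    3: "best",       # exactly what the user was looking for
}

def record_judgment(query, result_url, score):
    """Bundle one human rater's judgment into a structured record."""
    if score not in RELEVANCE_SCALE:
        raise ValueError(f"score must be one of {sorted(RELEVANCE_SCALE)}")
    return {
        "query": query,
        "result": result_url,
        "score": score,
        "label": RELEVANCE_SCALE[score],
    }

judgments = [
    record_judgment("hiking boots", "https://example-outdoors.com/", 1),
    record_judgment("hiking boots", "https://example-outdoors.com/hiking-boots", 2),
]
```

Thousands of such structured judgments are what a search engine’s ranking model can actually learn from, which is why the human annotation step matters.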
For eCommerce companies, we do a lot of the same kind of query matching to help online retailers improve results, and help their customers find what they need.
For social media, we help companies personalize content for their users. We evaluate what’s showing up in people’s feeds—content and ads—and determine whether or not it’s important and relevant to them. That personalization and relevance are what keep people coming back, which in turn increases the value for advertisers.
Appen: And Phil, tell us about Language Resources.
PH: We work with companies making cutting-edge products—like in-home assistants with voice recognition, car navigation systems, and even self-driving cars—collecting and annotating data. For us, data collection is traditionally speech, but it also includes text as well as images and sounds. We use all of that data to build pattern-recognition technologies, like voice recognition. When clients want to build a brand-new product, they work with our data collection team. Together, we create the high-quality, structured data for machine learning required to support each client’s products.
For example, we have worked with clients to train in-home assistants that interact with users through voice recognition. In this style of collection, we rent houses, stage them with appropriate furniture, and hire people to come in and use the product so we can collect that sound data. Similarly, for a car manufacturer that wants to expand into a new market, we hire people in that market to drive around and use the technology. We capture the spoken language of native speakers and also the acoustic conditions of those particular environments. This approach of collecting data in real-world, or carefully simulated, environments results in a high-quality end product. The ambient acoustics in a car driving through snow, for example, are very different from one driving in dry conditions. It’s important to collect that environmental data so that voice-recognition systems can tell the difference between speech and “noise.” Our focus on collecting accurate data and structuring it for machine learning is what makes Appen’s solutions so effective.
Data annotation is typically what we do for clients looking to enhance an existing product. For that business, clients send Appen their real voice-interaction data. We then examine it to help them identify and correct recognition failures, so they can retrain their models and improve performance. This is something clients can do themselves at small scale, but they come to Appen when they need a lot of high-quality data: 10,000 hours of audio material is not an unusual volume of data to bootstrap a speech recognizer in any one language. They come to us because of our 20-year track record of doing this work, our deep expertise, our superior data, and our reputation for contributing to a higher-quality end result.
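A standard way to quantify the recognition failures described above is word error rate (WER): the number of word-level substitutions, insertions, and deletions needed to turn the recognizer’s output into the human transcript, divided by the transcript length. A minimal sketch (illustrative only, not Appen’s actual evaluation pipeline):

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> WER of 0.25
wer = word_error_rate("turn on the lights", "turn on the light")
```

Utterances with high WER are exactly the ones a human annotator would flag and correct so the client can retrain the model on them.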
Appen: What do you like most about the work you do?
PH: I love that we have a window into the future. I’ve been with Appen since 2001. Throughout that time, I’ve been helping clients create products that will appear in the marketplace a year or two later. That’s exciting. It’s also very satisfying to help take technology out to a huge range of languages. Most new technology goes to the US market, and American English, first. That won’t change. But Appen has worked in more than 180 languages, so we help our clients take these technologies to smaller and less well-resourced markets such as those speaking Hausa, Lao, Sindhi and Dholuo.
Also, I love being surrounded by the smartest people in town. There’s something about our business that attracts a lot of really bright, like-minded people.
TG: There’s never a dull moment at Appen. It’s the thing I love the most about this job, and the thing that keeps me busiest. Our clients have the most exciting challenges—and sometimes they’re really big challenges. Helping our clients solve them and move forward is exciting and rewarding. But sometimes, as Phil mentioned, we do have to wait a year or two to see it hit the market. Machine learning technology is pretty amazing. I think that the industry is in the process of discovering all the incredible things we can do with AI. It’s endless. I’m surprised every day at what our customers are doing with AI.
Appen: As you both work with cutting-edge and emerging technology, are there areas where you’re seeing an increasing demand? What are the trends?
TG: There’s a huge demand for data structuring. Companies are collecting more of it than ever before, and trying to figure out what to do with it all, how to make sense of it so they can put it to good use. With the rise of the Internet of Things, with more smart devices out there collecting more data, this trend will certainly continue.
For example, airplanes have a lot of sensors on them, providing data about all kinds of things. Airlines and aerospace companies now have thousands upon thousands of data points from those sensors, and are working on structuring it in a way that makes it easy to understand what it all means so they can use it to improve their operations. This trend is also affecting the healthcare industry, finance, search, and the list goes on. There are tons of data, and the amount is only growing.
PH: One trend I see is that we used to think of ourselves as a language business, but more than that, we really are a data company; the data we work with includes text, images, video and audio, including speech and non-speech sounds. That’s a reflection of growing interest in autonomous vehicles and robots that need to process environmental data fast, and in a detailed way.
Another trend is the increasing desire for personalization. The more a site or social media feed is tailored to the individual user, the more useful it is. But offering recommendations requires us to work with private data, which can be a sensitive topic. In-home assistants, for example, are in your home with your family, listening all the time for your voice commands. The training data needs to help the assistants distinguish between when people are speaking to each other, and when they’re talking to the machine.
I’m also seeing a trend toward crowdsourcing, but curated crowdsourcing. As Tammy mentioned, so much of the work we do requires human beings to do the detailed annotation work: assessing query-result pairs, annotating images, etc. Companies want all the benefits of crowdsourcing—the cost-efficiency, diversity and immediacy—but with a more professional angle, and under secure conditions. That’s exactly what we provide.
Appen: Do you have any predictions for where things are heading?
TG: My prediction is everything we do will be more personalized and proactive. When we’re trying to, say, find a movie in our area, the information will be more readily available, geared to our personal preferences and sent to us before we ask for it.
My other prediction is that AI will not put the entire planet out of work! It will create interesting and challenging jobs. We’ll have to change the way we educate people so that we develop different skillsets. There will be a lot more automation, but I think that for every job it takes away, AI will create some sort of tech job.
PH: I agree about the jobs. Every advance just begets the next advance. Our clients used to build tools on 100 hours of data, then it took 1,000 hours, and now it’s 10,000 hours. Humans have to touch all of that data. To train a robot, you might have to feed it millions of images or videos. Each photo, each video frame has to be annotated in detail by a person, because machines don’t generalize as well as humans. Humans can identify and deal with unseen situations, such as unusual accents or unexpected background noise, better than machines. We’re better at scanning a bunch of data and identifying what’s most important, most relevant to the situation, and we’re also better at complex problem-solving.
Appen: Do you ever find your teams collaborating on projects for clients?
PH: Sure. We had a data collection project recently that needed to be done in Seattle on very short notice. My team is based in Sydney, so rather than send our whole team over there, we trained up some of Tammy’s team who are based in Seattle, and they did it for us. We’re keen on cross-pollinating and growing people at Appen.
TG: There are also projects that require both data annotation and voice data collection. Sometimes it’s not enough for a client to have a typed version. We have one client who sends us data, and we do the annotation work on that data and then hand it off to Phil’s team for the voice recordings.
Appen: Why is the work Appen does important?
TG: I think that people are busy, society is busy, and anything we can do to make things easier for people, to use AI to create efficiencies, that will make us more productive as employees, parents, spouses and as humans.
PH: I’d say we are improving people’s lives by bringing tech to smaller markets, supporting counter-terrorism technologies, and making driving a safer experience.
I think I speak for a lot of us when I say I’m proud of the work we do. What we do is difficult and complex, and we put a lot of effort into finding top people to join us. It’s hard work, but once people are here, they don’t want to go anywhere else. Our retention rate is enormous. After 16½ years, I’m still here—and honestly, I never thought I’d ever have a desk job. *
* Phil is the former bass guitarist for the Australian psychedelic punk group Lime Spiders, and will temporarily re-join the band for a reunion show in Spain this year.