How Appen Struck Gold in Language Services

Language services and technology company Appen announced on October 4, 2016, the acquisition of UK-based transcription provider Mendip Media Group (MMG) for an undisclosed sum. Appen is headquartered in Sydney, is listed on the Australian Securities Exchange (ASX), and has the bulk of its client base in the United States. […]

October 24th, 2016 | Press

The holidays are coming. Is your on-site search ready?

A recent study reveals that on-site search is key to engaging with your customers once they reach your site. According to the survey results, 73% of respondents are likely to leave a retail site that doesn’t provide good search results, and 37% say they are not likely to return. […]

October 6th, 2016 | Press


Appen Announces Strategic UK Acquisition

Level 6, 9 Help Street, Chatswood, NSW 2067 | Tel +61 2 9468 6300 | Fax +61 2 9468 6311

ASX ANNOUNCEMENT, 4th October 2016: Appen Limited today announced that it has acquired Mendip Media Group Limited (“MMG”) to enhance the provision of language services to government clients by the Language Resources division. […]

October 3rd, 2016 | Press

Appen eCommerce Team to Exhibit at Retail’s Digital Summit 2016

Appen’s eCommerce team is pleased to announce that they will be exhibiting at Retail’s Digital Summit, September 26–28 in Dallas, Texas. Stop by and visit our team at booth #7087 and let the experts show you how to convert your visitors into buyers. […]

September 19th, 2016 | Press

Appen to Exhibit at INTERSPEECH 2016 in San Francisco

Contact: Rachael Pappas, Appen

Language Technology Service Provider Appen Announces New Enterprise Data Analysis Platform and Sponsors Student Scholar

Sydney, Australia – September 7, 2016: Appen is pleased to announce that it will be exhibiting at the INTERSPEECH 2016 Conference in San Francisco, California, September 9–12, in Booth #1. Appen will be presenting its new enterprise data analysis platform, Appen Global. […]

September 7th, 2016 | Press

Appen to Exhibit at SIGIR 2016

Appen, your machine learning partner, is excited to announce that we will be exhibiting July 17th through 21st at the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval in Pisa, Italy. We love this show because it brings together some of the top data scientists and machine learning minds to collaborate on advances in web, social, and eCommerce search. […]

July 12th, 2016 | Press

Appen Named a Top 100 Company to Watch for Telecommuting Jobs in 2016

Appen is ranked #77 among the top 100 companies hiring telecommuters in 2016 […]

February 5th, 2016 | Press

How to Write Great Rating Guidelines

6 Steps to High-Quality, Objective Data

By Ben Christensen & Carla Haces

Human evaluation is a key factor in improving web search, eCommerce search, and social media relevance algorithms. Humans are simply better than computers at managing subjectivity, understanding intent, and coping with ambiguity. But human evaluation comes with a risk: the very subjectivity we rely on humans for can also lead to inconsistent data. How do you mitigate this? With great rating guidelines. Clearly written guidelines harness the subjective power of the human brain and transform it into objective data that can in turn be used to train computers to better understand that most important human of all: the end user. A set of guidelines is a document containing instructions for using a given tool or completing a given task. It can be very brief (a few pages) or as long as a book, depending on the complexity of the task and the background of the evaluators. Guidelines are often written for non-technical (or semi-technical) readers, so the wording must be clear, concise, and effective in communicating the message without being overly technical, complex, or redundant. Be brief but detailed. […]
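Whether a set of guidelines is actually producing objective data can be checked empirically. As a minimal sketch (not part of the original article; the raters, labels, and threshold below are hypothetical), one common approach is to have two evaluators rate the same items under the same guidelines and measure chance-corrected agreement such as Cohen's kappa, since persistently low agreement usually points back to unclear guideline wording:

# Minimal illustrative sketch: hypothetical raters, labels, and threshold.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters labelling the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                   for label in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Hypothetical relevance judgements collected under the same guidelines.
rater_1 = ["relevant", "relevant", "off-topic", "relevant", "partial"]
rater_2 = ["relevant", "partial",  "off-topic", "relevant", "partial"]

kappa = cohens_kappa(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")  # about 0.69 for this toy data
if kappa < 0.6:  # illustrative cut-off only
    print("Agreement is low: revisit the wording of the guidelines.")

Run continuously as new evaluators are onboarded, a check like this can flag which sections of the guidelines most need clarification.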

February 4th, 2016 | Press

Curating the Crowd: When to Use Curated Crowds vs. Crowdsourcing

By Ben Christensen, Director, Content Relevance

“When should I use crowdsourcing and when should I use a curated crowd?” This is the question anyone interested in staffing human annotation tasks should be asking, but many don’t, because they don’t even know there are two different options. So let’s start there: defining the options. Assuming you need human annotation, for example for search relevance evaluation, there are two ways you can gather the necessary humans to do that work:

1. Crowdsourcing, where the task is made available to a large crowd without any training or management beyond a very limited set of task instructions and possibly a simple screening test; or
2. Curated crowds, where a smaller group is selected to complete the task accurately according to quality guidelines.

The power of crowdsourcing is in its numbers. You can accomplish a lot quickly because many hands make light work: a hundred thousand people can do quite a bit more than a hundred can. The cost is lower because crowdsourcing typically pays only a few pennies per task; most members of the crowd aren’t trying to make a living, they’re just trying to make a few extra bucks in their spare time. There’s usually little overhead involved in crowdsourcing because the crowd looks after itself: you put the task out there, and if it’s interesting enough and pays enough, the crowd will get it done. […]
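To make the scale-and-cost trade-off concrete, here is a small back-of-envelope sketch (not from the article; the crowd sizes, per-task pay, and throughput figures are purely hypothetical placeholders) comparing how long a large annotation job might take and what it might cost under each staffing model:

# Back-of-envelope comparison with hypothetical figures only.
from dataclasses import dataclass

@dataclass
class Crowd:
    name: str
    workers: int            # people working on the task at once
    tasks_per_hour: float   # per-worker throughput
    pay_per_task: float     # USD paid per completed task

    def hours_to_finish(self, total_tasks: int) -> float:
        return total_tasks / (self.workers * self.tasks_per_hour)

    def cost(self, total_tasks: int) -> float:
        return total_tasks * self.pay_per_task

open_crowd    = Crowd("crowdsourcing", workers=100_000, tasks_per_hour=20, pay_per_task=0.03)
curated_crowd = Crowd("curated crowd", workers=100,     tasks_per_hour=25, pay_per_task=0.25)

total_tasks = 1_000_000
for crowd in (open_crowd, curated_crowd):
    print(f"{crowd.name}: {crowd.hours_to_finish(total_tasks):,.1f} hours, "
          f"${crowd.cost(total_tasks):,.0f}")

What the raw speed and cost numbers don't capture is accuracy and consistency, which is exactly what a curated crowd's selection and quality guidelines are meant to buy.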

November 17th, 2015 | Press

Appen’s Dorota Iskra to Speak at LT-Accelerate 2015

Appen is proud to announce that Dorota Iskra will be speaking at LT-Accelerate 2015 in Brussels, Belgium on Monday, November 23rd. Dorota’s presentation is entitled “Crowdsourcing in language data collection and annotation”. Appen has been active in the area of data collection and annotation for over 15 years. Starting with a traditional approach of working locally, we have gradually moved to using web-based tools. This presentation focuses on the advantages of web-based tools, such as greater reach, lower cost, and the ability to build an extensive network, or crowd. On the surface, crowdsourcing appears to be an attractive alternative for data collection and annotation, but it poses many challenges. To address these challenges, we have built our own crowd in a controlled and tested environment. Dorota has extensive experience in speech and language technology where, after a period of research, she moved towards applications and language resources. She has worked in the telecom and software development industries and is currently responsible for the European business at Appen, a company collecting speech and language data and providing a wide range of linguistic services.

November 11th, 2015 | Press