Our team recently attended Crowdsourcing Week (CSW) in Seattle. CSW connects people with best practices in crowdsourcing and crowd innovation.
Several sessions at CSW focused on soliciting creative work from the crowd to create sellable assets such as music, writing, and design. Other speakers highlighted their work with crowdsourced efforts such as the XPrize or pooling funds in peer-to-peer lending.
One session focused on the importance of the crowd in Artificial Intelligence developments.
In her presentation titled “Humans to the Rescue: Troubleshooting AI Systems with Human-in-the-Loop,” Ece Kamar, Senior Researcher at Microsoft Research AI, discussed the need for human intervention to resolve issues and improve algorithms.
AI has made huge leaps forward in the past few decades. “Behind all the AI improvements has been large amounts of collected training data,” explained Kamar. “That data comes from the crowd.”
When AI has been released in “the wild,” though, the results can be less than desirable.
From Microsoft’s racist TayTweets chatbot to Google’s photo tagging of two African Americans as ‘gorillas,’ AI can fail without checks and balances. “Collaboration with human intelligence is the key for building reliable AI systems. Humans need to be kept in the loop,” said Kamar.
There’s also power when humans and AI collaborate.
One example Kamar highlighted was the 1997 series between World Chess Champion Garry Kasparov and IBM’s Deep Blue. After Kasparov lost to Deep Blue, some thought humans wouldn’t play chess anymore. Indeed, according to research, human chess-playing ability hasn’t drastically improved since 1980, while chess-playing bots have. However, when humans and chess software play together, the team outperforms even the best chess algorithm on its own (see the green line in the chart below).
To advance AI, identify failures, and enhance algorithms, a crowdsourced feedback loop is essential, Kamar said. “Perfecting these complex systems doesn’t work without the human input.”