Insights from Conversational Interaction 2018: NLP, Chatbots, and Comedians
Prithivi Pradeep, Appen VP of Business Development, and I recently attended Conversational Interaction in San Jose, CA. The conference gives companies and individuals a great opportunity to share knowledge and learn about the resources that help them embrace AI. The importance of Artificial Intelligence technology has been emphasized by influential corporate executives and thoughtful analysts alike, and conversational interfaces are a major part of that trend.
The conference highlighted an exciting, dynamic technology space.
In-home speech technology, for example, has seen a huge boost recently. Alexa Skills in particular have inspired and facilitated a great deal of development.
Numerous industry players are also trying to bridge the disconnect between chatbots and natural human communication. There has been a lot of progress on the “understanding” side of the equation (getting the system to correctly detect users’ intents), but this remains a very challenging problem.
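To make the “understanding” side concrete, here is a toy sketch of intent detection: mapping a user’s utterance to the most likely intent. The intent names and keyword lists below are invented for illustration, and real systems use trained classifiers over annotated data rather than keyword overlap, but the core task is the same.

```python
import re

# Hypothetical intents, each with a set of indicative keywords.
# (Purely illustrative — production systems learn these from labeled data.)
INTENTS = {
    "check_order_status": {"order", "status", "shipped", "tracking"},
    "cancel_order": {"cancel", "refund", "return"},
    "greeting": {"hi", "hello", "hey"},
}

def detect_intent(utterance: str) -> str:
    """Return the intent whose keywords best overlap the utterance."""
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    # Score each intent by how many of its keywords appear in the utterance.
    scores = {name: len(tokens & keywords) for name, keywords in INTENTS.items()}
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    # If nothing matched, fall back — handling this case gracefully is
    # where much of the real design work in a chatbot lives.
    return best_intent if best_score > 0 else "fallback"

print(detect_intent("where is my order, has it shipped?"))  # check_order_status
print(detect_intent("blah"))                                # fallback
```

Even this naive version shows why the problem is hard: synonyms, paraphrases, and ambiguous utterances all defeat simple matching, which is why high-quality annotated training data matters so much.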
On the other hand, the language generation side of the equation—what the chatbot itself has to say—is still largely in its infancy. This idea was something of a theme throughout the conference, and it surfaced most pointedly in a few anecdotes:
Several presenters mentioned the need for conversational UX design, and hence conversational UX designers—a discipline that doesn’t really exist yet but would sit at the intersection of UX design, sociology, and language technology.
In a talk on “10 Key Learnings from Having Built More than 100 Chatbots,” Aakrit Vaish, CEO of Haptik, mentioned that customers responded very positively to bots that displayed some personality. As a result, his company hired a stand-up comic to help create their bot’s responses. As he put it, “we set out to hire a bunch of engineers and wound up hiring a comedian.”
The 2017 Alexa Prize offered $1 million to any student team that could build a bot able to converse coherently and engagingly with a human for 20 minutes. No one managed to do it. The winning team, from the University of Washington, created a bot that averaged about 10 minutes.
In addition to attending the event, I also spoke on the importance of high-quality data for building effective language-based solutions. In the presentation, I discussed the critical but often unacknowledged role of human-annotated linguistic data in the development of AI systems, touching on the cost of having inadequate training data, along with some of the practical concerns associated with acquiring high-quality human annotation.
Interested in the topic I spoke on? I’ll be expanding on this presentation in a free AI Trends webinar on April 4th. If you’d like to attend, send us a message and mention this event. We’ll email you an invitation to join once registration is live.