
5 Things We Learned at O'Reilly's AI Conference

Published on September 21, 2017

There is no shortage of AI conferences out there, but the event the folks from O’Reilly threw this past week is one of the best in recent memory. They managed to couple immersive business keynotes with technical takes on the state of the art in AI. And while we saw tons we enjoyed, here are a few things that stood out:

Algorithmic bias is a pervasive problem, but it’s one that can be solved

We’ll be writing about this next week, but algorithmic bias is a ubiquitous problem. Put simply, algorithmic bias describes when a model outputs unfair or prejudiced conclusions. These stories come out with clockwork regularity, from ad networks showing high-paying jobs to men far more often than women, to bogus sentencing based on classifiers, to an AI-judged beauty contest that demonstrated a preference for white contestants.

It’s worth mentioning that these outcomes are rarely intentional. They’re usually born from a confluence of engineering teams not asking the right questions up front and subtly biased datasets producing outcomes that actually exacerbate the problem.

So how do you fix it? Diverse engineering teams are an important first step. Analyzing outputs for these biases is also extremely important (there’s a quick sketch of what that can look like below). Creating inclusive products is the third. And if you’re thinking “wait, that’s the whole process!” then, well, you’re right.

Take the case of one of our AI for Everyone winners, KivaFaces. They found facial recognition models often simply couldn’t find people from the developing world, so they’re creating a dataset of scored images from that exact population. Kiva noticed the problem, and their dataset can be used to create a more inclusive algorithm.

Caption: Daniel Guillory from Autodesk (right) and Matthew Scherer finished our first day talking about how to build an unbiased AI
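To make that “analyze your outputs” step a little more concrete, here’s a minimal sketch of one common audit: compare how often a model hands out a favorable outcome to each group, then look at the ratio. Everything in it (the group labels, the toy predictions, and the rough 0.8 “four-fifths rule” threshold mentioned in the comments) is an illustrative assumption, not anything presented at the conference.

```python
# Illustrative sketch: auditing a classifier's outputs for group-level skew.
from collections import defaultdict

def positive_rates(predictions):
    """predictions: iterable of (group_label, predicted_positive: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, is_positive in predictions:
        totals[group] += 1
        if is_positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(predictions, protected_group, reference_group):
    """Ratio of favorable-outcome rates; values well below ~0.8 (the informal
    four-fifths rule) suggest the model favors the reference group."""
    rates = positive_rates(predictions)
    return rates[protected_group] / rates[reference_group]

# Toy usage: did a hypothetical hiring model recommend men far more often than women?
preds = ([("women", True)] * 12 + [("women", False)] * 88
         + [("men", True)] * 30 + [("men", False)] * 70)
print(disparate_impact(preds, protected_group="women", reference_group="men"))  # -> 0.4
```

A ratio that low doesn’t prove bias on its own, but it’s exactly the kind of signal that should send a team back to its training data before the model ships.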

AI companies will be a whole different kind of business

Andrew Ng had a brief throwaway joke that stuck with us: “A shopping mall with a website isn’t an internet company.”

Okay, admittedly, not the funniest thing you’ve ever heard, but the point stands. Web companies do operate differently: they’re more agile, have more diffuse decision-making structures, and generally prioritize action.

In the same way a shopping mall with a website isn’t an internet company, an internet company with a neural net isn’t an AI company. Ng outlined a few traits he feels will exemplify AI companies going forward, and one of the big ones surrounded data. Namely, AI companies will strategically acquire data to build defensible businesses.

This, of course, makes a ton of sense. Having the best quality and quantity of data means your AI is far more likely to be successful. It also means less competition. You can see that with Google search now: while a lot of their strength obviously comes from their engineering team, their algorithms, and the simple UI, Google remains the gold standard because their data lead is all-caps SUBSTANTIAL.

Building an AI company requires a few other things too. For one: centralized data warehouses. Your engineers need data access and they need it on demand. Automating as much as possible (i.e., leveraging a key benefit of AI) will also be a defining trait. Lastly, be willing to update your job descriptions. Product managers won’t be working from wireframes, engineers will need ML chops, and your sales folks will need to understand a different level of nuance for you to be successful.

We care more about emotion than you’d think

There’s no shortage of audio personal assistants out there. Hound, Siri, Google Home, and Alexa come to mind, but that’s certainly not an exhaustive list. For the sorts of discrete tasks you’d expect these AIs to excel at (things like directions to a movie theater, recent stock prices, or flight information) their accuracy still isn’t anything to write home about: you’re looking at topping out around 77% and bottoming out somewhere around 50%. Additional training data, usage, algorithmic knob-turning, and the like should keep those numbers climbing.

But interestingly, when it came to preference (as in, which assistant users actually want to use) it turns out accuracy isn’t all that important.

John Whelan from 10Pearls shared some fascinating research he and his team did on these AI assistants (that’s where those percentages above are from, in fact). And while they learned a lot, that bit really stuck out from his presentation.

The takeaway: we care far more about how these assistants make us feel than how right they are. People liked the banter and back-and-forth they had with Alexa even though it was measurably less accurate for certain tasks. And the folks in this study knew Alexa was falling flat on some of their questions. They just didn’t care. They liked her.

Which is to say, if you’re designing an assistant, don’t skimp on its “personality.” It might be more important than its accuracy.

And emotionally aware machines are coming

Think about all the stuff your phone already knows about you. It knows where you are, right now. It knows the websites you visit, the games you play on the subway, the appointment you’re heading to, and what the weather will be when you get there. It’s completely integrated in your day-to-day life, but just at a logistical level. What if it (or really, any technology) knew how you felt?

This was the basis of Rana el Kaliouby’s short but provocative talk on emotionally aware machines.

Say you’re looking to make an assistant that understands how you feel. How would you do that today? Interestingly, if you’re looking at something like a simple conversation, only 7% of the emotional content comes across in the actual text of what’s said. 38% of your emotion is reflected in how you say it, and the remaining 55% of your emotional cues are betrayed by your facial expression and affect.

In other words, all the sentiment analysis in the world is only going to get you to understand 7% of a user’s emotion. That’s fractional and, in the grand scheme, pretty unimportant. El Kaliouby’s company Affectiva focuses on emotional datasets that aim to help make emotionally aware AIs a real thing. Cars that know when you’re distracted or stressed, refrigerators that know you’re upset, call trees that know you’re frustrated and solve your problem with a lighter touch: it’s all achievable through datasets of spontaneous emotional reactions. And if a goal of AI is to give us technologies that better understand our needs and can meet them with less fuss, this is the sort of work that gets us there.

Caption: Rana el Kaliouby on the merger of IQ and EQ (a.k.a. emotional quotient)

AI doctors will usher in a new era of prevention, not drugs

There were copious chats about AI transforming healthcare and, well, that makes sense. It’s an industry with tons of high-quality data, lots of difficult but solvable problems, and one that employs plenty of smart people.

One salient thing that stuck with us, though, was Vijay Pande’s point about Moore’s Law. Moore’s Law, for those who forgot, forecasted the exponential increase in compute power since the mid ’60s. It’s one of the three big reasons the most recent AI winter thawed.

The price of drugs, sadly, is climbing exponentially as well. And if prices keep tracking as they have for decades, we’re fast approaching an era where only the richest can afford the cures medical science discovers.

Now, depressing as this is, the research being done in AI might help. That’s because a lot of that research deals with diagnosing diseases before they become intractable. Imagine getting blood taken and having an AI identify an early-stage cancer in time to arrest it. Imagine relying less on unaffordable drugs and more on a yearly checkup. Imagine prevention becoming the norm. It’s not far-fetched at all. And with the healthcare debate growing more risible by the week, it’s at least a small silver lining.
