We’re talking about business process automation
AI solutions often involve taking repetitive, time-consuming tasks and developing methods to complete them more efficiently. In that sense, AI is often a form of Business Process Automation (BPA), except the process you are automating is informed human judgment. BPA usually addresses predictable, highly repetitive tasks, but advances in computing have dramatically expanded the scope of what can be automated, including the automatic processing of texts and images. Still, thinking about AI solutions in this way can help you ground your adoption approach in more familiar territory.

Work on automatic text processing began half a century ago. When the US Postal Service started using Optical Character Recognition (OCR) to help sort the mail, it was traditional BPA through and through: machines were programmed to recognize characters in portions of typed address labels, and letters were routed accordingly. It was the dawn of modern AI, though, that really kicked things into high gear. Now we can build systems that understand the meaning and sentiment of texts and recognize objects in images.
Learning takes time. Be incremental.
You probably wouldn’t hire an inexperienced person and expect them to do a perfect job right away. Nor would you assign them the most complicated task first when you could start them off with something more straightforward. Successful onboarding requires adequate training and reasonable expectations. Machine learning is learning too: your AI is only as good as the data it’s been trained on, so the same wisdom applies.
There are tradeoffs
Assembling a physical component involves the same steps whether a human or a machine does it; the machine is simply faster. Using AI to automate tasks like text classification or image analysis isn’t quite the same situation. Humans have minds, whereas machines look for patterns in features they extract from the data. As a result, machines can classify things differently from their human counterparts. Often this improves results, but sometimes it leads to errors a human wouldn’t make. Don’t let that erode your confidence in AI: your system probably also got some things right that humans would have missed. Remember that learning is an incremental process. By focusing on your system’s high-confidence predictions, you can still automate a portion of your data processing with human-level accuracy, and with the right data your model will improve over time.

Adopting AI also, almost certainly, means a degree of behavior change. Roles may shift, processes will be altered, and your results might look a bit different. Organizations are often reluctant to change behavior because there is a break in continuity and an up-front cost of adapting. But this hurdle is worth clearing: you will adjust more quickly than you think, and the benefits of automating a labor-intensive process will more than offset the cost of switching. It will probably even open up opportunities that didn’t exist before.
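To make the high-confidence idea concrete, here is a minimal sketch of confidence thresholding in Python. The model, the 0.9 threshold, and the triage function are illustrative assumptions rather than a prescribed implementation; the point is simply that confident predictions can be automated while uncertain ones go to a human review queue.

    # A minimal sketch of confidence thresholding. "model" is any fitted
    # classifier exposing predict_proba (e.g., a scikit-learn estimator),
    # and the 0.9 threshold is an illustrative assumption: tune it on
    # held-out data for your own accuracy/coverage tradeoff.
    CONFIDENCE_THRESHOLD = 0.9

    def triage(model, items):
        """Split items into automated decisions and a human review queue."""
        probabilities = model.predict_proba(items)  # items: 2D feature array
        automated, needs_review = [], []
        for item, probs in zip(items, probabilities):
            confidence = probs.max()
            if confidence >= CONFIDENCE_THRESHOLD:
                label = model.classes_[probs.argmax()]
                automated.append((item, label, confidence))
            else:
                needs_review.append(item)
        return automated, needs_review

Raising the threshold automates less but more accurately; lowering it automates more at the cost of more machine-made errors. Tracking both numbers over time is a useful way to measure progress.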
Building confidence
In evaluating a human employee, it might make sense to spot-check their work: select a handful of recent judgments they made and see if they look solid. Based on this, you can make pretty reliable inferences about how good the rest of their work will be. While this might be a useful strategy for evaluating human employees, there are reasons to be careful using it to evaluate the performance of your AI system. First, because your AI can operate at a much greater scale than a human, a small handful of examples is less representative. Second, because machines and humans reason differently, you should evaluate them differently. Third, and most importantly, too much spot-checking can cause you to fixate on isolated errors. I’ve seen organizations that were new to AI lose faith unnecessarily because they encountered one or two “inexplicable” errors, even though their system was performing fairly well overall.

Instead, evaluate your classifier at a larger scale. One option is a technique known as cross-validation; another is to hold out a separate, large test set of items for evaluating your model’s predictions. Are the high-confidence predictions accurate? Does the model’s performance improve over time as you collect more training data? Does it excel or struggle with certain types of examples? Answering these questions against a larger dataset will give you a much more useful picture of your AI system.
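As a sketch of what larger-scale evaluation can look like, the snippet below runs both approaches with scikit-learn. The digits dataset and the logistic regression model are placeholders standing in for your own labeled data and classifier.

    # A sketch of two larger-scale evaluation strategies. The dataset
    # and model are placeholders; substitute your own labeled data.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = load_digits(return_X_y=True)      # stand-in for your labeled data
    model = LogisticRegression(max_iter=5000)

    # Option 1: k-fold cross-validation. Every labeled item is used for
    # both training and evaluation, so the accuracy estimate is far more
    # representative than spot-checking a handful of predictions.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"cross-validation accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")

    # Option 2: a separate held-out test set the model never sees during
    # training, evaluated once after fitting.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    model.fit(X_train, y_train)
    print(f"held-out test accuracy: {model.score(X_test, y_test):.3f}")

Slicing results like these by example type, or by prediction confidence, answers the questions above far more reliably than a handful of spot checks.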
Maintain and improve your models over time
Accurate data classification is often a moving target. Your data changes over time, and so do the questions you are trying to answer. A news story that was relevant to your company a year ago may no longer be important today, and what was once a low-priority customer support issue may now be critical. To capture these changes and maintain the accuracy of your AI system, you need to commit to ongoing evaluation and maintenance. The most important maintenance task is keeping your training data up-to-date. Provide your classifier with fresh data so that new trends are accurately captured and available to the model. Also make sure that the labels applied to your data are consistent, and that when things change, out-of-date items in the training data are updated or removed so they don’t confuse your model.
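As one way to operationalize this, here is a minimal sketch of a data-refresh step that could run before each scheduled retraining. The LabeledItem structure and the one-year staleness cutoff are hypothetical; the right cutoff depends on how quickly your domain changes.

    # A minimal sketch of keeping training data fresh before retraining.
    # The data layout and the one-year cutoff are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class LabeledItem:
        text: str
        label: str
        labeled_at: datetime

    MAX_AGE = timedelta(days=365)  # hypothetical staleness cutoff

    def refresh_training_data(existing, newly_labeled, now=None):
        """Drop stale items, then merge in freshly labeled examples."""
        now = now or datetime.now()
        fresh = [item for item in existing if now - item.labeled_at <= MAX_AGE]
        return fresh + list(newly_labeled)

    # After refreshing, retrain the model on the updated dataset so that
    # recent trends (and corrected labels) are reflected in its predictions.

Scheduling a refresh like this alongside the larger-scale evaluation described above closes the loop: you update the data, retrain, and verify that accuracy held steady or improved.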