Artificial Intelligence (AI) has tremendous potential to increase productivity, uncover new insights and create new experiences. But such high-potential technology demands that organizations build AI through a responsible lens to ensure it has a net positive impact. The European Commission (EC) has taken steps to turn responsible-AI principles into law, recently expanding its product safety and liability regulations and releasing a set of “trustworthy AI” guidelines. It’s all part of the Commission’s effort to foster development of AI systems that are lawful and ethical and that minimize the impact of unintended consequences.

Gartner offered its own recommendations to help companies take stock of the opportunities and start planning. Below are a few key takeaways.
1. Develop modular applications in which the pieces of an AI system can be easily changed to accommodate the requirements of different regions.
Every geography has a unique set of requirements and priorities that determine what is legal, ethical and permissible. For example, much of the groundbreaking work around AI has happened in the United States and China. Both countries have taken more of a laissez-faire approach to overseeing AI development, each in pursuit of its respective goals of technological innovation and economic leadership. By taking a modular approach to application development, companies can more easily adapt their AI systems to the regulations and requirements of a particular region (the EU, in this case), country or industry vertical.
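One way to achieve that modularity (a minimal sketch in Python; the class names, rules and thresholds are illustrative assumptions, not a prescribed architecture) is to isolate each jurisdiction’s requirements behind a common interface, so the rest of the system stays unchanged when a new region comes into scope:

```python
# A minimal sketch of a modular design: region-specific compliance rules sit
# behind a small interface so they can be swapped without touching the core
# pipeline. All names and thresholds here are illustrative.
from abc import ABC, abstractmethod


class RegionPolicy(ABC):
    """Encapsulates the requirements of one jurisdiction."""

    @abstractmethod
    def validate_training_data(self, record: dict) -> bool:
        ...

    @abstractmethod
    def requires_human_review(self, prediction_confidence: float) -> bool:
        ...


class EUPolicy(RegionPolicy):
    def validate_training_data(self, record: dict) -> bool:
        # e.g., reject records lacking a documented lawful basis for processing
        return record.get("lawful_basis") is not None

    def requires_human_review(self, prediction_confidence: float) -> bool:
        # e.g., route low-confidence decisions to a human reviewer
        return prediction_confidence < 0.9


class USPolicy(RegionPolicy):
    def validate_training_data(self, record: dict) -> bool:
        return True  # looser default; sector-specific rules plug in here

    def requires_human_review(self, prediction_confidence: float) -> bool:
        return prediction_confidence < 0.5


def select_policy(region: str) -> RegionPolicy:
    """Swap the policy module per deployment region; the rest of the system is unchanged."""
    return {"eu": EUPolicy(), "us": USPolicy()}[region]
```

With a structure like this, supporting a new jurisdiction or industry vertical means adding one policy module rather than reworking the whole application.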
2. Study the seven key requirements for AI outlined in the EC white paper.
It won’t be enough to tweak your AI strategy at the margins. To be compliant, companies must apply the spirit of the guidelines throughout the process, from the design of an AI system through development, training, deployment and beyond. Doing this effectively starts with a full understanding of the EC guidelines (summarized below) and their implications.
- Human agency and oversight—Ensure that people have oversight and can use an AI system without relinquishing autonomy.
- Technical robustness and safety—Minimize unintentional and unexpected harm, and prevent unacceptable harm, including harm to physical and mental health.
- Privacy and data governance—Protect the individual's privacy and ensure the quality and integrity of data, including insights or decisions that a system generates.
- Diversity, non-discrimination and fairness—Honor inclusion and diversity to ensure a fair and equitable system.
- Societal and environmental wellbeing—Be mindful of an AI system’s potential impact on individuals, society and the environment, and carefully monitor for negative impact.
- Transparency—Document data sets, decisions and processes as a means of recourse.
- Accountability—Enable auditors (both external and internal) to evaluate an AI system’s decisions and how it reached them (see the sketch after this list).
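To make the transparency and accountability requirements concrete, the sketch below (illustrative field names only, not an EC-mandated format) logs each decision with the inputs, model version and review status an auditor would need to reconstruct how it was reached:

```python
# A minimal sketch of a decision audit record, assuming a JSON-lines log file.
# Field names and structure are illustrative; adapt to your governance process.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_version: str        # which trained model produced the decision
    input_summary: dict       # the features the model saw (or a reference to them)
    output: str               # the decision itself
    confidence: float         # model confidence, useful for review thresholds
    reviewed_by_human: bool   # whether a person confirmed or overrode the decision
    timestamp: str


def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record so internal or external auditors can trace how a decision was reached."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_decision(DecisionRecord(
    model_version="credit-risk-2024.03",      # hypothetical model identifier
    input_summary={"income_band": "B", "region": "EU"},
    output="approve",
    confidence=0.94,
    reviewed_by_human=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```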
The Commission’s requirements map very closely to the four areas we consider when building responsible AI:
Being inclusive
Just as it's important to scrub and organize your data on a regular basis, you must also be intentional about how you train your AI model. For example, when tagging your metadata, it’s critical to use an ethnically diverse group of people. Doing so helps eliminate many of the biases that would otherwise creep into a model and skew its outcomes, so the resulting AI decisions are fair, accurate and equitable. (Learn more about the Top Eight Ways to Overcome and Prevent Bias.)
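As one lightweight example of that intentionality, the sketch below (assuming a pandas dependency and hypothetical column names) compares label rates across annotator groups to surface annotation skew before it reaches the model:

```python
# A minimal sketch: compare label rates across annotator groups to surface
# annotation skew before training. Column names and data are illustrative.
import pandas as pd

annotations = pd.DataFrame({
    "annotator_group": ["A", "A", "B", "B", "B", "A"],
    "label":           ["positive", "negative", "positive", "positive", "negative", "positive"],
})

# Share of each label within each annotator group
label_rates = (
    annotations
    .groupby("annotator_group")["label"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
print(label_rates)

# Flag labels where groups disagree by more than 20 percentage points
disparity = (label_rates.max() - label_rates.min()) > 0.20
print("Labels needing review:", list(disparity[disparity].index))
```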
Being explicable
For all the business and productivity benefits that an AI system provides, people must be able to trust the decisions that an AI model makes. That trust is only possible with a clear understanding of how the AI system was trained, and confidence that training data was collected, handled and stored responsibly. To this end, create a clearly defined and documented training regimen and establish a formalized process for reviewing the legitimacy of AI decisions.
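A documented training regimen can start as a simple, versioned record saved alongside the model artifact. The sketch below shows one possible shape; every field name and value is an assumption to adapt to your own review process:

```python
# A minimal sketch of a training record saved alongside each model artifact.
# Every field name and value here is illustrative, not a prescribed standard.
import json
from datetime import date

training_record = {
    "model_name": "support-ticket-classifier",   # hypothetical model
    "version": "1.4.0",
    "trained_on": str(date.today()),
    "data_sources": [
        {"name": "tickets_2023", "consent_basis": "contract", "storage_region": "eu-west-1"},
    ],
    "preprocessing_steps": ["strip PII", "lowercase", "deduplicate"],
    "evaluation": {"accuracy": 0.91, "fairness_review_passed": True},
    "approved_by": "model-review-board",
}

# Writing the record next to the model makes the training regimen reviewable later.
with open("training_record.json", "w") as f:
    json.dump(training_record, f, indent=2)
```

Reviewers can then check data provenance, preprocessing and sign-off without having to reconstruct the training run.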
Ensuring security
Whether working with a partner or building and training your AI model in-house, you must have strong security standards that are compliant with industry and region-specific regulations, including SOC2 Type II, HIPAA, GDPR and CCPA. If you’re working with a training partner, look for one that offers options such as secure data access, secure annotation and onsite service options, private cloud deployment, on-premises deployment, and SAML-based single sign-on.
Mitigating impact
In the rush to innovate, it’s easy to forget the tradeoffs of technological advances. That’s why you need to keep an AI model’s intent, as well as the potential consequences of any wrong decision, top of mind during model design and development.
3. Incorporate the various kinds of risks that the EU has identified into your AI planning process.
The EC prioritized the individual’s health, well-being and security above all else, and these guidelines are meant to help minimize the risk of any unintended consequences. But they also open the door to compliance risks that may not exist in other geographies—especially when designing systems for healthcare, transportation, energy and the public sector. Rather than retrofitting an existing AI system, start with a new plan, and factor in these risks from the start.
4. Don’t draw on EU development resources without also contributing to shared facilities.
The EU is fully committed to being a leader in trustworthy AI and to helping companies create sustainable business models. As such, it has gone to great lengths to provide guidance and resources that help companies build solutions and create data governance bodies to safeguard against abuse. For example, there are now more than 70 digital innovation hubs that function as industry-specific working groups. And the Enhanced European Innovation Council provides grants and other funding vehicles to support top-class innovators, start-ups, small companies and researchers.

But the EU intends to create a community of companies that share resources and ideas. Companies wanting to do business in the EU must therefore offer up their own ideas rather than simply taking advantage of others’ contributions.