
Beyond "Do No Harm:" Why AI Must be Ethical and Responsible

Published on February 24, 2023

The Design and Deployment of Responsible AI

The pace of technological change is increasing. The long-term uptake of AI will depend not only on societies’ willingness to embrace new technology, but also on the willingness of those developing the technology to embrace all of society. Generative AI is the latest breakthrough in AI, and, like many cutting-edge technologies, it is facing its own teething challenges.

Society sets a higher standard for AI than it does for humans. It’s not sufficient for AI to simply do good; it must also do no harm, and there are no allowances for “human mistakes.”

AI is Already Doing a World of Good

Media buzz and headlines aside, AI technology has been working behind the scenes to help build a better, healthier, and more equitable future.

AI Can Help End World Hunger, One Crop at a Time

The thought of everyone having enough to eat is a comforting one, and one that could soon become a reality. As with the introduction of most new things, there’s some hesitation about how much of a role AI should play in something as delicate as control over the world’s food supply. Despite that hesitation, many advances are being made toward AI protecting our crops so that more food makes it to more tables. AI can positively impact the entire growing process, from the early stages of identifying weeds and finding new locations to plant crops, to spotting plants dying of disease. Even at harvest, AI helps by identifying which crops can be used for human consumption and where in the food chain they belong.

The Role of AI in the Fight Against Climate Change

Data plays an important role in the future of our planet. Machine learning is now being used to predict the weather and to forecast if and when a natural disaster will strike. These models are trained on historical weather data, with the goal of reliably predicting when the next storm will hit and the impact it could have on an area. There is some concern about the amount of energy needed to train such models and whether it could drive up GHG emissions. According to the International Energy Agency (IEA), data centers currently account for roughly 1% of global electricity demand. One way to keep that share steady is for hardware to continue becoming more efficient, so that rising demand for these services does not translate into rising power consumption.

AI Helps the World Communicate

It’s important to recognize the significance of translation. Without it, most of the world would be unable to read texts written in unfamiliar or forgotten languages. That could mean the loss of important historical and cultural records, or important scientific and medical discoveries being shared only within specific groups. Language connects people from all around the world, but inadequate translation can lead to inequitable access to information. One of the ways AI helps resolve this is through chatbots and other conversational AI platforms trained on diverse datasets that are as free from bias as possible. Gathering data from people around the world across demographics, ages, religions, and cultures, speaking different languages and dialects, helps train systems that broaden the world’s access to critical communication.

Getting it Done: AI Responsibility by Design

There have been many recent public examples of generative AI models hallucinating and inventing facts. That might be acceptable for creative endeavors like fiction writing, but not when someone is looking for facts or up-to-date local information, as they would from a search engine.

Because these large language models are trained on currently available written data, mostly from the internet, it is very hard to filter out the source data that eventually leads to incorrect outputs or bias in areas like gender and political preference. That bias exists partly because humans, no matter how hard we try, are inherently biased, and partly because of a skew in who creates online content, driven by factors such as access to the internet.

Individuals and companies are now working to understand the potential risks posed by today’s flawed generative AI. Responses have ranged from embracing it as the work tool of the future to outright rejection and bans. Either way, its implementation can have serious impacts on society: positive where it is done responsibly, but potentially negative where hallucinations and mistakes are allowed to permeate AI that affects the lives of everyday people without appropriate safeguards.

Designing AI for Good

Answering the call for responsible AI requires an approach called Responsibility by Design, similar to the Privacy by Design concept championed by privacy experts. It means embedding responsible AI practices into the design specifications of technologies, business practices, and physical infrastructure from the very beginning. Doing this upfront is far better than trying to retrofit it later. Responsibility by Design may end up being legislated (we’ve already seen the first attempts out of the EU), or it may simply be adopted because it makes good sense.

In the future, we believe proving that AI is responsible will depend less on the algorithms and training models themselves, and more on the underlying datasets, on human feedback, and on increasing the scrutiny and guardrails applied to model outputs.
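
To make the idea of output guardrails concrete, here is a minimal sketch in Python; the function names, thresholds, and blocklist entries are hypothetical assumptions for illustration, not a description of any particular production system. The check holds back generated answers that contain blocked terms or show little overlap with trusted reference text.

# Hypothetical sketch of a post-generation guardrail: before an answer is shown
# to a user, check it against trusted reference snippets and a simple blocklist.
from dataclasses import dataclass

BLOCKED_TERMS = {"guaranteed cure", "insider tip"}  # illustrative placeholder list

@dataclass
class GuardrailResult:
    approved: bool
    reason: str

def support_score(answer: str, references: list[str]) -> float:
    """Fraction of answer words that also appear in the reference snippets."""
    answer_words = set(answer.lower().split())
    if not answer_words:
        return 0.0
    reference_words = set(" ".join(references).lower().split())
    return len(answer_words & reference_words) / len(answer_words)

def check_output(answer: str, references: list[str],
                 min_support: float = 0.6) -> GuardrailResult:
    """Flag answers that contain blocked terms or have weak grounding."""
    lowered = answer.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return GuardrailResult(False, "blocked term found")
    if support_score(answer, references) < min_support:
        return GuardrailResult(False, "low overlap with trusted references; route to human review")
    return GuardrailResult(True, "ok")

A crude word-overlap heuristic like this would sit alongside human review in practice; the point is simply that scrutiny is applied to the output itself, not only to the model.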

Incorporating feedback from real people with real-world experience across a diverse set of backgrounds is the best way to train models to act more like humans. When the feedback is diverse and expansive, models produce fewer hallucinations and less bias.
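
As a simplified illustration of what that feedback can look like in data terms, the sketch below stores human preferences alongside rater locale so that gaps in coverage are measurable before the data is used for tuning. The record fields, locale codes, and thresholds are assumptions made for this example, not a description of any particular platform.

# Hypothetical sketch: collect human preference feedback on model outputs and
# check how evenly that feedback is spread across locales before using it.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    preferred_response: str
    rejected_response: str
    rater_locale: str      # e.g. "hi-IN", "sw-KE", "pt-BR"
    rater_comment: str = ""

def locale_coverage(records: list[FeedbackRecord]) -> dict[str, float]:
    """Share of feedback contributed by each locale."""
    counts = Counter(r.rater_locale for r in records)
    total = sum(counts.values())
    return {locale: n / total for locale, n in counts.items()}

def underrepresented_locales(records: list[FeedbackRecord],
                             expected_locales: set[str],
                             min_share: float = 0.05) -> set[str]:
    """Locales that are missing or fall below a minimum share of the feedback."""
    coverage = locale_coverage(records)
    return {loc for loc in expected_locales if coverage.get(loc, 0.0) < min_share}

Preference pairs of this shape are what techniques such as reinforcement learning from human feedback consume; the coverage check is the simple part that keeps “diverse and expansive” measurable rather than aspirational.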

AI Needs to Work for All, Equally

AI needs to help people, and ideally all people. When built responsibly, AI is more successful and works in a way that benefits everyone, regardless of race, gender, geography, or background. Large language models are generally built on English-language data, which accounts for the majority of online content, yet less than 20 percent of the world speaks English as a first or second language. Limiting language inputs leaves not only a tremendous language gap, with many of today’s users underrepresented in the models, but a cultural gap as well.

It is widely known that if you are not represented in the underlying training data, the AI is less likely to work for you. Creating ethical AI that responds to, respects, and delivers benefits for everyone means that the people who initially train it and later refine it must not only reflect the diversity of those it ultimately serves, but also understand the critical role they play in the lives of the people around them.

Appen’s Role in AI for Good

Appen plays a significant role in Responsibility by Design: our diverse, one-million-strong team of AI Training Specialists spans 170+ countries and speaks 235+ languages and dialects. This underpins an ethical and diverse AI supply chain that helps ensure AI works, and is relevant, across societies and cultures.

We’re in the data business. Crucial components of the AI lifecycle are data sourcing and data preparation: collecting, annotating, and evaluating data. We create datasets for companies all over the world, and we couldn’t do it without the support of our global Crowd. These are the people who collect and label data, making AI and machine learning solutions possible. That’s why, as part of our company’s promise to deliver ethical AI, we have established programs such as fair pay and our Crowd Code of Ethics, to ensure our data is sourced and managed fairly. Taking care of each other enables us to deliver the highest-quality data possible.
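
As a simplified, hypothetical sketch of the evaluation side of that work (not our production pipeline), labels gathered from several contributors can be combined by majority vote, with low agreement used as a signal that an item should be routed back for review.

# Hypothetical sketch: aggregate labels from multiple crowd contributors per item,
# keeping an agreement score so low-consensus items can be sent back for review.
from collections import Counter, defaultdict

def aggregate_labels(raw_labels: list[tuple[str, str]]) -> dict[str, dict]:
    """raw_labels is a list of (item_id, label) pairs from different contributors."""
    by_item: dict[str, list[str]] = defaultdict(list)
    for item_id, label in raw_labels:
        by_item[item_id].append(label)

    results = {}
    for item_id, labels in by_item.items():
        counts = Counter(labels)
        top_label, top_votes = counts.most_common(1)[0]
        agreement = top_votes / len(labels)
        results[item_id] = {
            "label": top_label,
            "agreement": agreement,
            "needs_review": agreement < 0.7,  # illustrative threshold
        }
    return results

# Example: three contributors label two images
labels = [("img-1", "cat"), ("img-1", "cat"), ("img-1", "dog"),
          ("img-2", "dog"), ("img-2", "cat"), ("img-2", "bird")]
print(aggregate_labels(labels))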

As we embrace generative AI, we see the work of our AI Training Specialists expanding from simple annotation tasks to evaluating models and consulting on end-user experiences. Their work will be critical to ensuring these technological advances can be adopted not only in developed societies but across the Global South, hopefully helping to reduce the current technological divide.

Our crowd has already played an integral part in refining generative AI systems, and we are now providing critical guardrails for their learning.

We have greater hope now, and we see significant efforts to reflect on the impact that new technologies are having on society. As tech pioneer Bill Joy said, “We have to encourage the future we want rather than try to prevent the future we fear.” As we explore the possibilities of what generative AI can do for humanity, we are committed to being an ethical, responsible part of the AI supply chain powering AI for Good.
