AI is a powerful tool that should be used responsibly and thoughtfully to reduce harm.
AI ethics is a set of guiding principles designed to help humans maximize the benefits of artificial intelligence and minimize its potential negative impacts. These principles distinguish ‘right’ from ‘wrong’ in the field of AI, encouraging producers of AI technologies to address questions surrounding transparency, inclusivity, sustainability and accountability, among other areas.
AI ethics may require organizations to establish policies that respect data privacy laws, account for bias in algorithms and explain to customers how their data is used before they sign up for a product.
WHAT ARE AI ETHICS?
AI ethics are the principles around responsible and moral use of artificial intelligence.
To ensure artificial intelligence is used in an accurate, unbiased and moral manner, it is important for companies to put ethical AI into practice. That’s why companies like Microsoft and IBM have created comprehensive AI ethics guidelines, and why even smaller tech companies are setting their own standards for responsible AI use.
Why Are AI Ethics Important?
By following ethical AI principles, companies can build AI products that improve the lives of users while avoiding possible pitfalls. For example, not enforcing inclusion standards may lead to biased algorithms that make a product inaccessible for members of underrepresented groups. This can erode trust in AI technologies and land a business in legal trouble if it violates its own policies or local laws.
And if a team doesn’t take the time to understand its AI product before releasing it, engineers and other personnel may not be able to explain decisions AI makes, reduce bias and fix other errors. These mistakes can further weaken a company’s credibility and transparency, making it much more difficult to regain the public’s trust moving forward.
Fair, user-centered and accessible AI products can speed up processes in various industries and simplify many tasks for consumers. As a result, companies that do follow AI ethics can create technologies that enhance the quality of life for diverse groups and society as a whole.
In fields like healthcare, for example, businesses handle sensitive data and perform actions that can alter people’s lives. Following AI ethics is then crucial to protecting valuable information, refining vital processes and avoiding the reputational or legal damage that comes with irresponsible decision-making.
“The precision needs to be much higher in healthcare than in other situations where we’re just going about our lives, and you’re getting a recommendation on Google Maps for a restaurant you might like,” said Patel, CEO of a healthcare AI platform. “Worst case, you’re like, ‘Oh, I actually don’t want to eat that today,’ and you’re fine. But in this case, we want to make sure that you’re able to very specifically make a recommendation and feel like you’re 90 percent plus on that precision metric.”
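For readers unfamiliar with the metric Patel cites, precision is the share of a model’s positive recommendations that turn out to be correct. Below is a minimal sketch of that calculation; the counts and the 0.90 threshold are illustrative, not from any real system.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP): the share of flagged cases that were right."""
    return true_positives / (true_positives + false_positives)

# Hypothetical evaluation counts for a clinical recommendation model.
tp, fp = 930, 70
score = precision(tp, fp)
print(f"precision = {score:.2f}")  # 0.93
assert score >= 0.90, "below the precision bar Patel describes for healthcare"
```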
Benefits of AI Ethics
Respecting the ethics of AI production has wide-ranging upsides for society, but companies also have much to gain from observing ethical AI practices.
1. INCREASED SOCIAL RESPONSIBILITY
Creating an AI ethics framework compels companies to take a more thoughtful approach to AI, which can result in safer, more effective technologies that have a positive impact on users. Ethical AI policies also give the public and regulators a basis for holding organizations accountable, making it easier to encourage businesses to be socially responsible with their use of AI.
2. COMPLEMENTARY AUTOMATION
Weighing the implications of introducing AI into different sectors forces businesses to consider the downsides of automation alongside the efficiency it promises. Companies can then strike a balance between the two, shipping AI products that employees can use to speed up their tasks without automating entire roles and displacing workers.
3. IMPROVED EMPLOYEE MORALE
Employees want to work for organizations that pursue meaningful missions, with 93 percent of workers believing companies must lead with purpose and 60 percent being willing to take a pay cut to join a purpose-driven company. Committing to ethical AI processes shows employees that their employer cares about its societal impact, and this can convince top talent to stick around.
4. POSITIVE BRAND PERCEPTION
If a company adheres to AI ethics by respecting users’ privacy, remaining transparent in its operations and following other practices, it can earn the respect of the public and raise its brand’s status. There’s also a positive link between corporate social responsibility and job applications, meaning businesses that adhere to the ethics of AI could win over more candidates as well.
Concerns of AI Ethics
There are many ethical challenges when it comes to the design and use of artificial intelligence. Below are some of the more urgent concerns that AI ethics must confront.
1. BIASED ALGORITHMS
The tech industry has a bad habit of letting biases like gender and racial bias seep into its products, and this trend isn’t going away any time soon. Because diversity and inclusion efforts continue to fall short in the tech sector, companies must implement inclusive AI practices to avoid building products that only account for the needs of their homogenous workforces and unintentionally discriminate against marginalized groups.
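To make “inclusive AI practices” concrete, here is a minimal sketch of one widely used audit, the four-fifths (disparate impact) check, which compares a model’s positive-outcome rate across demographic groups. The group labels and decisions below are hypothetical.

```python
# Flag groups whose selection rate falls below 80 percent of the
# highest group's rate, a common rule of thumb for disparate impact.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, decision) pairs, decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} at {rate:.0%} vs {best:.0%}")
```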
2. DATA PRIVACY
Companies can compile data from social media profiles, online activity and other sources, often without consumers being fully aware of it. Data privacy laws are likely to play a major role in checking AI companies’ power to do this, and the European Union and several U.S. states have already responded with stronger data privacy regulations.
3. ENVIRONMENTAL COSTS
The process of building and training AI models requires large amounts of energy and natural resources, degrading the environment and leaving behind a hefty carbon footprint. Tech can play an essential role in supporting sustainability initiatives and achieving carbon neutrality, but the industry has a long way to go in this area.
4. EXPLOITATIVE LABOR PRACTICES
Transparency surrounding the creation and training of AI models has also come into question, especially with the rise of ‘digital sweatshops.’ The term describes how tech companies often outsource the work of training AI models, such as data labeling, to laborers in other countries while underpaying them and subjecting them to other exploitative practices. These practices can fly under the radar as issues like data privacy steal the spotlight, showing that the scope of ethical AI must keep expanding.
Best Practices for AI Ethics
1. KNOW THE IMPACT OF YOUR PRODUCT’S AI
Companies should consider how the use of AI will affect the people who use the product or engage with the technology and aim to use AI only in ways that will benefit people’s lives.
“I think the first thing to do for a company is the leadership has to fundamentally make a choice,” said Brian Green, director of technology ethics at the Center for Applied Ethics at Santa Clara University. “They have to say we want to be making technology that benefits the world, that’s not making the world a worse place, because we all have to live here together.”
Even in less dramatic cases, AI can cause harm by making people feel more isolated or addicted to their devices. A mobile game that relies on addictiveness to generate more profit raises questions about its makers’ intentions, and companies with an AI ethics policy may choose to change or discontinue such a game altogether.
“There’s so many algorithms and apps out there that use machine learning or other kinds of tactics to try to keep you addicted to them, which kind of violates human freedom in some ways,” Green said. “You’re being manipulated, basically, by these things.”
2. ESTABLISH AND ARTICULATE COMPANY VALUES AND ETHICS GUIDELINES
Once a company decides to use AI in its business model, the next step is to articulate the organization’s values and rules around how AI will be used.
“Just as long as they have a set of principles, that’s a good start, but then you have to figure out how to operationalize them,” Green said. “That means you need to somehow engage the engineers, to engage the product managers. You need to engage people who are in the leadership in that part of the company. Get them on board. They need to become champions of ethics.”
One hiring platform that uses AI for pre-hire assessments and customer engagement has created an AI explainability statement that it shares publicly. The document outlines for customers why and how the company uses AI.
“At my time with the company, I have seen us move more and more toward just being more transparent because what we’ve seen is that if we don’t tell people what we’re doing, they often assume the worst,” said Lindsey, the platform’s chief data scientist.
3. EMPHASIZE TRANSPARENCY
Startups using AI often find themselves testing and iterating rapidly. While that pace is necessary, it can lead teams to forget how algorithms were initially created and why certain decisions were made at a given time, Patel said. Transparency around the creation of algorithms helps preserve the traceability and reasoning behind those decisions.
Sometimes machine learning techniques become so complex that humans can’t possibly understand them. Black box models in AI are built from data by an algorithm that offers humans no explanation of why its decisions were made.
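While no tool fully opens a black box, a common first step is to measure which inputs a model’s decisions actually depend on. Below is a hedged sketch using scikit-learn’s permutation importance on a synthetic dataset; it assumes scikit-learn is installed and is not tied to any system mentioned in this article.

```python
# Permutation importance: shuffle each feature and measure how much the
# model's score drops. Large drops mark features the decisions rely on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```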
Transparency around algorithms is also a way to help reduce potential biases in AI decision-making, said Sameer, an adjunct associate professor at Columbia University and CEO of a machine learning company.
“These days, with deep learning systems with a hundred million parameters, it spits out a decision,” he said. “A lot of engineers don’t have the needed transparency in figuring out why it made that decision, and that makes it even harder to figure out when the biases creeped in and how to fix the model.”
4. FOCUS ON ELIMINATING BIAS
Bias can creep into algorithms when the data used in AI models over-represents some groups, is inaccurate or is otherwise skewed by humans. One way to potentially decrease bias is to give engineers a checklist to think through regarding the data they receive before building a model, he said. Those questions might be: How was the data collected? What’s the history behind it? Who was involved in collecting it? What questions were asked?
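One way to operationalize such a checklist is to encode it so it can gate a training pipeline, as in the minimal sketch below; the class and field names are hypothetical.

```python
# A pre-modeling data provenance checklist, mirroring the questions above.
# If any item is unresolved, training is held.
from dataclasses import dataclass, fields

@dataclass
class DataProvenanceChecklist:
    collection_method_documented: bool  # How was the data collected?
    history_reviewed: bool              # What's the history behind it?
    collectors_identified: bool         # Who was involved in collecting it?
    survey_questions_reviewed: bool     # What questions were asked?

    def ready_for_modeling(self) -> bool:
        return all(getattr(self, f.name) for f in fields(self))

checklist = DataProvenanceChecklist(
    collection_method_documented=True,
    history_reviewed=True,
    collectors_identified=False,  # unresolved: block model building
    survey_questions_reviewed=True,
)
if not checklist.ready_for_modeling():
    print("Hold: resolve data provenance questions before training.")
```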
The hiring platform, for its part, has trained evaluators who analyze thousands of data samples for bias to ensure job candidates are assessed consistently and fairly.
“Is the training data biased? Does it have groups that are not represented in the data? Does it represent the group of people that you want to apply the algorithm to?” Lindsey said. “Often, if we do see any problems, it could be we have a customer that’s using an algorithm on a population that’s different from the population we trained on.”
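The mismatch described in this quote, training on one population and deploying on another, can be caught with a simple distribution comparison. Below is a hedged sketch using total variation distance; the groups, proportions and threshold are made up for illustration.

```python
def total_variation(p: dict, q: dict) -> float:
    """Total variation distance between two group distributions."""
    groups = set(p) | set(q)
    return 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)

# Hypothetical group makeup of the training data vs. a customer's population.
training_population = {"group_a": 0.70, "group_b": 0.25, "group_c": 0.05}
customer_population = {"group_a": 0.40, "group_b": 0.35, "group_c": 0.25}

distance = total_variation(training_population, customer_population)
if distance > 0.10:  # a threshold a team might choose for triggering review
    print(f"Population shift detected (TV distance {distance:.2f}): "
          "re-evaluate the model before applying it to this customer.")
```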
5. CONDUCT AI RISK ASSESSMENTS AND AUDITS
AI ethics evaluations can identify potential risks in how a company’s AI is being used and ways to address those concerns. Patel’s company has a team of four who regularly assess whether it is abiding by its AI ethics oath, Patel said.
“All businesses do some sort of quarterly risk assessments, usually in the IT security realm, but what we’ve added to it a few years ago is actually this AI piece, so it’s more of a risk and ethics meeting,” Patel added.
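Teams adopting a similar cadence might track findings from the AI portion of a risk meeting as structured records. The sketch below is one hypothetical way to do that; none of the fields or findings come from Patel’s company.

```python
# A lightweight record format for the AI piece of a quarterly
# risk-and-ethics review. All entries here are invented examples.
from dataclasses import dataclass

@dataclass
class EthicsFinding:
    area: str         # e.g., "bias", "privacy", "transparency"
    severity: str     # "low", "medium", "high"
    description: str
    owner: str        # who is responsible for remediation

quarterly_review = [
    EthicsFinding("bias", "medium",
                  "Training data under-represents one customer segment",
                  "data-science"),
    EthicsFinding("privacy", "low",
                  "Retention period for raw logs not yet documented",
                  "security"),
]

for finding in quarterly_review:
    print(f"[{finding.severity.upper()}] {finding.area}: "
          f"{finding.description} (owner: {finding.owner})")
```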
6. UPHOLD HIGH SECURITY AND PRIVACY STANDARDS AROUND DATA
Maintaining high-quality data hygiene ensures accuracy and relevance. Companies using AI should also make sure people’s personal information is kept safe and private, Patel said.
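One basic measure behind both data hygiene and privacy is pseudonymizing personal identifiers before records enter an analytics or training pipeline. The sketch below shows a keyed-hash approach; the key handling and record fields are illustrative, and a real deployment would pair this with encryption, access controls and retention policies.

```python
# Replace a raw identifier with a keyed hash so it can't be reversed
# or matched against a precomputed table.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-secrets-manager"  # placeholder

def pseudonymize(identifier: str) -> str:
    """Keyed hash of a personal identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "score": 0.87}
safe_record = {"user_id": pseudonymize(record.pop("email")), **record}
print(safe_record)  # no raw email leaves this boundary
```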
The company adheres to the European Union’s General Data Protection Regulation, one of the toughest privacy laws in the world, which regulates how companies must protect the personal data of people in the EU.
“We do business globally. We have to adhere to the strictest standards, so we’re seeing that Europe is really paving the way, and I think states are starting to follow,” Patel said.
In an ideal world, opt-in, rather than opt-out, would be the standard for users deciding whether to share their personal data, Sameer said, and people would be able to easily access and review all data that’s collected about them.
“It’s counterproductive for a lot of companies because they are using the same data to make money,” he said. “That’s where I think the government and organizations need to come together to come up with the right framework and write policy that is more balanced, taking user privacy into account, but allowing businesses at the same time to collect data, but with a lot of controls for the users.”
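The opt-in standard described above can be reflected directly in code by defaulting every data-use flag to off until the user explicitly enables it. Here is a minimal sketch under that assumption; the consent categories are hypothetical.

```python
# Opt-in by design: all data sharing is off until the user enables it.
from dataclasses import dataclass

@dataclass
class ConsentSettings:
    analytics: bool = False
    personalization: bool = False
    third_party_sharing: bool = False

    def allows(self, purpose: str) -> bool:
        return getattr(self, purpose, False)

settings = ConsentSettings()
settings.personalization = True  # explicit user action

for purpose in ("analytics", "personalization", "third_party_sharing"):
    print(f"{purpose}: {'allowed' if settings.allows(purpose) else 'blocked'}")
```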