Artificial intelligence has moved from the pages of science fiction into widespread real-world use.
AI, for example, is now commonplace in the world of Big Data, where massive amounts of data are analyzed to reveal patterns.
But the use of AI is no longer limited to a handful of tech companies.
Many local and state governments in the United States already offer web-based services to citizens, and some have proposed that implementation of AI within these government systems is not only inevitable but right around the corner.
When we come face to face with the extent to which AI now makes so many crucial decisions, there’s a temptation to take the reactionary route and declare AI itself unfair and unethical.
Indeed, there is a great deal of evidence that the automation of large-scale systems affecting hundreds or thousands of lives can quite often be unfair or deeply flawed, as is detailed in Virginia Eubanks’s book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.
But even when looking at poorly designed instances of AI and automation, it’s clear that AI itself isn’t really the problem here.
AI isn’t naturally occurring. It’s a tool created by human beings, and if anything, it’s human error that has led to unethical AI.
Of course, as AI continues to grow, it’s necessary that great care is taken to ensure that these systems are designed with fairness, equity, and ethics in mind.
Focusing on ethical AI: Saishruthi Swaminathan
Saishruthi Swaminathan is an ethical AI expert who has dedicated herself to supporting ethical AI and educating people on its importance.
According to Swaminathan, this drive to educate and promote ethical usage stems from her upbringing.
“My parents nurtured me to think about the impact of my decisions on individuals around me. When I started practicing data science, this led to asking questions about the ethical use of information and the unintended consequences of application. Adding to this, we all know the effects of a biased system on individuals.”
Swaminathan isn’t a layperson coming in from the sidelines, but rather a data science professional who has first-hand knowledge of how these systems are implemented.
As this article will detail, Swaminathan continues to bring to light just how important it is that we tackle this issue right now, while AI and its various applications are still relatively new.
The threats of unethical AI
If you haven’t already researched AI, then you may be asking, ‘What’s the big deal? How much ground does AI really cover here and now?’
Quite a lot, as it turns out. This isn’t just a problem for the future, it’s a problem for the present, as Swaminathan explains.
“AI systems are becoming a part of our life and used in critical applications like hiring, clinical diagnosis, and judicial systems. From missed opportunities to posing a threat to human life, the impact can vary.”
In situations where an AI system makes a mistake, it may appear to be the victim’s word against that of an automated system.
This is a difficult dynamic, as many of us have learned to trust technology implicitly.
If a system is right 98% of the time, how sure of ourselves would we be when identifying and calling out a mistake?
Still, AI does make mistakes, and the consequences of those mistakes can be extremely serious.
Many of these kinds of mistakes can be traced back to poor design choices or an inherently flawed design process.
Swaminathan describes how human biases can easily lead to biased AI systems:
“Imagine feeding candidate profiles covering only a specific part of the community for training candidate recommendation engines. Your algorithm will learn only those data and might produce biased results against communities that are not covered. Bias in the recruiting system will result in missed opportunities and negatively affect the organization’s diversity.”
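The failure mode Swaminathan describes can be illustrated with a deliberately tiny sketch (all names and data here are hypothetical, not from any real system): a scorer trained only on profiles from one community rewards the attributes it has already seen, so an equally skilled candidate from an unrepresented group scores lower.

```python
# Minimal sketch of bias from a skewed training set (hypothetical data):
# the scorer rewards familiar attributes, so candidates from groups
# absent from the training data score lower despite identical skills.
from collections import Counter

# Training profiles drawn from only one part of the community.
training_profiles = [
    {"school": "University A", "skill": "python"},
    {"school": "University A", "skill": "sql"},
    {"school": "University A", "skill": "python"},
]

# "Train": count how often each attribute value appears.
seen = Counter()
for profile in training_profiles:
    seen.update(profile.values())

def score(candidate):
    """Score a candidate by how familiar their attributes are."""
    return sum(seen[value] for value in candidate.values())

# Two candidates with the same skill but different backgrounds.
candidate_a = {"school": "University A", "skill": "python"}
candidate_b = {"school": "University B", "skill": "python"}

print(score(candidate_a))  # higher: every attribute appeared in training
print(score(candidate_b))  # lower: this school was never seen in training
```

A real recommendation engine is far more complex, but the dynamic is the same: the model can only learn the data it is given, so gaps in the training set become gaps in who gets recommended.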
Because these systems are designed by people who carry their own biases, some of them subconscious, it’s valuable to gather input from people with a wide variety of backgrounds and identities.
To help resolve this pervasive issue, Swaminathan proposed a solution for mitigating bias in the job recruitment process and was selected as a semi-finalist in the Silicon Valley Business Competition.
Approaching the topic
In a journalistic setting, it’s easy to say that AI should be ethical in all of its applications. But in person, it’s much more difficult to broach the topic.
For those who have worked on AI directly, it can be challenging to accept criticism of such laborious and complex work.
For those new to data science and for anyone not working in tech, it’s uncomfortable to discuss the harsh realities of unethical AI.
Swaminathan doesn’t just support ethical AI privately. In her own work, she has been an outspoken advocate for ethical AI efforts, and she has delivered a number of talks on the subject.
She has, in fact, spoken to more than 10,000 people over the course of approximately 55 different events. These events include All Things Open, OpenUP Summit, AI for India, AI Engineering, and Data Science Seed.
Many of these talks also included hands-on workshops to solidify concepts surrounding ethical AI design and usage.
Well aware of how difficult it can be to share the details of unethical AI with audiences, Swaminathan has used her public speaking experience to find ways to present important information helpfully.
“It’s a sensitive topic to present, but I see it as a great responsibility. Giving the facts without attacking my audience’s beliefs is something I’ve learned over the years. I learned just how powerfully an example scenario can affect people’s sentiments.”
This idea falls in line with the old saying, ‘You’ll catch more flies with honey than vinegar.’
When presenting someone with a new perspective, it’s not helpful to simply tell them that their existing preconceptions are wrong and that they need to think about the topic differently.
The ethics of AI is also a subject that’s far more nuanced than it may first appear. It’s not that tech professionals are willfully creating unethical AI; rather, they need to take much more into account during the design process.
Diversity among data scientists can also help bring a range of perspectives into the process.
Benefits of open source AI
One potential means of improving the ethics of AI is to make AI technology open source.
Open source describes software whose underlying source code has been made public and available for modification.
This stands in contrast to proprietary software, which can only be studied and modified by those given explicit permission by the owner.
Swaminathan argues that open source AI offers many opportunities to develop more ethical AI.
“Open source allows developers worldwide to come together to create a system or software. Diversity is needed for AI, and open source brings in this critical component. Open source, along with open governance, can bring many eyes, values, and innovative minds into one place, paving the way for building trusted systems.”
If these AI systems remained in private hands, many of the people directly affected by them would simply have to hope that the companies involved made their AI more ethical for everyone, not just certain groups.
But open source essentially brings this technology out in the open for everyone to see.
Suddenly, this AI can become a part of public discourse. Professionals and laypeople alike can weigh in on what AI is doing right and what it’s doing wrong.
This is similar to another concept that Swaminathan mentioned: open governance.
Open governance is the idea that citizens should be able to see how their government operates.
In both cases, openness and transparency bring more people into the conversation, and when it comes to ensuring ethical operations, the more people looking in, the better.
Supporting ethical AI
Many of you reading this may not be AI experts who work with the technology on a professional level.
So what can non-experts do to weigh in on the conversation? How can the average person who’s passionate about ethics in technology actively support ethical AI?
Swaminathan’s advice is to pay close attention to what’s happening in tech and to raise concerns:
“Be aware and ask questions when you interact with technology. We have many open source, inclusive communities where you can voice your questions and fears and learn more about the technology.”
It’s surprisingly easy to accept different types of technology at face value, especially when the underlying mechanics of this technology are complex and difficult to understand without years of study.
Even so, it’s easy to look at the impact technology has on real people.
If you feel that an automated system is treating certain people unfairly based on their background or identity, at the very least it’s worth asking questions that might reveal why this is happening.
In a best-case scenario, this could even bring these problems to light for the first time and lead to a push for improvements to be made to that system.
Pay close attention to how technology affects us. When people look out for each other, technology serves all of us better.