Ethics and AI
Artificial intelligence can solve many problems — while creating new ones.
You know the drill: You’re scrolling through Amazon or any other shopping website, and an item sparks your interest. Even though you didn’t “add to cart” or put it in a “favorites” list, you spent a decent amount of time looking at that page. When you finally move on to another website, an advertisement for that same item pops up, calling your name.
This is a relatively mundane example of artificial intelligence (AI) at work. In this case, it’s an algorithm targeting your preferences based on your online behavior. But that can easily veer from a simple ad to full-on, algorithm-driven bias. Perhaps you’re blocked from seeing specific job opportunities based on your socioeconomic background. Maybe certain home loan programs aren’t popping up on your feed based on your race.
AI systems draw criticism because they can replicate and reinforce prejudices (especially online) and can aid in academic or artistic fraud and plagiarism. But they can also help people organize their schedules, screen for cancer, aid wildlife conservation and spot “fake news.”
Regardless of the differing opinions on the application of artificial intelligence, the technology has become an inextricable part of our daily lives. Whether it makes our lives easier or more difficult is often up to us, USC experts say.
“We’re our own foes, and AI just kind of propagates that,” says Fred Morstatter, research assistant professor of computer science at the USC Viterbi School of Engineering. “It all comes down to that adage of ‘garbage in, garbage out.’”
Reinforced bias
Morstatter specializes in AI bias. He says some of those earlier examples — job postings and loan programs — are just a couple of ways algorithms can reinforce prejudices.
Algorithms — sets of instructions that can be as simple as an “if-then” statement — aren’t inherently problematic. AI is an umbrella term for groups of algorithms that can modify themselves and create new algorithms based on learned inputs and data.
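To make that concrete, here is a minimal sketch in Python of the kind of “if-then” rule described above, applied to the retargeted-ad scenario from the opening. The function name and the 30-second threshold are hypothetical, invented purely for illustration:

```python
# A hypothetical retargeting rule: an algorithm can be this simple.
def should_show_ad(seconds_on_page: float) -> bool:
    # If the shopper lingered on the product page...
    if seconds_on_page > 30:
        return True   # ...then follow them around with an ad.
    return False

print(should_show_ad(45.0))  # True: the ad "calls your name"
```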
“There are algorithms now that predict whether or not you should get a loan, and they’re trained on past data — people who got loans and who didn’t,” Morstatter says. “There’s racial bias in those data sets, so they propagate that data going forward.”
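A small, self-contained sketch can show the mechanism Morstatter describes. The data below is synthetic and the setup deliberately simplified; it only illustrates how a model trained on prejudiced approval decisions carries that prejudice forward:

```python
# Sketch with synthetic data: a model trained on biased historical
# loan decisions reproduces that bias for new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

income = rng.normal(50, 15, n)   # a legitimate signal
group = rng.integers(0, 2, n)    # protected attribute, 0 or 1

# Biased historical labels: past approvals depended on income but
# also penalized group 1, baking prejudice into the training data.
approved = (income - 10 * group + rng.normal(0, 5, n) > 45).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two identical applicants who differ only in group membership:
probs = model.predict_proba([[50, 0], [50, 1]])[:, 1]
print(probs)  # the group-1 applicant gets a lower approval probability
```

Nothing in the model “wants” to discriminate; it simply reproduces the pattern in its training labels: garbage in, garbage out.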
People are aware of AI’s presence in our daily lives, Morstatter says, but they may not realize how ubiquitous it is.
“People have a cloudy understanding that [AI] can be biased, but the fact that it’s baked into the things you interact with daily is missing.”
Useful tool or enabler?
A significant concern about AI, especially in academia, is how it can produce content from minimal input (e.g., a quick prompt), leading to issues of plagiarism and academic dishonesty. One name that has gained popularity recently is ChatGPT, an OpenAI chatbot “designed to respond to text-based queries and generate natural language responses.” Users can provide ChatGPT with a prompt or question, such as “Explain tornadoes,” and receive an AI-generated response.
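For readers curious what that interaction looks like programmatically, here is a rough sketch using OpenAI’s Python client. The article describes only the chat interface; the model name below is an assumption chosen for illustration:

```python
# Illustrative only: sending the article's example prompt to a
# chat model through OpenAI's Python library (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{"role": "user", "content": "Explain tornadoes"}],
)
print(response.choices[0].message.content)
```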
However, ChatGPT can also write full pieces of academic or creative work, which is where the concerns come from: students submitting its output as their own. But for faculty members like Mike Ananny, an associate professor at the USC Annenberg School for Communication and Journalism, ChatGPT can be a tool just like anything else. He argues that framing it as a tool for cheating might say more about people than about the AI itself.
“It feeds this culture of treating students like they’re about to cheat,” Ananny says. “There’s something about that approach to student-teacher relations that I think misses the mark.”
Ananny, who teaches his students about ChatGPT in some of his classes, says the issue is more around how it is used than what it is capable of. The generator might be able to create longer pieces, but it’s still up to the students to edit and ensure what was generated is accurate.
“The act of editing and revising something ChatGPT produces can be a really valuable learning experience because you have to know a lot more [about the subject] to edit something,” Ananny says.
The outrage surrounding a new technological tool in academia is nothing new. According to Kimon Drakopoulos, an associate professor at the USC Marshall School of Business who specializes in AI, ChatGPT is just the latest target.
“It’s the same situation as in the ’90s when people were demonizing the internet and how encyclopedias weren’t going to be relevant anymore — I’m actually happy encyclopedias are not relevant anymore,” Drakopoulos says. “I see it as an opportunity and not at all a threat.”
Lost in an online world
Stephen Byars, a professor of clinical business communication at USC Marshall, says that people, as consumers, need to be honest with themselves about AI.
“In every application and design of AI, we have to decide whether it is making our lives better,” Byars says. “Is it giving us useful technology for the work we want to do and for the human relationships we want to have? Or does it isolate us more and put us in environments where our only contact is with our laptop or phone?”
In a way, Byars notes, the COVID-19 pandemic led the public to let its guard down on privacy: people viewed more content online, bought more online and, especially, communicated more online. When the world went virtual, how could we not divulge more about ourselves just to stay connected?
“To the extent that AI can free us up to have more time [for ourselves] or have richer contact with each other, those are good things,” Byars says. “But AI also can make us prisoners of a technical environment — it’s just so easy to get lost in an online world.”
Byars acknowledges the pitfalls of being online. But when he brings up the prospect of less privacy to his predominantly Gen Z students — for example, all phones have cameras and location services — the majority don’t see an issue.
“Not all my students, but many of them, just accept this as a normal accompaniment to the greater use of technology,” Byars says.
That attitude isn’t exclusive to the younger generation, either.
“My mom says, ‘Oh, I have nothing to hide. I don’t care if you can mine my emails,’” Morstatter says. “I feel differently. I think more control should be put into people’s hands to manage those things.”
There are ways to trick algorithms without going entirely offline, he adds. For example, you can tag yourself as multiple people on Facebook or Instagram so the algorithm can’t recognize your face. You can use a virtual private network (VPN) to encrypt your traffic so your internet service provider (ISP) can’t read your browsing activity, or simply decline cookies from a particular website.
Morstatter says AI ethics come down to education: knowing what data is collected and how to limit what you allow. Ultimately, AI is intended to make our lives easier, and users have the power to determine how it’s used.
“I seldom blame the algorithm,” Morstatter says. “I just don’t think there are too many algorithm designers out there trying to destroy the world.”