This story was written by a human with help from artificial intelligence.
A human researched the story, interviewed sources, and typed these words. Artificial intelligence (AI) powered the web browsers that facilitated online research. It transcribed recorded interviews. It suggested keywords and search engine optimization terms for the story's digital edition.
The disclaimer above hardly seemed necessary a year ago. But maybe it is in the age of ChatGPT.
You may have heard of the advanced chatbot that became the fastest-growing consumer software application in history after its release last November. Able to quickly and coherently generate text based on human prompts, ChatGPT has demonstrated the power of artificial intelligence in a way easily grasped by the public. The biggest names in tech—Google, Microsoft, Apple, Adobe—have invested heavily in AI. Some of their products already include tools like ChatGPT. More are on the way.
“AI has been used successfully in devices for quite some time,” said Charles Edamala, Illinois State University’s chief technology officer. “What’s new, of course, is that AI is now directly available to the general public.”
ChatGPT's abilities have captured the most public attention, but comparable models capable of producing images, audio, and video further showcase AI's explosive growth. Applications of the emerging technology are wide-ranging, and opinions about "generative AI," as the new category is known, are polarized.
For some, it’s a tool that will advance society. For others, it’s a step toward human extinction.
“Maybe it’s hyperbole, but this might be the biggest thing we’ve done since fire,” said Dr. Roy D. Magnuson, an Illinois State associate professor of Creative Technologies.
Magnuson has used generative AI to develop virtual reality programs and write source code. He’s even used it to figure out family dinner plans by inputting a list of items in his refrigerator into ChatGPT. He uses the technology almost every day.
“It’s like an iPhone moment right now with increased computing power and ease of use,” he said. “But it may be even more tectonic than that and change the way the world works.”
The AI flashpoint arrives decades after the technology became ingrained in modern society. AI is at work in things we use every day: the internet, social media, smartphones. But the advent of generative AI tools marks a new era in which end users can direct AI to accomplish specific tasks with relative ease.
Generative AI models are powered by neural networks, so named for their mimicry of the human brain, that identify structures and patterns within data to draw conclusions and generate content. Text-based platforms like ChatGPT are run by a large language model, a type of neural network trained on a vast data set, able to understand and compose text.
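At toy scale, the next-word prediction at the heart of a large language model can be sketched with nothing but word counts. The example below is a hypothetical illustration only: real systems like ChatGPT use neural networks with billions of parameters rather than a lookup table, but the core task of predicting the next token from patterns in training data is the same.

```python
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in the training text,
# then predict the most frequently observed follower. This is a teaching
# sketch, not how production LLMs are built.
def train_bigram_model(text):
    """Tally, for each word, the words that follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most common word seen after `word`, or None if unseen."""
    followers = model[word.lower()]
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice, vs. "mat" once)
```

The gulf between this sketch and a real LLM, trained on a vast slice of the internet, is enormous, but it shows why training data matters so much: the model can only reproduce patterns it has seen.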
Neural networks also form the engines that drive AI systems capable of generating other forms of media. Notable applications in the past year include a controversial AI-generated image that won the Colorado State Fair’s digital arts competition, a deepfake song featuring Drake and The Weeknd that went viral this spring, and a nightmare-inducing text-to-video test commercial for the Toronto Blue Jays created by an unaffiliated TV producer.
The realm where generative AI is most advanced, however, is in text models. “It’s very clear that ChatGPT has made people aware of the power of AI in a way that no other tool has before,” said Dr. Elahe Javadi, an Illinois State associate professor of Information Technology. “ChatGPT has made it salient for everyone.”
Introduced to AI nearly 25 years ago as an undergraduate student, Javadi sees potential for generative AI models to increase efficiency and automate mundane tasks. But she has concerns, too. She worries about privacy and data security not just in the text inputted into generative AI systems but in all published content that can be pulled into a data set. Because of this, she refrains from posting photos of her daughter online. “I think that’s a decision she has to make,” Javadi said. “Every time you post a picture publicly—or even privately on a platform that assumes rights to it—you’re training an AI model.”
Javadi further explained that data containing bias learned by an AI model will generate content reflecting those flaws through what she calls “algorithmic bias.”
Assistant Professor of Information Technology Nariman Ammar is keenly aware of this reality. A computer scientist who first delved into health informatics during postdoctoral work, Dr. Ammar has helped construct AI models that predict health outcomes, gauge hospital capacity, and recommend services based on patients’ needs and locations. But problems arise, Ammar said, when AI models contain ethnic and racial disparities.
“The question is: When the AI algorithm is trained on data, does it take those minority groups into account?” she posed. “Or does it learn based on the majority population?”
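The majority-population problem Ammar describes can be seen in a deliberately crude example. The sketch below is hypothetical and not drawn from any real health-informatics model: a "model" that simply learns the most common outcome in its training data can look accurate overall while being useless for an underrepresented group.

```python
from collections import Counter

# Toy illustration of algorithmic bias: a model that always predicts the
# majority outcome it saw in training. Purely hypothetical data.
def train_majority_model(labels):
    """Learn the single most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Imagine 90 training records from group A (outcome "low risk") and only
# 10 from group B (outcome "high risk").
training_labels = ["low"] * 90 + ["high"] * 10
prediction = train_majority_model(training_labels)
print(prediction)  # "low"

# Overall training accuracy is 90% -- yet the model is wrong for every
# member of the underrepresented group.
```

Real models are far more sophisticated, but the failure mode scales: when one group dominates the training data, aggregate accuracy can mask poor performance for everyone else.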
AI also sometimes gets it wrong when neural network wires get crossed, a phenomenon known as "hallucination" in which chatbots confidently generate false responses.
Ammar and others shared concerns about using AI systems to provide direction—without human validation—in situations with life-altering consequences. "We'll continue to need human experts because of AI biases and hallucinations," said Edamala, who used a hypothetical WebMD chatbot as an example where human fact-checking would be necessary.
Beyond threats to human life are threats to livelihoods. The rise of AI capable of replicating voices and mimicking art is particularly concerning to creatives. One of the lead designers of State magazine, Mike Mahle maintains a robust freelance business with some of the biggest names in entertainment among his clients. Mahle is fully aware of AI’s ability to generate content using artists’ published, proprietary work. He hasn’t pulled his work offline, but other artists have. “It’s worth more to me to get my stuff out there than the threat of AI generating something from it,” he said.
AI-generated art has become a frequent source of controversy—and litigation. Getty Images filed suit earlier this year against Stability AI, maker of the image generator Stable Diffusion, after its copyrighted photos appeared in AI-generated images, some with Getty watermarks plainly visible. And while some AI usage of copyrighted material is clearly unethical, there is plenty of gray area in instances like generative AI's ability to create content "in the style" of an artist.
“My style is one I’ve developed over the years, but it didn’t come out of nowhere. It’s a conglomeration of my favorite artists, and me taking their styles and incorporating them into my own,” Mahle said. “And I don’t know if that’s really all that different than what’s happening here.”
Those gray areas may give rise to a legal specialty as regulation of AI plays out in courtrooms. Professor and Director of Creative Technologies Rose Marshack draws a parallel between what’s happening now and the advent of filesharing in the late 1990s with programs like Napster.
“I was putting out records on a major label when people became able to trade MP3s, and I still functioned in the music industry for many more years,” said Marshack, a founding member of the indie rock band Poster Children. “I researched carefully, understood the landscape, and I adapted.”
Higher education is quickly learning the importance of adaptability as it responds to the fast-moving technology. Illinois State has no codified academic policy on generative AI; instead, the University's Center for Integrated Professional Development has curated a faculty guide offering tips both on course design that discourages generative AI and on learning experiences that integrate it.
“If we ignore it and pretend it’s not happening, our students will come out behind students at other institutions that are embracing it and using it in teaching and learning,” said Dr. Craig Gatto, Illinois State associate vice president for academic administration.
Many departments at Illinois State are welcoming the technology. “AI presents a once-in-a-lifetime opportunity for us to empower students by amplifying their analytical and communication skills,” said College of Business Dean Ajay Samant. “The possibilities that AI brings for learning success and career success are endless.”
Unethical use of generative AI in course work at Illinois State is subject to the University’s academic integrity policy. But there is consensus—at least, among faculty sources interviewed for this story—that any new policy specific to generative AI must be flexible to accommodate technological advances and special circumstances. Some suggested an AI course as part of the University’s general education curriculum.
“What we teach and how we teach will need to fundamentally change,” Magnuson said. “When students leave here, their employers aren’t going to ask them to turn the AI off.”
The future of AI prompts more questions than answers. Will it cure cancer or solve climate change? Or will it be weaponized? Is it as evolutionary as fire? Or as deadly as a pandemic? Will it be used ethically or unethically? And will people value the authenticity of humanity over the convenience of artificiality?
Only time will tell, but the future of AI will largely be shaped by decisions made now and actions taken by sovereign nations, tech companies, and institutions of higher learning like Illinois State.
“When a new technology emerges, it’s important to be able to acknowledge and adapt, and then face the future,” said Marshack. “And that’s what we teach here.”
AI 101: An artificial intelligence glossary
Algorithm—a set of rules or instructions a machine follows to achieve a goal.
Artificial intelligence (AI)—machine ability to perform functions normally associated with the human mind.
Chatbot—a computer program designed to answer questions from humans.
Data set—a collection of data, the key component of machine learning.
Deepfake—an image or recording manipulated to misrepresent what someone did or said.
Generative artificial intelligence—AI that can produce new content after training on data sets.
Hallucination—the phenomenon of chatbots confidently providing false information.
Large language models (LLMs)—deep learning algorithms that recognize, summarize, translate, predict, and generate text based on large data sets.
Machine learning—the use and development of computer systems that can analyze and draw inferences from patterns in data.
Neural network—a form of machine learning that teaches computers to process data in a way inspired by the human brain.
Prompt—the instructions a human provides AI to perform a task.