Facing the future: Artificial intelligence is evolving at warp speed—but at what cost?

Contact: Erin Flynn

KALAMAZOO, Mich.—The printing press, electricity, the internet—human innovation has fueled the evolution of society for centuries. Now artificial intelligence (AI) is opening up a world of possibility that could redefine the future and revolutionize the way we communicate, work and live—for better or worse.


"It's definitely going to take the world of work by storm. We're going to see massive implications," says Dr. Chad Edwards, professor of communication, who co-directs the first-of-its-kind Communication and Social Robotics (Combot) Labs at Western alongside Dr. Autumn Edwards and in collaboration with colleague Dr. Patric Spence at the University of Central Florida.

While forms of AI have been around for decades, the emergence of generative AI, which uses algorithms and large language models to create original text, images, audio or video, is relatively new. It's not Alexa telling you the weather or Siri digging up an obscure fact about a celebrity; this technology is learning from data sets, iterating and improving its output as time goes on.

Tech company OpenAI opened the floodgates when it released ChatGPT to the masses in November 2022. Google followed with Bard, and Microsoft introduced its Bing chatbot in early 2023.

"I think that really did create a massive, game-changing event for the general public," Chad Edwards says. "I think it captured a lot of imagination. I, personally, would put it on the scale of the Wright Brothers flying at Kitty Hawk—it's that big."



These chatbots can sift through large volumes of data at a speed that has the potential to make both work and home life more efficient. A chatbot can help with meal planning, creating grocery lists and recipes for an entire week in minutes, or devising a workout plan based on specific goals. Need an explanation of quantum physics for a group of middle schoolers? It can do that. Want to create an agenda for a work meeting combining analytics and reports from the previous month? Save yourself a few hours and let the chatbot do it for you.

And if you haven't joined the chatbot revolution yet, its influence is nearly unavoidable with Microsoft's next iteration, Copilot, which will put GPT software on all Office products. The company says it will create "the most powerful productivity tool on the planet."

But the swift release of increasingly agile AI systems with little oversight and no system of checks and balances has many people on edge. Begin typing "will artificial intelligence" into a Google search bar and the engine's autocomplete function paints a bleak picture drawing from popular user searches: Will artificial intelligence take over the world? Take away jobs? Replace humans?

A contingent of dozens of tech leaders and researchers, including Bill Gates and Elon Musk, are also raising concerns. They signed an open letter in March urging AI developers to hit the proverbial pause button on the "out-of-control race to develop and deploy digital minds that no one … can understand, predict or control." They recommend taking at least six months to temper the explosion of technology and allow for time to discuss potential impacts and create guardrails.


There’s one problem: The animatronic cat is already out of the bag. The evolution of AI is “unstoppable—and we have already experienced several waves of it,” says Dr. Kuanchin Chen, professor of business information systems and director of the Center for Business Analytics, who has been studying AI for more than two decades. “The question is, how are we going to create unique opportunities for humans while using it?”

“There’s a lot of hype around AGI (artificial general intelligence) or super intelligence or consciousness of AI; that’s just hype. It’s really just code and math and algorithms. Of course you can do bad things with it, but it’s not going to suddenly become alive. That’s not how that works,” adds Chad Edwards. “It’s just important to think about how you’re using it and how other people are using it. It’s not a great answer, but it’s about all you can do short of regulations.”

Potential regulations, Chad Edwards says, should involve transparency—knowing who is funding and developing software and where their data sets come from—as well as access to data and training data biases.

"What you get back (when you prompt ChatGPT) is a mirror of society. You get a reflection of the general mood of the internet, in a way, because everything you're hearing is what we've been putting out into the digital sphere for the last several decades," Autumn Edwards says.


"As (AI) continues to grow with more and more data as people interact with it, the guardrails that are put on it by the corporations that create these AI models are only so strong. So, we have to be really vigilant as users," adds Dr. Gwen Tarbox, director of the Office of Faculty Development in WMUx. "We're interested in what AI tells us about ourselves and also how we can be mindful of the pitfalls of AI as well as the innovation."

There are also ethical implications.

AI can be used to disseminate misinformation at massive scale and to create deepfake videos that blur the line between real and fabricated, with the potential for widespread harm.

"How are we going to get a sort of defense against the dark arts fast enough to detect all of this?" says Autumn Edwards. "It may come to the point where we have to consider whether our image, whether our optical physical likeness, is protected information in the same way that our genetic material or biological waste if we go to a hospital is protected."


While it's essential to consider the implications of generative technology and the many other branches of AI, there are also countless applications that could transform society for the better. Chen emphasizes the importance of creating symbiotic opportunities to enhance both humans and technology.

"I'm not worried about job replacement now. I'm not a doomsday thinker," he says, adding that humans will always be needed to frame the scope of questions for AI, to play as strategic navigators in complex or novel scenarios and to make decisions that require an assessment of human needs. "We're talking about expanding the capabilities of humans. Right now, this can be in terms of additional capabilities that were not possible before or sharpening existing skills and making things faster or more efficient. The key is to think about how this collaborative relationship with AI brings about value or innovative opportunities for both parties."

Chen has already worked with companies to integrate AI into their websites, improving user experiences and better predicting user choices. He also has several projects in the works that use voice AI for problem solving, educational assessments, clothing matching and improving personal performance. The possibilities, it seems, are endless—but it's all about balance and seeing AI as a partner, not a rival.

"Everything's moving so quickly; I think the best we can do is try not to get trapped in either a utopian or dystopian vision of it—that this technology is going to determine our path either in a good or bad way," Autumn Edwards adds. "It could be great, it could be terrible, but what really matters is our choices; that's the only thing we can control. If we can intentionally, together in our communities, anticipate possible futures and try to link up potential choices we could make with their likely consequences at an early stage, then we've got the best chance of ending up living in a world where humans and intelligent machines can flourish together."


As AI and generative systems become more ingrained in society and more widely available, some academics are raising concerns about what it means for education. Could students use it to cheat and write papers? Will they even complete reading and research assignments if they can generate summaries and reports in a matter of seconds?

"We're not going to be able to ban it; that's just never going to work. We need to figure out how to work with it, how to teach students that this is another tool. (Teach them) how to use it better, how to know when it's not effective, how to use it for brainstorming or certain tasks and also how to edit it," says Chad Edwards.


"We're talking about expanding the capabilities of humans. Right now, this can be in terms of additional capabilities that were not possible before or sharpening existing skills and making things faster or more efficient," says Chen.

In fact, there are significant benefits when educators embrace AI and use it as a tool to spur creative thinking and innovation.

"The implications for teaching and learning are profound. We can use these technologies to help students and faculty with research, with writing, with planning projects, with creating lesson plans; there are so many ways," says Tarbox.

WMUx has taken a proactive approach to AI integration at Western, putting the University on the leading edge of institutions offering resources and training for faculty on the technology.

"We recognize that our students will be encountering and working with AI for the majority of their lives, so as faculty and as instructional designers and as Western's innovation hub for teaching and learning, we (at WMUx) know that we need to be there for our faculty partners as they work on developing and integrating AI into their courses," says Tarbox. "Many of our students will leave in the next year or so for jobs where AI will be a major component of what they do."


The School of Communication has already begun incorporating AI into its curriculum, adding a user experience/human-computer interaction (UX/HCI) minor. Students in the program study the social and societal implications of human-machine communication and artificial intelligence, as well as user experience and how to design technology that works for everyone. Other courses and programs are in development across the University, including a Haworth College of Business course on AI and business that will debut in the fall, focusing on business implications, topic modeling, natural language processing and generative AI.

"AI is traditionally offered as a computer science course. It has been like that for years, and that is how my AI training years back began as well," Chen says. "But AI has evolved to a point that we see consumer AI everywhere.”

WMUx is also in the process of creating two working groups related to AI. One will focus on AI and ethics, encompassing not only issues of academic integrity but also ethics of AI and best practices. Another will examine bias in AI.


"Our main goal is education: Having the conversations so we can make educated and informed decisions as we work with people," says Alyssa Moon, associate director of instructional design and development in WMUx. "The more people we can get involved in getting those multiple perspectives, the better it is."

WMUx will be inviting the University community to participate in the working groups beginning in fall 2023. The unit has already created a clearinghouse for information called AI @ WMU, which is hosting a series of events designed to familiarize the University community with ChatGPT and other generative AI technologies. Workshops are planned for the summer on creating generative pre-trained transformer (GPT) prompts, bias in AI and using AI in course planning and design.

Plans are also in the works for an AI speaker series in the fall, which will include nationally recognized experts in a variety of fields. Tarbox says the goal is to be a conduit of information for the Western community. She says it’s important to recognize that although the evolution of AI is uncertain, the University has a responsibility to prepare students to take on the challenges of tomorrow and equip faculty and staff with the tools to guide them.

"People are worried it's going to grow a brain and take us over and things like that. I don't think that's a concern," says Tarbox. "I think the concern will be different. We don't know what it is; we don't pretend to know. But I think that's the point about higher education. We are all here to learn more, and even if something unsettles us—or perhaps most importantly if something unsettles us—we have a real need to learn more about it, to get information from experts and to do our best to provide information for our community that is grounded in research and thoughtful discourse." ■