
E35: AI-Generated Content | Rodica Neamtu | Computer Science

In this episode of The WPI Podcast, we explore why video, images, and other content generated by artificial intelligence seems to be everywhere on social media.

Rodica Neamtu, professor of teaching in the Department of Computer Science, explains how AI-generated content is easily produced and distributed and why it can be difficult to distinguish it from human-generated content. She discusses the societal challenges that emerge when people struggle to determine what’s real versus what’s fake, and she proposes strategies to navigate the fast-moving disruption caused by AI development.

Neamtu also shares how she works with WPI students to create AI tools to meet public needs and to consider the potential implications of technology they develop.

Related links:

About AI at WPI

Computer Science Department

Bucharest, Romania Project Center

Project-Based Learning

Major Qualifying Project

Interactive Qualifying Project

Host: Jon Cain
Guest: Rodica Neamtu

AI-generated graphic depicting Professor Rodica Neamtu and her expertise, created through several re-prompts and iterations.

Transcript

Jon Cain: AI-generated content is seemingly everywhere. You'll find video clips that look like movies, high-quality photos, and more. All it takes is a few user prompts and, bingo, you can create your own. There are fantastical scenes like crooning canines that'll make you laugh, but there are also examples like fake footage posing as news that make it hard to know what's real. Today on The WPI Podcast, we'll explore AI-generated content, how to navigate the confusion it causes, and how AI can be used to help people. Hi, I'm Jon Cain, and this is your home for news and expertise from the classrooms and labs of Worcester Polytechnic Institute. Our guest today is Rodica Neamtu. She's a professor of teaching in the Department of Computer Science at WPI. She's a machine learning expert. She studies bias in AI systems and ways to improve data mining, with the goal of developing technology to address societal needs. Rodica, thanks for being here on The WPI Podcast.

Rodica Neamtu: Well, thank you for having me. 

Cain: And before we get into the conversation, I just need to give a disclosure that this program is 100% human generated content. No AI involved. 

Neamtu: Yes, we are not AI. No, that's just a joke.

Cain: You’re not a bot? 

Neamtu: Yeah. No, I am not a bot.

Cain: Rodica is not a bot. I'm not a bot. So rest assured everybody. I wanted to start by asking you how is it that people create the type of AI generated content that you might see in your social media feed every day? 

Neamtu: So, in so many ways, Jon. Lately, the software that allows people to generate content through AI has become so accessible because it requires less and less expertise with technology, if any at all. And there are many, many ways; AI-generated content spans the range from just text or captions to audio or video or full experiences for people. So there are many, many tools out there. Some of these tools that allow you to create content also allow deployment across multiple media platforms, and that is why we're experiencing such a high volume of content. For example: Canva, Meta AI, Buffer, ChatGPT. All of these allow people to just prompt-engineer, as in, they will say in natural language what they want, and then often they can support that with some existing text they have, maybe a photo, maybe a little video. And not only can they request the appropriate format, they can request as much as the engagement tone: Do I want this to be funny? Do I want this to be appropriate for a business setting? So there are many, many different ways for people to create content and deploy it very rapidly.

Cain: And I know, certainly when I've looked at them, the content that's generated is so realistic. Not only is it fast, but it looks, uh, movie quality in some senses. And people, I think, are really amazed by what is produced.

Neamtu: Yeah, so I think it is an increasing trend towards what is coined as hyperrealism, which means that it's becoming harder and harder to distinguish AI generated content from content created by people, which is in some respects a good thing, right? Because now we can have these super good quality things that can allow us to express our thoughts or to teach people certain things, but they could also be dangerous because now people will have to continuously ask themselves how can they assess the provenance of whatever media it is that they're looking at.

Cain: Can you talk a little bit more about why it's so difficult to distinguish between AI generated and the human generated content?

Neamtu: One of the major factors is the sheer volume of content that we are literally being inundated with. Even though we could do it justice, in reality that is very time consuming. And beyond the time it consumes, this overload of information is becoming oppressive on people; it comes at us from all directions, in all forms, and in all aspects of our lives. And if that wasn't enough, often people are just not prepared. They do not know enough about AI to know what to question, how to question it, how far to go down that questioning road, and when enough is enough, when they can say, now I'm positive about what I'm looking at. To aggravate all of that, the way this information usually gets spread over social media has to do with the way the algorithms direct the flow of information. Interestingly enough, it has been quite a while since people, whether they know it or not, have been isolated in these affinity bubbles, so to speak, where the feed that people get in their social media accounts reflects their preferences and their activities. That leads to people being fed news and media that is agreeable to them or that they seem interested in. All of a sudden, the things that come into your orbit, so to speak, become the things you want to see, and you're not even aware that you are now completely separated from any other points of view, from any other opinions. And on a more technical note, the algorithms do actually prioritize engagement: the more a certain video or piece of audio or post gets read or shared, the more it is going to be propelled to be shared by others. So if AI-generated content seems to get traction in a certain bubble, it will multiply that much faster than, let's say, an authentic piece created by a human.
One other thing that comes to mind is that often people actually do understand what we're talking about. They understand how the algorithms work, and they could theoretically use tools to evaluate if what they're looking at is authentic or not. But it is a reality that between the volume and the quality, the amount of expertise that we need and the time that we have, we just get tired, tired of having to constantly question if what we are looking at is human generated or AI generated.
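The engagement dynamic Neamtu describes, where a post that pulls slightly ahead gets shown more and therefore shared more, can be sketched as a toy simulation. The scoring rule and all the numbers below are illustrative assumptions, not any real platform's algorithm:

```python
# Toy feed-ranking simulation: posts are ordered by raw engagement,
# and higher-ranked posts earn more new shares each round, so an
# early lead compounds. Purely illustrative; no platform publishes
# or uses a rule this simple.

def rank_feed(posts):
    """Order posts by engagement (likes + shares), highest first."""
    return sorted(posts, key=lambda p: p["likes"] + p["shares"], reverse=True)

def simulate_rounds(posts, rounds=5):
    """Each round, exposure (and thus new shares) follows rank."""
    for _ in range(rounds):
        ranked = rank_feed(posts)
        for position, post in enumerate(ranked):
            post["shares"] += len(ranked) - position  # top post gains most
    return rank_feed(posts)

feed = [
    {"id": "ai_clip",    "likes": 10, "shares": 12},  # slightly ahead
    {"id": "human_post", "likes": 10, "shares": 10},
]
final = simulate_rounds(feed)
print([p["id"] for p in final])                 # → ['ai_clip', 'human_post']
print(final[0]["shares"] - final[1]["shares"])  # gap grew from 2 to 7
```

Under this toy rule, the AI clip's two-share head start compounds into a seven-share gap after five rounds, which is the snowballing effect Neamtu points to.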

Cain: It's really fascinating. It sounds like there are a lot of factors that go into it, and I've certainly seen it in my own feeds. You know, you look at one video of a Shetland Sheepdog 'cause I have a Shetland Sheepdog, and suddenly there's more there. There's a lot going on behind the scenes.

Neamtu: I think it actually is something that we should get into the habit of: always wondering, when our media feeds come in, am I seeing the other side of things? And reminding ourselves that most likely we are not, unless we make a conscious effort to keep reading from both sides and avoid polarizing our attention in one direction.

Cain: And on a lighter note, I guess I need to be looking for cat videos. 

Neamtu: I would say that is a good idea, Jon.

Cain: I'd like to sort of expand on this beyond just the individual, what's the danger if the public struggles to figure out what's AI and what's real?

Neamtu: Confusion, distrust, anxiety, polarization, bias: those are, if you want, labels for the consequences. The one that I do wanna address first is this idea of us living in echo chambers, as they're known. That definitely not only deprives us of understanding the other point of view, but creates a situation in which it is very hard to accept that there are other points of view. So that basically breaks communication. People cannot live well unless there is communication, unless we can discuss our different opinions and come to some sort of compromise on how to move forward. The second thing that comes out of this is the fact that if we continue to only listen to similar points of view, then we intrinsically become biased. We become biased toward the things that we believe in, toward the way we want the world to work. AI really has amplified the bias in the way people think and the way people act, and has fed hidden biases that we did not even know we have. The other consequence is that, I think, in some respects we are becoming a little more desocialized, because we now hang out with our own social media accounts, and they can bring to us the things that make us feel comfortable. And thus maybe we seek less and less the company of others. Maybe we seek less and less to engage with points of view that could be dissenting from ours, antagonistic to ours. And thus I think we lose the human ability to be accepting and to be tolerant and to be open-minded. But in addition to the idea that we are being isolated because of the way the algorithms direct information to us, there is also this danger of us starting to become less trusting of the things that we see. We do not want to turn people down this mistrust road in which every single thing that comes our way needs to be checked and validated.
And that is something that maybe we took for granted for a long time, until AI came and opened this new Pandora's box of things that now need to be checked consistently.

Cain: So, Rodica, I'm wondering how would you recommend that we as a society grapple with this fast moving challenge?

Neamtu: So what comes to mind first and foremost, as a teacher, is the idea of awareness: making people aware that this does exist, that this is real, as in, AI-generated content is real. And then, in the same line, try to help people understand that these are things they need to question, and that not everything that comes on your screen, no matter how good-looking, is to be trusted. You can, if you really want to, at least explore the idea of understanding whether the content you're looking at is human or AI generated. There are tools out there, some completely free, that one can use at least as an entry point to examine things. So I'll just mention the two that I use. GPTZero is a Chrome extension, and you can use it for on-the-go verification. The other one that, as a professor, I have once in a while used is called Copyleaks, which allows you to cut and paste a lot of text and find out, at least at a high level, what percentage of that text has been generated by AI, and things like that. But there are way more sophisticated tools that people can use. Public awareness, though, can only go so far. The next step, in my mind, is corporate responsibility and transparency, since we can't shoulder this burden on our own. These organizations, companies, media platforms should have a responsibility to help us, and the way they can help us is to be transparent, to disclose in no uncertain terms, and, I'm saying this humorously, not buried in a whole lot of pages that we have to read, their use, to their knowledge, of any of those tools, at the least. It used to be the case in the past that some of the media platforms would post labels like "likely to be generated by AI," which would of course kind of skirt the line of liability for assessing that content.
So some help from these companies, as well as helping people better understand, when they use AI, that this is not the ultimate and always accurate answer to what they're looking at. And to go to the next step: of course, it is not likely that companies on their own will come up with the necessary resources to implement these things. So, of course, I am a believer that some government regulation will be needed to help people find their way through this jungle of AI-generated content.

Cain: I wanted to talk a little bit about younger generations. I'm wondering what your thoughts are on what we can do for our students to ensure that they build the appropriate skills to distinguish AI-generated content from human-generated content.

Neamtu: Younger generations are certainly becoming very familiar with the use of AI. And because they're starting at such young ages, when they still need to build their critical thinking skills, they might take AI more for granted, in terms of accepting and not questioning what they get, than the rest of us who have seen, uh, more things. For younger generations, it's very important to build the awareness that AI is not an answer that does not need to be questioned.

Cain: Mm-hmm. 

Neamtu: That there are always things and angles that they need to look at before accepting what they see as the ultimate truth. Now, closer to our students: our students, too, have lived in a way more technologically advanced era, so they're familiar with these technologies. They're part of their lives. So I think for us, and I see that for myself as somebody who teaches and somebody whose research area centers around artificial intelligence, I constantly remind them that even though AI capabilities are becoming increasingly good, there are some things that AI cannot yet, hopefully not soon, do, which is replace critical thinking, replace human skills, human feelings, human expressions. And in that regard, I'm trying to help my students become familiar with AI, but in a cautious way. I want them to understand how AI works. I want them to understand what is inside the models and how they work, as well as the testing and training strategy from which the answers are drawn, so they can actually understand how to interpret what AI produces for them. Since we are a project-based learning institution, I think that the best way to help them build those skills is by doing. So it is always a preoccupation of mine to try to engage my students in using those tools, but more so in thinking about how we could make these tools better, how these topics that you are discussing with me today could maybe become less of a struggle for people and more of a gain moving forward. I'm also trying to help them realize that AI is here to stay. I believe in AI becoming, in some respects, like a companion: something that helps us, allows us to do things faster, better, but never replaces who we are and what we do and how we think. I guess, in summary, I would say literacy. For our students, AI fluency, because we are a technical institute.
And to that extent, I have been involved in several initiatives here at WPI. One of my nearest and dearest was introducing the AI for All program at WPI, which basically helps students, especially from other departments, not necessarily computer science, start learning about AI, becoming more familiar with it, and seeing it not as a threat but as a tool that they can use to move forward. And most importantly, as computer scientists, and now I'm really speaking for the people in my department, we are the people who create those tools. So our contribution should be to make these tools work for the betterment of society.

Cain: So I'm wondering what your thoughts are on how we can make sure that our future generations, our current WPI students, are ready to use AI in a moral and ethical manner. 

Neamtu: I believe that there is a lot of work we need to do in creating this awareness that the tools we're creating, and it's not just the AI tools, have massive and profound social, moral, ethical, and economic implications. And although we all know that every program has courses in ethics, or at least parts of ethics, I think where I see us really making a difference is making sure we are infusing this awareness that the tools we are creating really change people's lives to the core, from the way they bank, to the way they learn, to the way they raise children, to the way they go on vacation, and making sure that we practice that. We don't just learn it by saying these are the right things to do; we infuse these ethical and moral and social implications into everything we do, like examining what the impact would be of any tool that we're building in our projects, in our MQPs, in our IQPs, in the research that we do. Keeping that at the forefront, because in the past this has almost always been an afterthought: we first create the tools and then we examine what the tools can do. We are trying to prepare our students to be lifelong thinkers about what the impact of the products they create should be on the world. It should become like second nature that whenever they develop, or we develop, a tool, we should always think: can somebody ever turn this tool around and use it in a way that I have not thought about, and that might not be beneficial to society?

Cain: I just wanted to mention a couple things. The MQP, that's the Major Qualifying Project at WPI. It's a senior-level design project required for graduation. And the IQP is the Interactive Qualifying Project, typically done in the junior year here, tackling multidisciplinary, interdisciplinary challenges at the intersection of technology, engineering, science, and society. And a lot of these ethical questions are core to the undergraduate and master's programs in artificial intelligence that WPI offers. Rodica, you mentioned that you're doing a lot in your teaching to engage students in AI and some of these questions. Are there other ways that you do that?

Neamtu: I always see myself as teaching at least as much outside the classroom as I do in the classroom. So what I'm actually talking about is the various kinds of projects where I can help my students engage even better with these topics. For example, the Major Qualifying Projects are the perfect environment for students to experience both the development of the tools and the tuning of these tools to make sure they serve society the right way, but they also offer us the time to ponder how to develop them considering these implications. I do wanna mention some of my most socially engaged projects, which have to do with using the power of AI to enhance the experiences of people with various disabilities. So these are accessibility projects, done with a few organizations in the Greater Worcester area: the Worcester Art Museum, the EcoTarium, and the Audio Journal, which serves people with visual impairments. The apps that we're building here help. I'll give you an example. They enhance the experience of blind and visually impaired people who go to the art museum. AI allows us to provide people with personalized and customized experiences where they can touch an exhibit that they cannot see with their eyes, or it guides them so they can visit the outdoor exhibits at the EcoTarium. In the past I have been engaged in one of the most interesting things, an app that helps people who cannot speak at all and who have severe motor impairments, which means they can't handle a cell phone easily, to communicate through pictograms. AI makes a big difference because AI can predict, based on past behavior, based on the time of day, based on the geographic location, what it is that a person might want or need at a given time. And it equally helps caregivers give the best care to these people.
So these are pretty computer science-y. And thus I'm gonna talk a little bit about my preoccupation with also helping students who are not necessarily in computer science. And I'm talking now about the IQP. I am the co-director of the Bucharest Project Center, and I'm trying to combine these two roles in a way that will most benefit our students and, of course, society. I was born in Romania, so I believe that our projects in Romania should not just help our students transform who they are, but also help the people that I come from, to say it in a more poetic way. So I'm gonna give just a couple examples of the kinds of projects that our students at WPI have completed in Romania. One of them, for example, is a chatbot prototyped in collaboration with a Romanian organization that helps female refugees from Ukraine who were seeking help on matters that were sensitive, culturally different, and that people were reluctant to discuss with others. Another group has developed an app that is like a game, a gamification of the idea of helping younger kids understand what communist society was like. They play this game in which they get a feel for how people lived, how the government intervened in people's lives, and things like that. And last but not least, I wanna mention that I am also very involved in interdisciplinary research that is centered around AI and machine learning. It has to do with using the power of AI to imagine and create and design new things, like new materials that could replace carbon energy, or new materials that would help create antibacterial substances, antiviral substances. And a little closer to what I really teach in my classroom, to kind of close this loop a little bit: in the last few years I have worked with students to develop some educational tools.
They are all semi-AI-powered, and they allow students to expand on what we learn in the classroom, experience it more like a game, take it outside the classroom, play with it, and understand that someday people might not need to know all the things they learn in the classroom in the same way. Learning that going back to the roots and understanding the fundamentals is what makes us confident that in the future it is not AI that is going to lead the world. The tools that we're creating are just going to be that: tools. We are going to continue to lead the world.

Cain: It's really inspiring to hear about the different examples of the way that AI can be leveraged for good and, and the fact that our students are actively engaged in that now, building, designing, creating, refining these different tools and thinking about, um, the implications of them and, and trying to find ways to, to help people. I should mention the Romania Project Center is one of around, uh, 50 project centers WPI has on six continents where our students can go to complete any of the required, uh, projects for graduation. One of the key components of, uh, the WPI education is the Global Projects Program. Rodica, you do a lot of work as well in, um, studying bias in AI systems. I'm wondering if you could tell us a little bit about, uh, what you do there, what motivates you and maybe what you've seen as some examples of how bias is baked into some of these AI systems. 

Neamtu: I'm glad you asked me that, because this is really something that I would like people to hear. AI models could potentially carry different kinds of bias. There's bias in the model, which basically boils down to the weights you assign to certain things to make a decision. Whether people know it or not, it's like the credit score: there are these rules that nobody knows, but that then impact our lives. So that's how bias in the model works. And then there is bias that comes from the data. These AI tools can only provide things based on how they were trained, how they learned. If you train a model using a specific kind of data, like if your model only refers to cats, it will be very hard for the model to answer questions about dogs correctly, and I think it's very important for people to understand that. So, talking about questioning things: even if they might not question whether the cat or dog was generated by AI, they should always question whether the answer they receive, especially in, like, medical situations or loan rates or insurance, might contain this bias that can come either from the model or from the data. On a more humorous note, to kind of let people know how that really could come about: I am, of course, a user of AI, and whenever I give a talk about AI, I try to take people into that world by showing, instead of a real photo of me, an AI-generated profile that basically feeds off my WPI profile. There's a website where each faculty member has their information posted. So I scraped my information using a couple of different models, and every single time, interestingly enough, although it seems to capture exactly the essence of my teaching and my research and all that, and it even portrays me wearing the gowns that we wear at graduation, the regalia, it only seems to portray me as a white male with glasses. And, uh, I always fight back.
So I re-prompt the model and I say, Professor Neamtu is not a white male with glasses, but the rest of the information is correct. And there seems to be quite a bit of stubbornness; the models persist in producing that. Combining multiple models and refining prompts multiple times eventually does lead to something that looks like me. Although, interestingly enough, a closer look will show lots of, I guess you could call them, errors in the background: words that are not spelled correctly, misinterpretations of some of the things that I do. So AI can definitely capture a lot. It needs a lot more work to refine it.
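The data-side bias Neamtu describes, a model that can only answer in terms of what it was trained on, can be shown with a deliberately tiny sketch. The "model" below is just a majority-label rule over made-up training examples, nothing like a real image or language model:

```python
# Toy illustration of training-data bias: a model fit only on cat
# examples answers "cat" for everything, including a dog. The data
# and the majority-label "model" are deliberately simplistic.
from collections import Counter

def train_majority(examples):
    """Fit the simplest model: always predict the most common label."""
    counts = Counter(label for _, label in examples)
    return counts.most_common(1)[0][0]

# Every training example is a cat -- the sampling bias.
training = [
    ("whiskers, pointed ears",  "cat"),
    ("tail, fur, sleeping",     "cat"),
    ("paws, climbing curtains", "cat"),
]

model_answer = train_majority(training)
print(model_answer)  # → cat  (its answer for ANY input, even a dog photo)
```

The point of the sketch is that no amount of clever prompting fixes this; the remedy Neamtu implies is broadening the training data itself.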

Cain: If you check out the episode page, we'll show you a sort of comparison between what Rodica actually looks like and what the AI model drafted her to look like. I'm wondering if you have any words of wisdom to sort of leave us with as we embark and take our steps into the future in this new AI world.

Neamtu: I don't know if there are words of wisdom, but these are my parting words right now. I wanna say that, for better or worse, AI is here right now. This is not the first time that humanity has experienced disruptive technology; this is probably the biggest one that has happened. But as much as computer scientists can control, to some extent, what these tools do and how these tools treat information, and try to minimize the bias they introduce into people's thinking and into people's lives, everybody else, those of us who are not computer scientists, just has to keep in mind that we have control over how we use these tools. We have control over how much we accept versus how much we check what they give us. And more importantly, I think we all have the power to think of this as an opportunity, a challenging opportunity, an opportunity that, if we want to embrace it, means we now have to keep pace with this fantastically fast development of a technology that is taking over all aspects of our lives, hold our ground, and, instead of being rushed into it, take our time to understand it. Take our time to understand the deep bias that can be planted in us and learn how to fight it. The more people get engaged in the creation of the tools, the creation of the testing and training sets, and the way these things are going to be disseminated, the better off we are going to be, because it's very easy for specific groups to overlook certain things. So the more people with different points of view are engaged in this, the better off we are going to be as a society and the more we are going to serve society.

Cain: Really well said. Well, you've given us a lot of great information today that I hope will help people as they navigate these challenges on an individual level and maybe think about how they can engage in the community to, you know, be aware of and tackle this challenge. Rodica, thanks so much for being on The WPI Podcast.

Neamtu: Well, thank you so much for having me here. 

Cain: Rodica Neamtu is a professor of teaching in the Department of Computer Science at WPI. Before we say goodbye, I wanna let you know you can now listen to The WPI Podcast on Pandora. It's the fifth and latest audio platform that we're on. So look for us wherever you get your podcasts, or as always, right on wpi.edu/listen. On that page, you can find all our episodes and other podcasts from across campus. You can also check out WPI News on the go. That's a section with audio versions of stories about our students, faculty and staff. You can also get the latest WPI news by asking Alexa to “Open WPI.” This podcast was produced at the WPI Global Lab in the Innovation Studio on campus. I'd like to thank PhD candidate Varun Bhat for the audio engineering help. Tune in next time for another episode of The WPI Podcast. I'm Jon Cain. Talk to you soon. 

Degrees

Bachelor of Science in Artificial Intelligence

Master of Science in Artificial Intelligence (On Campus or Online)
