Suhong Li, Ph.D., describes herself as an early adopter of ChatGPT. The professor of Information Systems and Analytics finds the artificial intelligence (AI) chatbot more helpful than Google when searching for data science and programming functions: instead of scrolling through millions of results, she gets a single answer that works, saving her time.
“For the data science and IT field, ChatGPT is a tremendous help,” says Li, adding that her students who previously used Google as a search engine have now opted for ChatGPT.
The San Francisco-based company OpenAI released ChatGPT in November 2022. As a form of generative AI, the chatbot produces human-like text based on input it receives from users. But the advancement of AI is outpacing the technology’s policies and regulations, causing industry leaders such as Elon Musk and Apple co-founder Steve Wozniak to call for a pause on the deployment of the most advanced AI systems.
Li says AI systems need to be safe, reliable, and ethical since they capture sensitive information.
“Regulations need to make sure AI systems are transparent,” says Li. “Furthermore, AI developers should create clear ethical guidelines and build accountability mechanisms into their AI systems.”
She adds that the development of generative AI has already brought misuse and bias. This discrimination shows up in an AI model’s outputs, Li says, and can be traced to either the data used to train the model or the model itself. She explains that a 2021 study out of Carnegie Mellon University found that the autoregressive language model GPT-3 exhibited gender, race, and religious biases; researchers discovered GPT-3 produced gender-stereotyped language, such as associating various occupations with specific genders.
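Readers who want to see this kind of effect firsthand can probe a language model with a few lines of Python. The sketch below is a simple illustration rather than the study’s methodology: it uses the open-source Hugging Face transformers library and the publicly available bert-base-uncased model (both assumptions of this example, not systems named in the article) to ask which pronoun the model prefers for different occupations.

```python
# A minimal bias probe: ask a fill-in-the-blank language model which
# pronoun it prefers for various occupations. Illustrative only; this
# uses bert-base-uncased, not the GPT-3 setup from the study Li cites.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer", "teacher", "mechanic"]:
    sentence = f"The {occupation} said that [MASK] would be late."
    results = fill(sentence, targets=["he", "she"])
    probs = {r["token_str"]: round(r["score"], 3) for r in results}
    # Strongly skewed probabilities hint at learned occupation-gender
    # stereotypes of the kind the researchers reported.
    print(occupation, probs)
```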
“Facial recognition systems have also been discovered to be biased toward specific racial and ethnic groups, resulting in misidentification and false allegations,” Li continues. “A National Institute of Standards and Technology research study, for example, discovered that some facial recognition algorithms were more likely to misidentify Asian and African American faces than the images of Caucasians.”
The public has already witnessed how generative AI can be used to spread misinformation, especially through imagery. Recent falsified photos include one of Pope Francis sporting a white puffer coat; another, showing police arresting former President Donald Trump, went viral on social media in mid-March.
It may be difficult to judge whether a photo is real or fake, so Li suggests people use Google Reverse Image Search or TinEye to verify an image’s authenticity. These websites allow people to upload images or URLs and search for other instances of the photo on the internet. If the image appears in multiple contexts, it's more likely to be real, says Li.
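Those services are web tools, but the idea behind matching copies of an image can be sketched in a few lines of Python using a perceptual hash, a compact fingerprint that changes little when a photo is resized or recompressed. The example below assumes the open-source Pillow and imagehash libraries; the filenames are placeholders.

```python
# Compare two images with a perceptual hash: near-identical photos
# produce nearly identical hashes even after resizing or re-saving.
# Requires: pip install Pillow imagehash. Filenames are placeholders.
from PIL import Image
import imagehash

hash_a = imagehash.phash(Image.open("viral_photo.jpg"))
hash_b = imagehash.phash(Image.open("candidate_match.jpg"))

# Subtracting two hashes gives the Hamming distance between them;
# a small distance (roughly < 10 of 64 bits) suggests both files
# show the same underlying picture.
distance = hash_a - hash_b
print(f"Hash distance: {distance}")
if distance < 10:
    print("Likely the same image (possibly resized or re-saved).")
else:
    print("Probably different images.")
```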
“Another way to verify the authenticity of a photo is to check the source. If the photo is from a news outlet or reputable source, it's more likely to be real. If the photo is from an unknown or unreliable source, it's more likely to be fake,” Li says.
She adds that a photo may have been manipulated if individuals detect background inconsistencies, missing details, or out-of-place shadows and reflections. Online photo forensics tools, such as FotoForensics, can help people detect areas of a photo that have been altered.
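One common technique behind such tools is error level analysis (ELA): re-save a JPEG at a known quality and examine how much each region changes, since areas edited after the original save often recompress differently and stand out. Below is a minimal sketch of the idea, assuming the open-source Pillow library and a placeholder filename.

```python
# Minimal error level analysis (ELA): re-save a JPEG at a known
# quality, then amplify the per-pixel difference. Regions pasted or
# edited after the original save often recompress differently and
# appear brighter in the result.
# Requires: pip install Pillow. The filename is a placeholder.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("suspect_photo.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg")

# Difference between the original and its re-saved copy, brightened
# so the compression artifacts become visible to the eye.
ela = ImageChops.difference(original, resaved)
ela = ImageEnhance.Brightness(ela).enhance(20)
# Inspect the output: a uniformly dark image is normal, while bright
# patches may indicate edited regions.
ela.save("ela_result.png")
```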
Looking toward the future, Li sees AI automating repetitive tasks, exploring new possibilities, and generating ideas. By handing certain tasks over to AI, companies could reduce costs, improve efficiency, and foster innovation. She believes more businesses will explore AI, noting how the technology could be used in code development, product and document search, personalized marketing and financial services, and customer service automation.
“A good use of AI lies not in its ability to replace humans but in its ability to augment and enhance human capabilities — making our lives easier, more efficient, and more fulfilling,” Li says.