Academy of Management

By Daniel Butcher

As generative AI usage continues to grow rapidly, critics point out that OpenAI’s ChatGPT fabricates sources, including citations, journal names, and article titles, that sound legitimate and scholarly but are often made up. Sometimes it fails to attribute direct quotes to the person who said or wrote them. Other generative AI platforms have been accused of plagiarizing, failing to cite sources properly, or producing “hallucinations” that fill information gaps with inaccurate statements or outputs. Still, these may be growing pains rather than chronic illnesses.

That’s according to Academy of Management Scholar Herman Aguinis of the George Washington University School of Business, who said that every new technology presents ethical challenges in both its production and its use; similar problems arose in the early days of the Internet. He noted that generative AI platforms such as ChatGPT have largely corrected the problem of hallucinations.

“You can Google something and copy and paste something from the search results, but plagiarism has been around for a long time…you could grab a paper book from the library and copy a whole paragraph from it,” Aguinis said. “AI is making these possibilities, and the potential for cheating in these ways, much easier and more straightforward.

“ChatGPT 3.0 was doing that, but the GPT-4o version not only gives you the right source name but also a quote or sentence from the source—it is absolutely incredible, and that’s going to get even better,” he said.

As for using generative AI in the workplace, Aguinis believes that leaders have to create sensible policies.

“One sensible blanket policy that applies across industries, jobs, and tasks is to openly and honestly describe exactly how you use AI for your specific task: essentially, user beware,” Aguinis said. “It’s really important to offer an explanation, qualification, or warning about how you used AI.

“Second, the issue of AI output verification is absolutely key—you should verify the accuracy and appropriateness of the information that you received through ChatGPT,” he said. “Those are the guardrails. While this is evolving and we’re immersed in it, people shouldn’t be too scared about it, because every time we’ve lived through these technological advancements, there were all these alarms going off that there were going to be all kinds of problems.

“Every technology can be used, abused, and misused, so there’s nothing new about AI—we need to think about verifying the information and being open and honest about how we use it.”

Author

  • Daniel Butcher is a writer and the Managing Editor of AOM Today at the Academy of Management (AOM). Previously, he was a writer and the Finance Editor for Strategic Finance magazine and Management Accounting Quarterly, a scholarly journal, at the Institute of Management Accountants (IMA). Prior to that, he worked as a writer/editor at The Financial Times, including daily FT sister publications Ignites and FundFire; Crain Communications’ InvestmentNews and Crain’s Wealth; eFinancialCareers; and Arizent’s Financial Planning, Re:Invent|Wealth, On Wall Street, Bank Investment Consultant, and Money Management Executive. He earned his bachelor’s degree from the University of Colorado Boulder and his master’s degree from New York University. You can reach him at dbutcher@aom.org or via LinkedIn.
