How are researchers responding to AI? 

23 May 2024
4 min read

Most academic researchers and research authors say they are using artificial intelligence (AI) tools in their research practice, despite concerns over the erosion of critical thinking skills, threats to intellectual property (IP) rights, and mistrust of AI providers.

We recently surveyed over 2,000 researchers across geographies, subject disciplines (Humanities, STM, and Social Sciences), and career stages to hear directly from the research community about how they are reacting to and using AI in their work.

The results reveal the key considerations in researchers’ decisions to engage with AI, including what excites and concerns them, and how they already use, or plan to use, the tools available to them.

At OUP, we are committed to supporting academic researchers in harnessing AI to improve research outcomes, and to protecting their role by working closely with technology providers to define clear principles for future collaboration.

Speaking about the research, David Clark, Managing Director of our Academic division, said:

“Throughout OUP’s history we have embraced new opportunities offered by technological advancement—in line with our mission to publish rigorous, high quality academic resources—responding to the needs of the academic community, while ensuring that the scholarship we publish remains valued and protected.

“This research will help us to understand how researchers are thinking about AI and its use in their work. As these technologies continue to rapidly develop, our priority is in working with research authors and the broader research community to set clear standards for how that evolution should take place.”

Our key findings include:

1. Majority say they have used some form of AI

76% said they have used an AI tool in their research, with machine translation and chatbots being the most popular.

2. Over 2 in 3 have felt the benefits of using AI

Of the respondents who have already used AI in their research, 67% feel it has benefitted them in some way. 27% of all respondents are excited about the prospects of AI for academic research, with data analysis and surfacing content seen as ways in which it could improve research outcomes.

“Because I am not a native speaker, I receive writing and language editing benefits”

3. Vast majority say they are suspicious of AI companies

Only 8% trust that AI companies will not use their research data without permission, and just 6% said they trust companies to meet their data privacy and security needs.

4. Concerns about AI affecting research quality

Half (50%) said they are concerned about the impact that AI could have on academic research in the future. While 37% agreed that AI would save researchers time, only 19% said it would improve the overall quality of their work.

5. Fears about intellectual property

3 in 5 respondents feel that the use of AI in research could undermine intellectual property and result in authors not being appropriately recognized for the use of their work.

“I’m worried about the threat to IP, the increasing likelihood of inaccurate information, and poor custody of sources”

6. AI may reduce critical thinking skills

Across all subject disciplines, 1 in 3 (32%) say they are concerned that AI will negatively affect researchers’ skills, and 25% feel that AI technology reduces the need for critical thinking.

“Researchers coming up with AI may never learn basic skills and trust AI outputs too much”

7. Generational differences in researchers’ opinions of AI

A quarter (25%) of those in the early stages of their careers report sceptical or challenging views of AI, a proportion that falls to 19% among respondents later in their careers. Early career researchers also hold more polarized opinions on AI, with fewer expressing neutral views than their later career counterparts.

8. Considering the implications of AI use is important

69% feel it is important to fully assess the implications of AI tools before using them in their own research. Just over 1 in 10 (12%) said they would not look for guidance on using AI in their work before doing so.

9. Confusion around available guidance

46% said that the institution they work at does not have a policy on AI usage in their work, and a further 26% said they did not know whether a policy exists.

10. More than half would look to academic societies for AI guidance

54% say they would look to academic societies for guidance on AI, with 43% saying they would look to their own institution, and 27% to publishers.

Speaking on OUP’s commitment to supporting the research community, David Clark added:

“This is a fast-moving, complex area—but we strongly believe that publishers like OUP are well positioned to act as a bridge between research authors and tech providers, making a real difference as these tools continue to evolve.

“We are actively working with companies developing LLMs, exploring options for their responsible development and use, not only to improve research outcomes, but also to recognize the vital role that researchers have—and must continue to have—in an AI-enabled world.”

From June 2024, we will be hosting a series of webinars to delve deeper into the topics covered in our research, bringing together academic research authors from across the world to discuss how those in the research community can work together to build a sustainable, AI-enabled future.

You can read more on this topic from David Clark in his article for Times Higher Education on how academic publishers and AI companies do not need to be enemies, and his blog on how publishers are uniquely positioned to advocate for the protection of researchers and their research within LLMs.
