CIENS Artificial Intelligence Network: Workshop on ethical use of AI in research

  • 15 October 2025

Background

On October 15th, the CIENS Artificial Intelligence Network (CAIN) hosted a workshop on the ethical use of AI in research, exploring the complex and evolving landscape of AI in research. While AI is transforming many fields, its use in scientific work raises important questions. Together with researchers, ethicists, and editors, we asked:
• How should AI be used in research?
• Where do we draw the line between efficiency and scientific integrity?
• What guidance and good practices can help us navigate this evolving field?

Thomas Østerhaug from the Norwegian National Committee for Research Ethics in Science and Technology (NENT) spoke about generative AI and research ethics. He shared advice for both individual researchers and research organisations, which can be found on pages 29–30 of the attached presentation. He also highlighted useful resources for those interested in learning more:
• Living guidelines on the responsible use of AI in research (EU) – "living" means it is continuously updated.
• Guidelines for research ethics in science and technology (NENT).

Thomas also recommended an upcoming webinar, "Uten hype: ansvarlig KI i praksis" ("Without hype: responsible AI in practice"), on November 12th for those of you who couldn't attend our workshop. You can sign up here!

Ola Nordal from Store Norske Leksikon (SNL) shared an editorial perspective on how AI affects encyclopedia production. SNL has been producing Norwegian encyclopedias since 1907 and has been open access and online since 2000, with content written by 1300+ different topic experts.

Ola emphasised the importance of maintaining user trust. SNL enforces a strict yet balanced policy for its topic experts: AI-generated content is not allowed in the encyclopedia, but AI tools may be used for meta-writing tasks such as structuring, outlining, or grammar checking. The key distinction is between AI-generated content and AI used as a tool and writing assistant.

The event wrapped up with a panel discussion moderated by Maximilian Nawrath from NIVA (Norsk institutt for vannforskning), featuring Yuri Kasahara from NIBR (By- og regionforskningsinstituttet), Thomas Østerhaug, and Ola Nordal. The discussion covered the use of AI in peer review, which is generally discouraged, versus its acceptable use for refining and polishing text. The panel also stressed transparency in AI usage, the importance of understanding how AI systems function, and the sustainability challenges tied to the high energy consumption of large AI models such as GPT.

Best regards,
Maximilian Nawrath (NIVA), Taheera Ahmed (NINA) and Yuri Kasahara (NIBR)