Panel: 165
Integrating AI, Open Science, and Research Security into Responsible Conduct of Research Policies and Practice to Promote Research Integrity
Abstract:
This panel explores the intersection of artificial intelligence (AI), open science, research assessment, and research security in promoting ethical and responsible research practices that help foster public trust in science. Funders, research performing organizations, publishers, and researchers all share responsibility for ensuring integrity, transparency, and accountability. Speakers will address how AI is transforming methodologies and reporting, the benefits and challenges of open science, and the importance of safeguarding intellectual property and data. The discussion will highlight tensions between research integrity and security. Serving as a prelude to the 2026 World Conference on Research Integrity, the session aims to inspire dialogue.
Summary of Conversations
The principle that research data should be “as open as possible and as closed as necessary” is key to navigating the intersection of open science and research security. A central theme was that the core elements needed for strong research security—including capacity building, formalized training, and dedicated funding—are precisely the same elements required for effective open science implementation. Initially viewed as conflicting objectives, the two concepts are now seen as potentially complementary ways to strengthen national systems of research.
Concerns over dual-use research evolved from focusing on risky partnerships to the government’s pivot toward actively embracing and funding defense-related technologies. The challenge of maintaining a trustworthy scholarly record was highlighted, as artificial intelligence facilitates the creation of sophisticated fake papers and fabricated data. A mathematical model was introduced to analyze global cooperation, showing that collaboration networks naturally tighten into exclusive “clubs” when research is both highly dual-use and susceptible to “choke-points,” a shift observed in core AI research after specific trade policy interventions. The difficulty of regulating non-choke-pointable research outputs, such as software, was also discussed.
Take Away Messages/Current Status of Challenges
- Immature Implementation of Core Practices: Both research security and open science initiatives are widely implemented in an underdeveloped, informal, “on the side of our desks” manner, lacking the necessary formal capacity, expertise, and sustained funding.
- Navigating the Dual-Use Pivot: The research community faces complexity as the government embraces dual-use and defense-related research, creating a tension between the drive for open science and the necessity of research security.
- Erosion of Scholarly Trust: The trustworthiness of the scientific record is under significant threat from AI-generated content, including sophisticated fake articles and fabricated data, which requires publishers to perform resource-intensive gatekeeping functions.
- Policy Oversimplification: Current security policies often rely on an unsophisticated, binary “open or closed” framework, which is inadequate for managing collaboration in a complex, multi-polar world and risks leading to economically irrational or scientifically detrimental decisions.
- Regulatory Difficulty for Software: Research software is inherently hard to regulate or “choke-point” because it is easily shared, making attempts at regulation impractical and shifting control efforts toward high-performance computing hardware.
- Defining AI for Governance: For effective governance of AI, definitions need to focus on what an AI tool does rather than on what the tool is. The recent tendency to broaden the definition of AI to encompass many types of algorithms makes policymaking about AI difficult.
- Formation of International “Clubs”: Geopolitical and trade policies implemented by allied nations (e.g., export controls on technology) directly cause rapid shifts in global collaboration networks, forcing researchers into tighter, exclusive “clubs” in technologically sensitive fields.
- Academic Risk Aversion: Conflicting and complex policies across different countries and institutions often lead Canadian researchers to adopt a reactive, risk-averse approach of simply avoiding collaborations with certain international partners, which unnecessarily blocks projects and limits open science.
Recommendations/Next Steps
- Establish a Modern Research Mindset: Researchers and institutions must cultivate a sophisticated understanding of the risks and benefits of their research.
- Formalize and Fund Capacity Building: Implement formal and funded programs for knowledge sharing, capacity building, and creating expertise to mature the national-level implementation of both research security and open science policies.
- Develop Context-Aware Collaboration Policies: Adopt more sophisticated policies that move past simple blacklists, using factors like dual-use potential and choke-point risk to make intelligent, proportionate decisions about research partnerships in a “polylateral” world.
- Create a National Data Governance Hub: Establish a national focal point or convening body to standardize data governance, promote community building, and facilitate thinking about standards for the entire data domain.
- Enhance Integrity of the Scholarly Record: Publishers must continue to evolve and enforce best practices, including requiring explicit author declarations on AI usage, to safeguard the credibility and security of the scientific knowledge base.
- Standardize Research Software Recognition: Work toward developing systems and standards, potentially modeled after initiatives like the Open Source Programs Office (OSPO), to formalize the depositing, citing, and granting of scholarly credit for research software output.
- Implement Continuous Education and Training: Establish formal, ongoing education for academics and graduate students to keep them current with the rapidly changing policies, mandates, and risk profiles in both open science and research security.
- Prioritize Science Diplomacy: Employ science diplomacy as a strategy to enable open science by focusing on collaboration in “good faith,” ensuring security measures work to facilitate rather than obstruct international research projects.
* This summary was generated with the assistance of AI tools.


