AI for Cyberpeace

Addressing the Risk of Autonomous Cyberattacks

We investigate how AI acts as a double-edged sword in cyberspace: it can be a tool for defenders or a weapon for threat actors. We document how AI is disrupting the cyber threat landscape: on the one hand it augments criminal capability, and on the other it creates the unacceptable risk of autonomous cyberattacks.

  • We call on governments to lead international efforts to prevent autonomous cyberattacks.
  • We use AI to investigate its malicious use: under the Data Practice Accelerator Program, supported by the Patrick J. McGovern Foundation, we have advanced our AI capabilities for publicly tracing cyberattacks.

Using AI to Increase Digital Resilience

AI must be an equitable opportunity for underserved organisations globally.
The CyberPeace Institute supports public interest organisations (e.g. small critical infrastructure organisations) in leveraging AI for digital resilience.

We help our beneficiaries to:

  • Transform data into actionable insights.
  • Create and implement AI strategies, including an approach to the responsible use of AI, through dedicated capacity building.
  • Train their staff, boards and management to navigate the intersection between AI and cybersecurity.

Engaging the AI Community

We engage actively with a community of like-minded organisations and individuals dedicated to helping investigate, tackle and prevent harm from the malicious use of AI. We work with industry, civil society and academia to ensure that AI solutions benefit human rights defenders and frontline workers in their efforts to protect the most vulnerable.

Our Resources

The AI Global Regulation Monitor

The AI Global Regulation Monitor tracks noteworthy AI regulation and governance efforts worldwide. You can find more in-depth analyses of cyber-related policies in our CyberPeace Watch programme.

Our AI Training for Public Interest Organisations

The disruption caused by AI introduces new threats to organisations, both external and internal. As part of our mission to protect the most vulnerable in cyberspace, we offer comprehensive capacity building on the intersection of AI and cybersecurity for the boards, management and staff of public interest organisations and foundations. This training equips organisations with vital tools to enhance and safeguard their missions.

What is AI? How does it work? How is it disrupting digital resilience and cybersecurity? How is it impacting the work of public interest organisations?

What AI-powered tools are available? How can these be implemented most effectively into an organisation?

What are the risks and challenges of AI? How can organisations implement AI responsibly? How can they create a responsible AI strategy and operational guidelines?

Our Action

17 April 2024
Blog post
Our policy to ensure the responsible use of AI
Guidelines for the Responsible Use of AI
11 April 2024
Patrick J. McGovern Foundation’s Data to Safeguard Human Rights
Towards an AI-Powered Future in Support of Human Rights
26 - 28 March 2024
Forum InCyber 2024
How will 79 elections in 2024 impact the online community

In 2024, around 80 countries will be holding political elections, with social media platforms being the main communication channel to target voters.

How can AI help in detecting and preventing misinformation so that voters get the information they need?

19 October 2023
Blog post
Our Journey Developing a Targeted Approach to Responsible Use of AI
20 September 2023
Patrick J. McGovern Foundation’s Data to Safeguard Human Rights
Developing a machine learning project in support of human rights
10 July 2023
Blog post
Future(s) of Cyberpeace: Generative AI and Cybersecurity

Ready to improve AI capabilities and awareness in your organisation?