AI as a tool for cyberpeace
Addressing the Risk of Autonomous Cyberattacks
We investigate how AI acts as a double-edged sword in cyberspace: it can be a tool for defenders or a weapon for threat actors. We document how AI disrupts the cyber threat landscape, on the one hand augmenting criminal capability and, on the other, creating the unacceptable risk of autonomous cyberattacks.
- We call on governments to lead international efforts to prevent autonomous cyberattacks.
- We use AI to investigate its malicious use: under the Data Practice Accelerator Program, supported by the Patrick McGovern Foundation, we have advanced our AI capabilities for publicly tracing cyberattacks.
Using AI to Increase Digital Resilience
AI must be an equitable opportunity for underserved organisations worldwide.
The CyberPeace Institute supports public interest organisations (e.g. small critical infrastructure operators) in leveraging AI for digital resilience.
We help our beneficiaries to:
- Transform data into actionable insights.
- Build AI capacity to create and implement AI strategies grounded in the responsible use of AI.
- Train their staff, boards and management to navigate the intersection between AI and cybersecurity.
Engaging the AI Community
We engage actively with a community of like-minded organisations and individuals dedicated to helping investigate, tackle and prevent harm from the malicious use of AI. We work with industry, civil society, and academia to ensure that AI solutions benefit human rights defenders and frontline workers in their efforts to protect the most vulnerable.
Our Resources
The AI Global Regulation Monitor
The following list includes examples of noteworthy AI regulation and governance efforts we are monitoring. You can find more in-depth analyses of cyber-related policies in our CyberPeace Watch programme.
European Union
- General Data Protection Regulation (2016)
- Digital Services Act (2020)
- Digital Markets Act (2022)
- EU AI Act
OECD
China
- New Generation Artificial Intelligence Development Plan (2017) (original / EN translation)
- Measures for the Management of Generative Artificial Intelligence Services (original / EN translation) (2023)
Singapore
African Union
United Nations
- UNESCO’s ‘Recommendation on the Ethics of Artificial Intelligence’, emphasizing human rights and oversight (2021)
- The UN’s AI Advisory Body published its first interim report, ‘Governing AI for Humanity’ (2023)
United States of America
Our AI Training for Public Interest Organisations
The disruption caused by AI introduces new threats to organisations, both external and internal. As part of our mission to protect the most vulnerable in cyberspace, we offer comprehensive capacity building focused on the intersection of AI and cybersecurity for the boards, management, and staff of public interest organisations and foundations. This training equips organisations with vital tools to enhance and safeguard their missions.
1. AI and Cybersecurity
What is AI? How does it work? How is it disrupting digital resilience and cybersecurity? How is it impacting the work of public interest organisations?
2. AI Hard-Skills - Efficient Use of AI in Organisations
What AI-powered tools are available? How can these be implemented most effectively into an organisation?
3. AI Soft-Skills - Responsible Use of AI
What are the risks and challenges of AI? How can organisations implement AI responsibly? How can they create a responsible AI strategy and operational guidelines?
Our Action
Data to Safeguard Human Rights
In 2024, around 80 countries will hold political elections, with social media platforms serving as the main communication channel for reaching voters.
How can AI help in detecting and preventing misinformation so that voters get the information they need?