AI for Cyberpeace

The Programme

Artificial intelligence (AI) has profoundly disrupted our world in recent years. Recognizing the impact AI is having on society, our AI for Cyberpeace programme monitors, investigates, raises awareness of, and harnesses the nexus between AI and digital resilience. Grounded in our Cyberpeace Disruption framework, the programme pursues five objectives.

Anticipating AI-enhanced Cyber Threats

We anticipate and forecast how AI is changing the cyber threat landscape. 

Building on our research and investigation capabilities, we examine how AI acts as a double-edged sword in cyberspace: the same characteristics that make AI technologies powerful can strengthen both threat actors and cybersecurity experts. Examples include AI-powered malware that can learn from its environment, fully AI-generated malware code, and AI-generated phishing and disinformation campaigns that are harder to distinguish from genuine messages and real news.

While near-human text generation is one of AI's biggest advantages, threat actors are using this capability to strengthen their attack vectors. For example, in November 2023, a threat actor launched a malicious spam service powered by ChatGPT that was able to bypass traditional spam and phishing filters and achieve much higher success rates through randomized text and human-like emails.

Moreover, as more organizations adopt AI to achieve their missions, we are monitoring cyber threats against AI tools and services. Attacks designed to manipulate AI systems or steal sensitive training and user data create new vulnerabilities for organizations using AI.

Privacy attacks against AI systems are attempts to learn sensitive information about the users of an AI tool. For example, during a nine-hour window in March 2023, it was possible for some ChatGPT users to see another user’s first and last name, email address, payment address, credit card information, and prompts.
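
The intuition behind one class of privacy attack, membership inference, can be sketched in a few lines: an overfit model tends to be more confident on the records it was trained on, so an attacker can guess whether a given record was in the training set by thresholding that confidence. The sketch below is purely illustrative; the data, model, and threshold are our own assumptions, not details of any incident described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
train_X, train_y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
other_X, other_y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)

# An unconstrained decision tree memorizes its training data...
model = DecisionTreeClassifier().fit(train_X, train_y)

def true_label_confidence(X, y):
    # Probability the model assigns to each record's true label.
    return model.predict_proba(X)[np.arange(len(y)), y]

# ...so thresholding that confidence separates members from non-members.
threshold = 0.9
members = (true_label_confidence(train_X, train_y) > threshold).mean()
non_members = (true_label_confidence(other_X, other_y) > threshold).mean()
print(f"flagged as 'in training set': {members:.0%} of members, "
      f"{non_members:.0%} of non-members")
```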

Our work in foresight and anticipation of AI in cybersecurity is guided by a commitment to cyberpeace and the protection of all individuals in the digital age. By staying ahead of the curve in understanding and mitigating the risks associated with AI, we aim to build a safer, more secure cyberspace.

Find out more about our foresight efforts and read our blog on generative AI in cybersecurity.

Monitoring Global AI Regulation

We monitor and analyze AI regulation around the world, with a specific focus on cybersecurity.

The following non-exhaustive list is intended to provide an overview of noteworthy AI regulation and governance efforts we are monitoring. You can find more of our in-depth analyses on cyber-related policies in our CyberPeace Watch programme.

Strengthening NGOs' Digital Strategy

We are accelerating NGOs’ responsible use of AI. Based on our own journey to responsible AI use, we are developing AI capacity building for NGOs to guide them through creating and implementing internal AI strategies.

Our Journey to Responsible Use of AI: A Blueprint for NGOs

What does the journey to a responsible approach to AI and operational internal guidelines look like for an NGO? At the Institute, this ongoing journey has been a valuable learning experience, and one we are eager to share both with those currently undertaking this internal process and with those who wish to begin but do not know where to start.

Our journey has led us to establish five principles, specific to our organization, to guide our work and ensure we use AI responsibly. Now, in close collaboration with our teams, we are operationalizing these principles into internal guidelines.

You can find out more about our journey, our approach, and our policies for the responsible use of AI below:

Discover our journey to a responsible use of AI

Capacity Building: AI Skilling for NGOs

The disruption of AI is introducing new threats to organizations from both outside and inside. As part of our mission to protect the most vulnerable in cyberspace, we offer comprehensive capacity building at the intersection of AI and cybersecurity for NGO and foundation boards, management, and staff, designed to equip organizations with the tools vital to enhancing and safeguarding their missions.

What is AI? How does it work? How is it disrupting digital resilience and cybersecurity? How is it impacting the work of NGOs?

What AI-powered tools are available? How can they be implemented most effectively within an organization?

What are the risks and challenges of AI? How can organizations implement AI responsibly? How can they create a responsible AI strategy and operational guidelines?

Ready to improve AI capabilities and awareness in your organization?
We can help you find the training adapted to your organization and your needs.

Securing NGOs' Digital Operations

As we work on increasing our internal AI and ML capabilities, we are exploring different AI/ML methods to enhance our core mission of supporting the most vulnerable in cyberspace. Specifically, we are examining how we can use AI/ML to build a recommender system for the CyberPeace Builders Program, our unique initiative matching NGOs with corporate volunteers, as sketched below.
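
As an illustration of what such a matching system could look like, the sketch below uses a simple content-based approach: NGO needs and volunteer skills are embedded in a shared TF-IDF space and ranked by cosine similarity. All profiles and scores here are illustrative assumptions, not the Builders Program's actual data or method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical free-text profiles for NGO needs and volunteer skills.
ngo_needs = [
    "phishing awareness training for staff",
    "website vulnerability assessment",
]
volunteer_skills = [
    "penetration testing and web security audits",
    "security awareness and anti-phishing training",
    "cloud infrastructure hardening",
]

# Embed both sides in the same TF-IDF space and rank volunteers per need.
vectorizer = TfidfVectorizer().fit(ngo_needs + volunteer_skills)
scores = cosine_similarity(vectorizer.transform(ngo_needs),
                           vectorizer.transform(volunteer_skills))

for i, need in enumerate(ngo_needs):
    best = scores[i].argmax()
    print(f"{need!r} -> volunteer {best} (score {scores[i][best]:.2f})")
```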

Leveraging AI to Empower NGOs

Data is at the heart of our work. Our CyberIncident Tracers provide independent, data-driven insights on the cyber threat landscape in the healthcare sector and in times of conflict. As we seek to enhance and expand our data capabilities, we are looking to AI to upgrade our data collection and processing. By ingesting more data and processing it faster, we can serve vulnerable communities in cyberspace more effectively.

In 2023, under the Data Practice Accelerator Program, supported by the Patrick J. McGovern Foundation, we advanced our ML capabilities and designed an improved, AI-powered data pipeline capable of automating the integration of unstructured data into our database. In 2024, we aim to increase our AI capabilities and develop an AI-powered CyberIncident Tracer for NGOs.

AI/ML and Data Science Capabilities

As we navigate the rapidly evolving AI landscape, we are working to develop and expand our in-house AI-related technical capabilities. Through dynamic collaborations with our partners and donors, we enhance these capabilities, staying at the forefront of innovation. We are always eager to explore new partnerships with organizations looking to leverage AI and technology for transformative impact.

Under the 2023 Data Practice Accelerator Program, we have been advancing our ML capabilities to automate the analysis of vast, unstructured data. We’ve been focusing on developing a comprehensive pipeline for data storage, extraction, and named entity recognition, integrating technologies like Airflow, Kubernetes, and Google’s Vertex AI.
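
As an illustration of the kind of orchestration this involves, the sketch below defines a minimal Apache Airflow DAG with the three stages named above: extraction, named entity recognition, and loading. The DAG name, schedule, and function bodies are illustrative placeholders (assuming Airflow 2.x), not our production pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_documents():
    # Assumption: pull unstructured incident reports from object storage.
    pass

def run_ner():
    # Assumption: tag threat actors, malware families, and CVEs in the
    # extracted text with a fine-tuned NER model (e.g. hosted on Vertex AI).
    pass

def load_to_database():
    # Assumption: write the structured entities into the incident database.
    pass

with DAG(
    dag_id="unstructured_data_pipeline",  # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_documents)
    ner = PythonOperator(task_id="ner", python_callable=run_ner)
    load = PythonOperator(task_id="load", python_callable=load_to_database)

    # Run the three stages in sequence.
    extract >> ner >> load
```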

Find out more here.

Fine-Tuned NER for Cybersecurity

As part of the 2023 Data Practice Accelerator Program, we fine-tuned a natural language model for Named Entity Recognition (NER) on a dataset of cyber threat intelligence terms. The fine-tuned model can extract the names and terms relevant to our cyber threat analysis work.
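
A model published on the Hugging Face Hub can be tested in just a few lines with the transformers library, as in the sketch below. The model identifier and the sample sentence are placeholders; substitute the actual model linked below.

```python
from transformers import pipeline

# Placeholder model id: replace with the Institute's published model.
ner = pipeline("ner",
               model="cyberpeace-institute/cti-ner",
               aggregation_strategy="simple")

text = "The LockBit group exploited CVE-2023-4966 to target a healthcare provider."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.2f}")
```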

You can find and test our model here.