Datasphere Initiative Foundation

Address: Route de Chêne 30, 1211 Genève 6, Switzerland

Website: https://www.thedatasphere.org/

The Datasphere Initiative is a global network of stakeholders with a mission to develop agile frameworks to responsibly unlock the value of data for all.

It has been incubated by the Internet & Jurisdiction Policy Network – a multistakeholder organisation addressing the tension between the cross-border nature of the internet and national jurisdictions.

https://www.youtube.com/watch?v=DqtnexKnFKI&list=PLa6vw8V5aV_v8kCoNqSylSRWx_yRNTe7J&index=7

The Datasphere’s main objectives are the following:

  • Bring a new, holistic, and positive approach to the governance of the Datasphere
  • Provide a platform to unlock the social and economic value of data access and sharing
  • Improve coordination and accelerate the adoption of concrete proposals to overcome the current tensions and polarisation around data
  • Produce evidence-based analysis on data policy
  • Catalyse human-centric technical, policy, and institutional innovations.

Digital activities

The Datasphere Initiative works on three programmes that promote transnational cooperation and mission-oriented partnerships: Dialogue, Intelligence, and the Lab.

Digital activities are dedicated to increasing awareness, building capacity, and identifying real-world challenges that can be addressed by data and innovative data governance frameworks.

Digital policy issues

Data governance

The Datasphere Initiative works to bring about the fundamental perspective shift needed to govern data for the well-being of all, building innovative data governance frameworks that are inclusive, agile, and scalable.

The Intelligence Program gathers evidence on concrete challenges, identifies innovative data governance practices, and translates complex technical data issues into actionable outcomes. The main objectives of this programme are mapping the relevant actors, policy processes, innovative approaches, concepts, and trends on data governance and developing holistic analytical and measurement frameworks for the Datasphere. The Intelligence Program includes Framing the Datasphere, Datasphere Observatory, Mapping the Datasphere, Sectorial Deepdives, Datasphere Reviews, and Datasphere Toolkit(s).

The Datasphere Governance Atlas maps organisations from around the world that work to address the multi-dimensional topic of data governance. The analysis identifies technical and normative approaches to data governance and offers insights into how the data governance environment is evolving across regions and sectors.

The Lab Program creates a collective space to showcase and experiment with innovations in governance frameworks and technical solutions advancing the vision of a Datasphere for all. Cross-border Datasphere Sandboxes, the Framework Convention for the Datasphere, Protocol(s) for the Datasphere, and Datasphere Hackathons are an important segment of this programme. The programme also plans to launch a Global Sandboxes for Data Forum in 2024 to spur the development and implementation of innovative data governance frameworks across borders through a multistakeholder process, to which local, regional, and global experts will be invited to contribute and learn from each other. As the first regional application of the Global Sandboxes Forum, the Datasphere Initiative has launched the Africa Forum on Sandboxes for Data.

Capacity development

The objectives of the Dialogue Program are to inform (increase awareness of data opportunities and challenges across regions), advise (develop capacity-building programmes with a regional partner), and innovate (identify real-world challenges that can be addressed by data and innovative data governance frameworks). The Datasphere Dialogues seek to engage systematically with stakeholders in Europe and North America, but will place a special focus on the Global South and sustainability challenges, supporting local and regional empowerment and leadership on data governance. In addition to global dialogues, the programme consists of structured regional efforts in Africa, Latin America and the Caribbean, and Asia. The Dialogue Program includes Datasphere Dialogues, Datasphere Academy, Datasphere Challenges, and the Datasphere Summit.

Digital tools

Mapping the Datasphere seeks to investigate paths to enable the visualisation of the Datasphere as a whole and its different dimensions, building on datasets of personal and/or non-personal data in collaboration with data engineers and policy practitioners. The ultimate objective is to develop an interactive Datasphere Observatory platform where the Datasphere and the repository of collected information can be visualised in their different layers and segments, making the different dataspheres tangible for users and decision-makers.

Future of meetings

More information is available on the event page.

Social media channels

Instagram @youth4data

LinkedIn @datasphere-initiative

Medium @thedatasphere

TikTok @youth4data

X @thedatasphere

YouTube @The Datasphere

HealthAI: The Global Agency for Responsible AI in Health

Acronym: HealthAI

Established: 2023

Address: Chemin Eugène-Rigot 2A, 1202 Geneva, Switzerland

Website: https://www.healthai.agency

HealthAI – The Global Agency for Responsible AI in Health – is a Geneva-based non-profit organisation with the mission of advancing the development and adoption of Responsible AI solutions in health through the collaborative implementation of regulatory mechanisms and global standards.

HealthAI envisions a world where artificial intelligence (AI) produces equitable and inclusive improvements in health and well-being for all individuals and communities.

As the premier implementing partner to ensure global standards for Responsible AI in health are actively applied, HealthAI works with countries, normative agencies, the private sector, and other stakeholders to build national and regional regulatory capacity so that countries can actively validate AI technologies, reducing both risks and long-term costs of AI-enabled health.

With a network of over 45 partners, HealthAI’s work is rooted in three core principles, namely cultivating trust, catalysing innovation, and centring equity.


An Organisational Refresh:

Following four years of operation under The International Digital Health and AI Research Collaborative (I-DAIR), we have transformed into HealthAI: The Global Agency for Responsible AI in Health.

Digital policy issues

HealthAI's new strategy

AI and other emerging technologies have immense potential to improve health and well-being, but they also bring a unique set of risks and challenges that must be addressed to safeguard individuals and communities from potential harms. Globally, a lack of effective governance is increasing the risk and hindering the adoption of Responsible AI solutions towards better health outcomes. Strong, responsive regulatory mechanisms are required to establish AI systems’ safety and effectiveness and build trust for the long-term acceptability and success of AI-enabled progress in the health sector.

Some countries, mainly those with the highest gross domestic product (GDP) and the most advanced technology sectors, have begun integrating AI regulation into governance structures and national regulations. Most countries have only just begun considering the regulation of AI in general terms, let alone within the context of health. This risks deepening inequity in both access and outcome between early adopter countries and countries that do not have the resources or flexibility to match the pace of technological innovation.

Global efforts addressing the need for AI regulation through the harmonisation of existing standards are critical, but require collaborative partners who can support the implementation of the resulting standards and recommendations at a local level. With the new strategy for 2024-2026, HealthAI positions itself as a premier implementing partner for countries, normative agencies, the private sector, and other stakeholders to ensure global standards of Responsible AI in health are actively applied in the push towards improved health and well-being outcomes for all in alignment with the SDGs.

HealthAI’s Core Outputs

To achieve our mission, HealthAI’s work spans four key areas (Figure 1): 

i) Building and certifying national and regional validation mechanisms on Responsible AI in health:

  • Establish in-country, government-led regulatory mechanisms by implementing global standards and guidance set by the World Health Organization (WHO) and others at the country level.
  • Support the implementation of existing auditing tools, and provide guidance on the use of data for AI solutions validation.

ii) Establishing a global regulatory network for knowledge sharing and early warning of adverse events:

  • Facilitate knowledge sharing so as to streamline the certification of the same technology and to identify AI solutions that require refinement or re-evaluation.
  • Provide rapid notification of adverse events arising from AI-driven health solutions.

iii) Creating a global public repository of validated AI solutions for health:

  • Allow countries to evaluate solution options against local health needs.
  • Surface unmet health needs as insights and inspiration for technology developers.

iv) Delivering advisory support on policies and regulations:

  • Provide technical guidance and insights into global trends and best practices so as to assist public and private stakeholders in developing effective and contextually relevant strategies, policies, and regulations.
  • Democratise AI for health policy-making through diverse stakeholder and citizen engagement to cultivate trust and improve inclusiveness.

Figure 1 – Responsible AI Solution for Health

The outputs will lead to the following outcomes. Stronger policies, regulations, and institutions will enable the effective governance and validation of AI and other emerging technologies, reducing both the risks and long-term costs of AI-enabled health. In the long term, countries will be able to identify validated AI solutions with greater certainty in their efficacy to meet local health needs, while private sector partners will have clarity about regulatory requirements and a better understanding of AI use in health systems and services. 

HealthAI’s Impact

HealthAI is dedicated to contributing to enhanced health and well-being outcomes for all in alignment with the SDGs. HealthAI aims to achieve this by facilitating increased access to safe, high-quality, effective, and equitable AI solutions. This involves ensuring that AI solutions are not only safe for use but also comply with rigorous quality standards, delivering the intended health outcomes or system improvements.

HealthAI commits to providing information on market access authorisation and reimbursement processes while supporting an early-warning mechanism to alert countries of adverse events. Through streamlined information sharing between countries and the establishment of a global repository of validated AI solutions, the organisation seeks to propagate the availability of proven Responsible AI solutions. Furthermore, HealthAI envisions a positive impact on government revenue from regulatory activities, generating new sources of income for regulatory agencies and government budgets. This financial support is crucial for the sustained funding of regulatory mechanisms and additional investment capacity, ultimately accelerating approval processes across countries and leading to cost savings and bureaucratic streamlining.

Finally, by fostering an ecosystem that ensures compliance with internationally defined Responsible AI standards, protects national data sovereignty, and supports local validation processes that enable feedback from civil society, HealthAI’s work will increase trust, investment, and innovation in Responsible AI solutions for health.

Definition of Responsible AI

Responsible AI is characterised by AI technologies that align with established standards and ethical principles, prioritising human-centric attributes. In the context of HealthAI, Responsible AI is defined as AI solutions that exhibit ethical, inclusive, rights-respecting, and sustainable qualities. These attributes encompass a commitment to protecting and respecting human autonomy, promoting well-being and safety, ensuring technical robustness, safeguarding privacy and data, adhering to laws and ethics, prioritising transparency and explainability, maintaining responsibility and accountability, fostering inclusivity and equity, upholding diversity and non-discrimination, and considering societal and environmental well-being. HealthAI applies these principles across all facets of AI technologies, from technical development and data use to technology implementation and its ultimate impact. This comprehensive definition is drawn from reputable sources, including WHO, the International Development Research Centre’s AI for Global Health Initiative, the European Commission’s High-Level Expert Group on AI, and pertinent journal publications on the ethics and governance of artificial intelligence in health.

Social media channels

LinkedIn @healthaiagency

X @thehealthai

YouTube @I-DAIR

DCAF – Geneva Centre for Security Sector Governance

Acronym: DCAF

Established: 2000

Address: Maison de la Paix, Chemin Eugène-Rigot 2D, 1211 Geneva, Switzerland

Website: https://www.dcaf.ch/

DCAF is dedicated to improving the security of states and their people within a framework of democratic governance, the rule of law, respect for human rights, and gender equality. Since its founding in 2000, DCAF has contributed to making peace and development more sustainable by assisting partner states, and international actors supporting these states, to improve the governance of their security sector through inclusive and participatory reforms. It creates innovative knowledge products, promotes norms and good practices, provides legal and policy advice and supports capacity‐building of both state and non‐state security sector stakeholders.

Digital activities

Cyberspace and cybersecurity have numerous implications for security provision, management, and oversight, which is why DCAF engages with these topics in its work. DCAF has implemented a cycle of policy projects to develop new norms and good practices in cyberspace. At the operational level, cybersecurity governance has become a prominent part of security sector reform (SSR) programming.

Digital policy issues

Cybersecurity

Digitalisation and cybersecurity are the challenges of today and tomorrow. They have an overarching impact on the security sector and on the role of security sector governance and reform (SSG/R) in the digital space. Our recent policy study, SSG/R in the digital space: projections into the future, sheds light on the complex intersection of digitalisation and security sector governance. It examines how security sector actors have adapted to the digital transition and the emergence of new actors within the security ecosystem. It also provides concrete recommendations on how to navigate the complexities of digital technologies and shape ethical technology use and robust digital governance frameworks.

Capacity development

For newcomers to the field, DCAF offers the introductory series SSR Backgrounders, with a special issue on the impact of digitalisation on good governance in the security sector. It is a first-stop resource to understand the challenges and considerations for best policy and practice. 

DCAF implements projects that focus on improving cybersecurity laws and policies, increasing the capacity of cybersecurity actors, and strengthening accountability in cybersecurity. One of our priorities is to strengthen the individual and institutional capacities of national Computer Emergency Response Teams (CERTs). These teams are responsible for effectively and efficiently preventing and responding to attacks on national systems.

We also run the annual Young Faces research and mentoring programme, which helps to develop the next generation of cybersecurity experts in the Western Balkans. Each year, we select around 30 dynamic, forward-thinking young professionals to join the programme that enhances their knowledge of emerging trends in cybersecurity governance.

Research shows that women, girls, and LGBTQ+ people are the most affected by cybersecurity risks. Our publication and podcast series analyses how they have been pushed out of cyberspaces by abuse and discrimination, and what solutions exist to take a human-centred approach that considers everyone’s needs in cybersecurity.

In our Donors’ Talk podcast series, we spoke with DCAF’s Justice Advisor, who drew on her 15 years of experience in justice sector reform to discuss success stories, challenges, and what needs to be considered when supporting digitalisation projects related to justice reform. In Morocco, DCAF supported the National Anti-Corruption Commission with training on the prevention and investigation of cyber-corruption and financial cybercrimes. The government commission digitalised its internal processes, resulting in more effective tracking of and response to citizens’ data protection requests.

Digital tools

Legislation databases 

DCAF’s three legal databases gather policies, laws, and decrees governing the security sectors in the Occupied Palestinian Territory, Libya, and Tunisia. Each database covers the main providers of security and justice, the formal supervision and management institutions, and the legislative and regulatory texts covering and authorising the work of informal control actors (political parties, media, NGOs, etc.). 

A resource for legislators, the justice system, academia, and civil society, the databases offer both a current resource and a historical perspective on the evolution of security sector legislation in the respective countries.

Handbook on effective use of social media in cybersecurity awareness-raising campaigns

This handbook provides condensed and easy-to-follow guidance and examples for designing content strategies and the efficient use of social media towards effective public awareness raising on cybersecurity. It shares the do’s and don’ts of social media, and how to have a strategic social media presence to support better cybersecurity.

For more tools and resources on cybersecurity governance and the security sector, visit our website.

Social media channels

Facebook @DCAFgeneva

LinkedIn @DCAF

Spotify @dcaf

X @DCAF_Geneva

YouTube @DCAF Geneva Centre for Security Sector Governance

University of Business and International Studies (UBIS)

Acronym: UBIS

Address: Avenue Blanc 46, 1202 Geneva, Switzerland

Website: https://www.ubis-geneva.ch/

Stakeholder group: Academia & think tanks

World Psychiatric Association

Acronym: WPA

Address: 2 chemin du Petit-Bel-Air, 1226 Thônex / Geneva, Switzerland

Website: https://wpanet.org

Stakeholder group: NGOs and associations

World Organisation Against Torture – OMCT

Acronym: OMCT

Address: Rue du Vieux-Billard 8, 1205 Genève, Switzerland

Website: https://omct.org

Stakeholder group: NGOs and associations

World Heart Federation

Acronym: WHF

Address: Rue de Malatrex 32, 1201 Genève, Switzerland

Website: https://world-heart-federation.org

Stakeholder group: NGOs and associations

World Alliance for Citizen Participation – CIVICUS

Acronym: CIVICUS

Address: Av. de la Paix 11, 1202 Genève, Switzerland

Website: https://civicus.org

Stakeholder group: NGOs and associations
