SAP x OpenAI: Sovereign AI in Germany’s Public Sector and the Challenges Ahead
This opinion piece examines the framing of the recently announced "OpenAI for Germany" initiative by SAP SE and OpenAI. Does the pursuit of Sovereign AI risk exchanging data sovereignty for opaque algorithmic systems?
OPINION
Indras Ghosh
10/3/2025
7 min read
SAP SE and OpenAI officially announced their joint initiative, “OpenAI for Germany,” on 24 September 2025. This high-profile partnership, which also involves Microsoft, is slated for a 2026 launch and promises to deliver a Sovereign AI solution to German government and research bodies. As highlighted in a joint statement featuring quotes from SAP’s Christian Klein, OpenAI’s Sam Altman, and Microsoft’s Satya Nadella, it seeks to streamline operations and enable government employees to "spend more time on people, not paperwork."
A High-Stakes Partnership
The ambition is clear: rapid innovation balanced with guaranteed sovereignty. As the deployment of AI systems within the public sector faces ever-greater calls for due diligence, monitoring, and transparency, this high-stakes partnership must navigate an environment defined by stringent data privacy rules and the newly introduced EU AI Act.
Three potential challenges lie ahead: AI governance, the preservation of democratic values, and respect for human rights. These are the true hurdles that will determine whether this initiative proves a long-term success or simply a costly case study.
The integration of Artificial Intelligence (AI) into the operational core of government represents one of the most profound shifts in public administration in decades. The partnership between SAP and OpenAI, aimed at accelerating AI adoption within Germany’s public sector, is framed as a crucial step towards establishing "Sovereign AI" capacity, securing data residency, and bolstering technological resilience against foreign dependencies. Yet, as governments increasingly delegate complex, life-altering decisions to algorithmic systems (from social welfare screening to regulatory compliance), the focus should shift from mere technical sovereignty to democratic and human rights accountability.
What does "Sovereign AI" Means?
For the German public sector, defining "Sovereign AI" precisely is critical due to the constitutional imperative of Datenschutz (data protection) and the necessity of maintaining operational resilience in key AI infrastructure.
NVIDIA, for instance, defines "Sovereign AI" as "a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks." The principles of Sovereign AI go beyond simple data storage and AI infrastructure. They encompass an end-to-end framework: controlling the underlying compute infrastructure, owning the algorithms, and strictly governing the training data to align with German law and cultural context.
In the public-sector context, Sovereign AI should also encompass scalability from central cloud infrastructure down to the citizen interface. At the same time, it should not enable the replacement of publicly elected or democratically accountable officials with unreviewed algorithms. If AI sovereignty is merely a mechanism for the faster deployment of opaque models, it fails the democratic value test. The underlying political objective should remain the empowerment of civil servants to make informed, human-validated decisions, not the automation of governance itself (Hickok & Hu, 2024).
Cautionary Tales: Governance, Fundamental Rights, and Democratic Values
The urgency of embedding robust human rights assessments into public procurement is dramatised by historical failures where algorithmic systems, deployed with inadequate oversight, caused devastating societal harm. The Australian "Robodebt" scheme exemplifies the dangers of algorithmic rigidity combined with a reversal of the burden of proof. The system automatically calculated welfare debts by averaging annual income data across fortnightly reporting periods, erroneously generating thousands of false claims. This automated compliance and debt collection led to financial ruin, mental health crises, and, tragically, suicide among welfare recipients. The subsequent Royal Commission highlighted how a flawed algorithm, coupled with a lack of human discretion and a dismissive governmental response, systematically violated citizens' rights (Robodebt Royal Commission Report, 2023).
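The arithmetic at the heart of that failure is worth making concrete. The sketch below (a minimal illustration in Python, using entirely hypothetical payment rates and a hypothetical benefit taper, not the scheme's actual rules) shows how averaging a year's income evenly across fortnights manufactures a debt for someone who reported every fortnight truthfully:

```python
# Hypothetical illustration of the income-averaging flaw behind Robodebt.
# All figures (payment rate, taper, incomes) are invented for clarity;
# they do not reflect actual Centrelink rates or the scheme's exact rules.

FORTNIGHTS = 26           # reporting periods in a year
FULL_BENEFIT = 550.0      # hypothetical full fortnightly payment
TAPER = 0.5               # hypothetical: 50c of benefit lost per $1 earned

def entitlement(fortnight_income: float) -> float:
    """Benefit payable for one fortnight, given income earned in it."""
    return max(0.0, FULL_BENEFIT - TAPER * fortnight_income)

# A recipient works half the year at $2,000 per fortnight, then is
# unemployed and correctly reports $0 income for the remaining fortnights.
actual_income = [2000.0] * 13 + [0.0] * 13
lawfully_paid = sum(entitlement(i) for i in actual_income)

# The automated check ignores the fortnightly reports and smears the
# annual total evenly across the year, as if income were constant.
average = sum(actual_income) / FORTNIGHTS          # $1,000 every fortnight
assessed = FORTNIGHTS * entitlement(average)

print(f"Lawfully paid:       ${lawfully_paid:,.2f}")            # $7,150.00
print(f"Averaged assessment: ${assessed:,.2f}")                 # $1,300.00
print(f"Spurious 'debt':     ${lawfully_paid - assessed:,.2f}") # $5,850.00
```

Robodebt's real calculations were more elaborate, but this is the core failure mode: a statistically convenient average was treated as ground truth for individual fortnights, and the onus was then placed on recipients to disprove it.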
The Dutch Child Care Benefit Scandal remains the most salient European example of algorithmic injustice. Tax authorities used an algorithm to detect potential fraud in childcare benefits, resulting in tens of thousands of families, disproportionately those with dual nationality or lower incomes, being falsely accused. The system’s bias and opacity led to the wrongful imposition of massive fines, the destruction of families, and the forced resignation of the entire cabinet in 2021. This case serves as a definitive warning against using opaque AI systems for high-stakes, social-welfare decision-making, demonstrating a fundamental breakdown of administrative law.
Beyond administrative failures, the rise of sophisticated generative AI poses an acute security challenge to democratic institutions. OpenAI itself has reported that malicious actors are already leveraging deepfake technologies and large language models to conduct hyper-realistic social engineering attacks. These attacks, which can mimic the voices of high-ranking government officials, are designed to penetrate secure networks or influence public policy and critical decision-making. The strategic threat intelligence community has already identified countermeasures to mitigate such risks, confirming the need for a "secure-by-design" approach to any Sovereign AI implementation (OpenAI, Disrupting Malicious Uses of AI, 2025).
The push for rapid Sovereign AI, particularly one involving deep vendor integration, also carries a risk of long-term digital vendor lock-in, making future system adjustments or termination prohibitively expensive, regardless of the system’s ethical failings.
Are the Current Regulatory Guardrails Enough?
In the joint statement, SAP and OpenAI stated that the partnership seeks to "enable millions of public sector employees to use AI safely and responsibly while meeting strict data sovereignty, security, and legal standards". This raises two questions: what standards and regulations provide such guardrails, and are they enough?
There is an argument that the German Federal Data Protection Act (Bundesdatenschutzgesetz), the General Data Protection Regulation (GDPR), and the EU AI Act provide sufficient guardrails for "OpenAI for Germany". While these regulations are foundational to protecting data used by AI systems and impose strict requirements on 'high-risk' AI applications, they may not be sufficient to manage the inherent risks posed to the public before an AI system is deployed.
In the aftermath of the Child Care Benefit Scandal, the Government of the Netherlands faced strong pressure to strengthen safeguards around the use of algorithms and automated decision-making in the public sector. This expedited the mandatory application of the Fundamental Rights and Algorithms Impact Assessment (FRAIA), a tool for public authorities to assess whether the use of algorithms might infringe fundamental rights, including privacy, non-discrimination, and due process.
At the time of writing, Germany does not possess a uniform public procurement framework specifically designed for algorithmic systems. The EU AI Act imposes obligations on the deployment and post-market monitoring of high-risk AI systems. While it requires certain testing and risk management measures, and mandates fundamental rights impact assessments in some public-sector cases, it does not establish a broad, binding requirement for comprehensive human rights due diligence or bias testing as part of procurement processes before public contracts are awarded.
The US experience with Memorandum M-24-18, which requires federal agencies to impose prescriptive requirements on AI acquisitions, demonstrates that mandatory mechanisms embedded within the procurement process are essential: they bake accountability in at the pre-purchase stage, preventing the acquisition of potentially harmful systems altogether.
Weighing in on Policy Benefits (LkSG, CSDDD, and the Omnibus)
The announcement of "OpenAI for Germany" comes amid significant industry criticism of the German Supply Chain Due Diligence Act (Lieferkettensorgfaltspflichtengesetz, LkSG), the EU’s Corporate Sustainability Due Diligence Directive (CSDDD), and the Omnibus legislative measures, which are seen as imposing costly and potentially bureaucratic compliance burdens that may hinder competitiveness and innovation. The Draghi Report on European competitiveness has similarly cautioned that overregulation risks suppressing innovation and efficiency. Paradoxically, Germany’s Sovereign AI infrastructure could, in fact, offer substantial support for implementing human rights due diligence laws in Germany and across the EU.
Research by the Indo-German Chamber of Commerce (AHK Indien) on the use of AI in corporate human rights due diligence highlights that AI systems can help companies streamline their processes, improve effectiveness, and reduce the complexities associated with human rights due diligence. While the research also flags the ethical considerations and potential risks of AI adoption in this context, AI systems integrated into public sector oversight bodies and corporate compliance departments could rapidly process, analyse, and audit the vast volumes of supply chain data required by the LkSG and CSDDD.
Rather than being seen solely as an excessive regulatory burden, "OpenAI for Germany" could provide the tools needed to streamline human rights due diligence processes, addressing the criticisms of bureaucratic inefficiency frequently raised in CSDDD and Omnibus debates.
The AI Skills Gap Among Government Employees
A significant AI skill gap exists within public administration in Germany, making it difficult for civil servants to manage the complexities of AI governance, liability, and ethical risk. The OECD Artificial Intelligence Review of Germany (2024) notes that while Germany has strong AI research, the low level of digitalisation in the public sector limits the potential for AI use. It explicitly recommends "upskilling civil servants" and clarifying responsibilities to accelerate the transition to an innovative public sector.
Procurement officers, subject matter experts, and regulatory bodies should be equipped with the expertise to check vendor claims, assess data quality, and determine legal accountability when algorithmic harm occurs. Without this foundational literacy, AI sovereignty will simply exchange dependency on foreign hyperscalers for algorithmic dependency on domestic vendors.
Moving Ahead
The partnership between SAP and OpenAI is a clear win for Germany and marks a pivotal moment in its digital transformation. The objective of Sovereign AI is laudable and necessary. However, sovereignty defined purely in technological terms is insufficient; to ensure the initiative is genuinely built on human rights and democratic principles, additional considerations must be taken into account.
Germany has endorsed both the UNESCO Recommendation on the Ethics of Artificial Intelligence and the OECD AI Principles, reflecting the country's commitment to ethical and responsible AI development. This commitment helps ensure that the national approach to AI governance aligns with international best practices on transparency, fairness, and human-centric values.
Germany should look at expanding the AI Campus into a mandatory national skilling programme for public administration staff. Having provided the initial funding for the AI Campus, the Federal Ministry of Education and Research (BMBF) is well positioned to spearhead this ambitious national AI skilling programme. The programme should concentrate not just on the technical aspects of AI systems but, more critically, on educating government officials about the operational, legal, and human rights risks associated with them.
Finally, AI-specific public procurement guidelines are urgently required to safeguard human rights and due process. Crucially, such guidelines would also ensure that the benefits AI systems offer in terms of enhanced efficiency, improved operational decision-making, and reduced costs can be realised. The guidelines could be developed through joint efforts led by the Federal Ministry for Economic Affairs and Climate Action (BMWK), leveraging the expertise of the existing Competence Centre for Public IT (Kompetenzzentrum Öffentliche IT). The new framework should mandate pre-contractual human rights impact assessments, demand auditable transparency on model training and performance, and guarantee ongoing external scrutiny.
About the Author:
Indras Ghosh is an independent expert specialising in corporate ESG and human rights due diligence and experienced in helping organisations and businesses implement responsible business practices. He is also a Research Group Member at the Center for AI and Digital Policy (CAIDP).
The views expressed in this article are his own.