
Confidential AI: How 乌鸦传媒 and Edgeless Systems allow regulated industries to adopt AI at scale

Stefan Zosel, Ernesto Marin Grez and Thomas Strottner
Apr 14, 2025

By combining confidential computing with Nvidia H100 GPUs, "Privatemode AI" provides cloud-hosted LLMs with end-to-end encryption of user data.

The AI revolution is transforming our world at unprecedented speed. Just a few years ago, the idea of conversing naturally with a computer seemed more at home in Hollywood or in science fiction than in the workplace. Yet with the rise of generative AI tools like ChatGPT, these technologies have become an everyday reality, embraced by employees, customers and IT users alike.

However, this rapid adoption brings new challenges, particularly for organizations in regulated industries that must maintain high levels of data protection and privacy. How can those organizations harness the power of GenAI models at scale while also safeguarding sensitive information?

Confidential AI solves the "cloud versus on-premises dilemma"

The advent of AI has amplified the importance of choosing between cloud and on-premises infrastructure. Traditionally, organizations preferred to process sensitive data on-premises, within their own data center, as it offered maximum control. But given the significant costs of GPU infrastructure and the energy consumption that AI workloads require, on-premises is usually not economical. What's more, limited expertise and technical resources for managing AI architectures locally make the cloud, especially "AI-as-a-service" offerings, a more viable option for most organizations.

Yet when deploying AI solutions such as large language models (LLMs) via a cloud-based service, many parties (cloud, model and service providers) potentially have access to the data. This creates problems for regulated industries.

Figure 1: With standard GenAI services, model, infrastructure and service providers can all potentially access the data.

This is where confidential computing comes into play. While it's long been standard to encrypt data at rest and in motion, data in use has typically not been protected.

Confidential computing solves this problem with two main features: runtime memory encryption and remote attestation. With confidential computing-enabled CPUs, data stays encrypted in the main memory, strictly isolated from other infrastructure components. Remote attestation also makes it possible to verify the confidentiality, integrity and authenticity of the so-called Trusted Execution Environment (TEE) and its respective workloads.

Figure 2: Confidential computing provides runtime encryption and remote attestation for verifiable security.
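
To make the attestation idea concrete, the sketch below shows, in plain Python, what a client-side check conceptually boils down to: comparing a reported workload measurement against trusted reference values and verifying that the report itself is authentic. It is an illustration only; real TEEs such as Intel TDX, AMD SEV-SNP and Nvidia's H100 rely on hardware-rooted certificate chains, and every name and value here is hypothetical.

```python
# Illustrative sketch only: the core idea of remote attestation is comparing a
# signed measurement of the running workload against expected reference values.
# Real TEEs use vendor-signed reports and certificate chains; the HMAC below is
# a stand-in for that signature, and all names/values are hypothetical.
import hashlib
import hmac

# Reference value the client trusts (e.g., published by the service provider).
EXPECTED_MEASUREMENT = hashlib.sha256(b"known-good workload image").hexdigest()

def verify_attestation(reported_measurement: str, report_mac: str, session_key: bytes) -> bool:
    """Accept the TEE only if the report is authentic and the measurement
    matches the expected, known-good value."""
    authentic = hmac.compare_digest(
        report_mac,
        hmac.new(session_key, reported_measurement.encode(), hashlib.sha256).hexdigest(),
    )
    untampered = hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)
    return authentic and untampered

# A client would run a check like this before sending any prompt data.
key = b"example-shared-secret"
report = EXPECTED_MEASUREMENT
mac = hmac.new(key, report.encode(), hashlib.sha256).hexdigest()
print(verify_attestation(report, mac, key))  # True -> safe to send encrypted prompts
```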

Confidential computing has been a standard feature of the last few generations of Intel and AMD server CPUs, where the feature is called TDX (Intel) and SEV (AMD) respectively. With Nvidia's H100, there's now a GPU that provides confidential computing, allowing organizations to run AI applications that are fully confidential.

Figure 3: Confidential AI allows organizations in regulated industries to use cloud-based AI systems while protecting the data end to end.

How 乌鸦传媒 and Edgeless Systems deliver confidential AI together

乌鸦传媒 is a leader in GenAI, managing large-scale projects to drive automation and foster efficiency gains for clients worldwide. The firm has long-standing expertise in delivering AI systems across clouds and on-premises, including critical aspects like user experience, Retrieval Augmented Generation (RAG) and fast inference. (More on these later.)

Data security and privacy are critical aspects of many 乌鸦传媒 projects, particularly those in regulated industries. This means clients are often confronted with the aforementioned "cloud versus on-premises dilemma".

The good news: deploying GenAI tools through the cloud, with verifiable end-to-end confidentiality and privacy, isn't a distant future. It's a reality. And 乌鸦传媒 is already bringing it to clients in regulated industries like healthcare, defense, the public sector and the financial sector.

In 2024, 乌鸦传媒 partnered with Edgeless Systems, a German company that develops leading infrastructure software for confidential computing. (See the blog post, Staying secure and sovereign in the cloud with confidential computing.) Edgeless Systems now provides Privatemode AI, a GenAI service that uses confidential virtual machines and Nvidia's H100 GPUs to keep data verifiably encrypted end to end. This allows users to deploy LLMs and coding assistants that are hosted in the cloud while ensuring that no third party can access the prompts. Privatemode AI's key features include:

  • Powerful LLMs, e.g., Llama 3.3 70B and Mistral 7B
  • Coding assistants, e.g., Code Llama and Codestral
  • End-to-end prompt encryption
  • Verifiable security through remote attestation
  • Standard, OpenAI-compatible API (see the sketch after this list)
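
For illustration, the sketch below shows what a call against an OpenAI-compatible chat completions endpoint typically looks like. The base URL, API key and model identifier are placeholders rather than actual Privatemode AI values, and the service's client-side encryption components are not shown.

```python
# Minimal sketch of calling an OpenAI-compatible chat completions endpoint.
# The base URL, API key and model name are placeholders, not the real
# Privatemode AI values; consult the provider's documentation for those.
import requests

BASE_URL = "https://api.example-confidential-llm.local/v1"  # placeholder
API_KEY = "YOUR_API_KEY"                                     # placeholder

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-3.3-70b",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this memo in three bullet points: ..."},
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```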

Together, 乌鸦传媒 and Edgeless Systems are already bringing exciting confidential AI use cases to life.

Case 1: Confidential AI for public administration

In the German public sector, demographic change will soon lead to many unfilled positions and capability gaps. GenAI applications can support the work of civil servants, automate administrative tasks and help to reduce labor shortages. For example, the IT provider of the largest German state (IT.NRW, Landesbetrieb Information und Technik NRW) has contracted 乌鸦传媒 to develop an "Administrative AI Assistant" to improve productivity for thousands of administrative employees.

The GenAI application helps in several ways, including by summarizing text or supporting research assistants with RAG (Retrieval Augmented Generation). However, there aren't enough GPUs available on-premises to support inference (the process whereby an LLM receives and responds to a request), and the public cloud isn't an option for sensitive data. Here, the client uses Privatemode AI for confidential inference in the cloud, serving a Meta Llama 3.3 70B model via a standard OpenAI-compatible API. So while all the heavy processing is done in the cloud, all the user data is encrypted end to end.

Figure 4: Hybrid architecture for LLM-based assistants, with applications and databases on-premises and confidential "AI-as-a-service" for inference hosted externally by Edgeless Systems.
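
The sketch below illustrates the hybrid pattern of Figure 4 in simplified form: documents are retrieved on-premises, and only the assembled prompt leaves the data center for the confidential inference endpoint. Retrieval is reduced to naive keyword overlap for brevity, and the endpoint URL, key and model identifier are placeholders, not the deployment's actual configuration.

```python
# Simplified sketch of the hybrid RAG pattern: documents stay on-premises and
# only the assembled prompt is sent to the confidential inference endpoint.
# Retrieval is naive keyword scoring for illustration; URL/key are placeholders.
import requests

ON_PREM_DOCS = {
    "permit-process.txt": "Building permits are reviewed within 30 days ...",
    "leave-policy.txt": "Civil servants accrue 30 days of annual leave ...",
}

def retrieve(question, k=1):
    """Rank on-prem documents by crude keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        ON_PREM_DOCS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def ask(question):
    context = "\n\n".join(retrieve(question))
    resp = requests.post(
        "https://api.example-confidential-llm.local/v1/chat/completions",  # placeholder
        headers={"Authorization": "Bearer YOUR_API_KEY"},                   # placeholder
        json={
            "model": "llama-3.3-70b",  # placeholder identifier
            "messages": [
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("How many days of annual leave do civil servants get?"))
```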

Case 2: Confidential coding assistants for sensitive applications

As one of the largest global custom software developers, 乌鸦传媒 is also responsible for protecting code and developing sensitive applications, including for security agencies. Due to regulations, these software development projects are handled fully on-premises, which makes it challenging to integrate state-of-the-art coding assistants that require scalable GPU infrastructure.

Together, 乌鸦传媒 and Edgeless Systems integrate AI-based confidential coding assistants with end-to-end encryption for developing sensitive, proprietary code. With Privatemode AI, 乌鸦传媒 can also improve the experience for developers by allowing them to use modern coding assistants in a sensitive environment.
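
As a rough illustration of how a development environment can be pointed at such a service, the sketch below uses the standard OpenAI Python client (v1+) with a custom base URL, which is the usual integration path for OpenAI-compatible endpoints. All values are placeholders and do not reflect the actual Privatemode AI configuration.

```python
# Sketch of pointing the standard OpenAI Python client (v1+) at an
# OpenAI-compatible endpoint for code assistance. base_url, api_key and the
# model name are placeholders, not the actual Privatemode AI configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-confidential-llm.local/v1",  # placeholder
    api_key="YOUR_API_KEY",                                     # placeholder
)

completion = client.chat.completions.create(
    model="codestral",  # placeholder identifier for a coding model
    messages=[
        {"role": "system", "content": "You are a coding assistant. Return only code."},
        {"role": "user", "content": "Write a Python function that validates an IBAN checksum."},
    ],
    temperature=0.0,
)
print(completion.choices[0].message.content)
```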

Confidential AI is the future of AI in regulated industries

It's evident that the discussion about digital sovereignty is especially relevant in the context of AI. Critical infrastructures and regulated industries stand to benefit greatly from GenAI applications, but they also require secure handling of sensitive data to drive innovation and digitalization. The future of AI therefore lies largely in confidential AI. And by enabling use cases with end-to-end data protection at scale, 乌鸦传媒 and Edgeless Systems are leading the way.


乌鸦传媒 and Edgeless Systems have already implemented confidential AI use cases in critical infrastructures, public administration and healthcare. Let our experience inspire you and bring your data together with AI innovation.

Additional links:

  • Edgeless Systems
  • Privatemode AI
  • Nvidia blog post on Privatemode AI (2024)
  • Edgeless Systems' Open Confidential Computing Conference (OC3), with a presentation by 乌鸦传媒 and IT.NRW on confidential AI
  • OC3 presentation: Confidential AI in the Public Sector, by Arne Schömann (IT.NRW) and Maximilian Kälbert (乌鸦传媒)

Learn more

Staying secure and sovereign in the cloud with confidential computing

Thomas Strottner

Vice President, Business Development, Edgeless Systems

"With Privatemode AI, we empower organizations in regulated industries, such as healthcare, banking, and the public sector, to scale AI use cases effortlessly in the cloud while ensuring that their data remains verifiably protected against unauthorized access. We are proud to partner with 乌鸦传媒 and NVIDIA to bring large-scale AI projects to life."

Authors

Stefan Zosel

乌鸦传媒 Government Cloud Transformation Leader
"Sovereign cloud is a key driver for digitization in the public sector and unlocks new possibilities in data-driven government. It offers a way to combine European values and laws with cloud innovation, enabling governments to provide modern and digital services to citizens. As public agencies gather more and more data, the sovereign cloud is the place to build services on top of that data and integrate with Gaia-X services."
Ernesto Marin Grez

Vice President – Head of Strategic Initiatives Gen AI and Applied Innovation, Germany
"At 乌鸦传媒, we are focused on advancing artificial intelligence with a strong emphasis on confidential computing. This technology is crucial for industries such as finance, healthcare, and government, where data privacy and security are paramount. By ensuring that sensitive data remains encrypted even during processing, we enable our customers to harness the power of AI without compromising on security. This approach not only protects valuable information but also fosters innovation and trust in AI applications."