乌鸦传媒

Skip to Content
Innovation

A conversation with David Knott

Technology and the public sector

David Knott is Chief Technology Officer of the UK Government.

“My job is to make sure that we do technology well. By that, I mean delivering better outcomes for citizens through improved systems, data, and infrastructure.

I sit within the Government Digital Service (GDS), which is part of the Department for Science, Innovation and Technology (DSIT). My role cuts across four main areas. First, there’s architecture: how to steer departments toward better technology choices. Second, we lead on engineering standards and best practices. Third is security and resilience, making sure our digital services are robust and secure against external threats. Finally, we contribute to commercial strategy, looking at how we engage suppliers and stimulate innovation in the digital marketplace. We focus a lot on transformation, not just in terms of new services, but also by modernizing existing systems. Technology isn鈥檛 background noise, it鈥檚 foundational.”


Where do you see technology making the biggest impact in the public sector?

There are four major areas where technology is making a difference.

First is citizen services. The public expects services that are digital, convenient, and reliable. Technology is how we meet those expectations at scale.

Second is productivity. We want to help public servants concentrate on value-adding work. If we can automate the routine stuff, then teachers, police officers, nurses, and civil servants can focus on what really matters.

Third, there鈥檚 specialist capability. AI and digital tools are helping in areas such as criminal justice, healthcare, and education. For instance, they can identify patterns in data or diagnose medical conditions earlier than humans, in some cases.

The fourth area is what I call 鈥渓anguage in, language out.鈥 A lot of interactions with government involve people describing their situations in everyday language and then expecting a clear response. Think of applying for a benefit or registering a complaint. Historically, computers have struggled with natural language. But with the advent of large language models [LLMs] and generative AI, we can now start meeting citizens on their terms. For me, this is genuinely exciting. Instead of forcing people to work in 鈥渕achine-friendly鈥 ways, we鈥檙e teaching machines to work in people-friendly ways

The B2B pulse - banner

“With the advent of large language models [LLMs] and generative AI, we can now start meeting citizens on their terms”


What do you think the public sector can learn from the private sector 鈥 and vice versa 鈥 when it comes to digital transformation?

There鈥檚 a lot to learn in both directions. 

From the private sector, we can learn about joined-up strategy and execution. In large corporations, leadership can make decisions that hundreds of teams act upon. In government, we鈥檙e more fragmented. Every department, school, and NHS trust has its own remit. That autonomy is valuable, but it makes it harder to share things such as infrastructure or platforms.

Private companies also often move faster when it comes to shared services and cloud adoption. They invest in engineering maturity and modern tooling that the public sector is still catching up on.

That said, there鈥檚 something deeply admirable in the public sector: purpose. People in government are mission-driven. They care deeply about the impact of their work.

“From the private sector, we can learn about joined-up strategy and execution”


What are some unique challenges you face in leading technology for government?

There are a few that stand out.

The first is public visibility. When something fails in government, it fails in public. That raises the stakes and makes people more risk-averse. This is understandable but can delay progress.

The second is structural complexity. Unlike a single organization, government is a constellation of institutions, each with its own priorities and tech stacks. Achieving alignment takes a lot of listening and a lot of collaboration.

The third challenge is talent. We鈥檙e in a competitive market and we still can’t always match private-sector salaries. So, we rely heavily on our ability to offer meaningful work. And honestly, that works. I鈥檝e seen some of the best engineers choose to work here because they believe in the mission.

Lastly, the scale and scope of what we do is massive. No private company engages with such a wide range of domains, from agriculture, to justice, to education, to counterterrorism.

“I鈥檝e seen some of the best engineers choose to work here because they believe in the mission”


Are there any major opportunities for value creation you see on the horizon?

Definitely. Our estimates that we could unlock 拢45 billion of value through better digital and data practices. That breaks down into three main areas:

  1. Productivity and efficiency 鈥 automating processes and freeing people from routine tasks.
  2. Channel shift 鈥 encouraging citizens to use digital channels instead of phone or face-to-face, cutting cost and employee time.
  3. Reducing fraud and error 鈥 saving money and building trust. 

A lot of this comes down to modernizing systems, improving data quality, and building digital services that people want to use.

“we could unlock 拢45 billion of value through better digital and data practices”

Abstract digital art with pixelated circular and wave patterns in shades of blue, turquoise, and black.

How is your team tackling long-term transformation and digital maturity?

We鈥檝e published a blueprint called A Modern Digital Government, which lays out six key priorities:

  1. Joined-up public services 鈥 so users can navigate government without needing to know how it鈥檚 organized internally
  2. Responsible adoption of AI 鈥 making sure we use emerging technologies ethically and effectively
  3. Strengthening core infrastructure 鈥 tackling legacy systems and investing in resilience
  4. Driving value from procurement 鈥 aligning our supplier spend with national outcomes
  5. Investing in people 鈥 both in terms of digital skills and leadership
  6. Transparency and accountability 鈥 being open about what we鈥檙e doing and how well we鈥檙e doing it

How is the UK government implementing AI in public services?

There are four strands, all designed to move us from early experimentation to scalable, responsible deployment.

First, we鈥檝e built what we call an AI Incubator or i.AI, which is a team of deep-tech specialists who work alongside departments to apply AI in practical, impactful ways. This helps overcome one of the biggest challenges in government: limited in-house expertise.

Second, we鈥檙e embedding AI into transformation planning. So, instead of AI being a side project, it becomes a key enabler of wider operational and service improvements. That means funding AI where it makes real-world impact 鈥 in health, education, justice, etc.

Third, we鈥檙e working on skills and confidence building. A lot of public servants are still unfamiliar with how AI works, what it can do, and the risks it entails. So, we鈥檙e rolling out training programs across the civil service to help everyone from frontline staff to policy leads build digital confidence.

And fourth, we published the AI Playbook for Government, on which I led. It鈥檚 a practical guide that connects the technical, legal, and ethical aspects of AI use. We wanted a practical aid to help teams safely deploy AI while complying with laws and upholding key principles such as fairness, transparency, and privacy.

“we鈥檙e embedding AI into transformation planning”


What excites you most about the current AI landscape in government?

What鈥檚 exciting is that we鈥檙e seeing real, applied use cases, not just hype. We鈥檙e at a point where AI can genuinely change how work gets done on the front line of public services.

Let鈥檚 take a simple example. If an AI assistant can save someone half an hour a day by drafting emails or summarizing documents, that鈥檚 significant. It鈥檚 not just saving money, it鈥檚 giving a teacher 30 more minutes to work with students, or a police officer more time to investigate a case. That鈥檚 a productivity gain with purpose.

The paradigm shift in how computers operate is also incredibly exciting to me. Traditionally, we programmed systems using logic: 鈥渋f X, then Y.鈥 But machine learning [ML] does things computers previously couldn鈥檛, like make judgments or interpret language. That opens up all kinds of new possibilities for government.

“We鈥檙e at a point where AI can genuinely change how work gets done on the front line of public services”


How is AI being applied inside government?

I think it helps to break it into four categories, each with different challenges and benefits.

First, there鈥檚 embedded AI. Features like smart suggestions or predictive text are built into everyday tools. For most users, this will be the most common experience. The tools change subtly, and the way we work has to shift with them. It鈥檚 more about change management than system development.

Second, we鈥檙e seeing AI used within custom-built solutions. Instead of traditional coding, teams are embedding AI models into software. For example, using ML rather than a hard-coded rules engine to decide eligibility for a service. This requires careful engineering, testing, and ongoing maintenance.

Third, AI is transforming how we build software itself. Tools such as GitHub Copilot help developers write code faster and better. But they also raise new questions: How do we test AI-generated code? How do we maintain it? These are challenges we鈥檙e actively exploring. 

The fourth category is the most forward-looking: agentic AI. These are AI systems that can act on behalf of a user 鈥 for example, by booking appointments or submitting forms. That brings up important questions around identity, trust, and authorization. If an AI claims to be acting on someone鈥檚 behalf, how do you know it鈥檚 legitimate? And what limits do you place on its authority? We鈥檒l need new protocols for that.

“If an AI claims to be acting on someone鈥檚 behalf, how do you know it鈥檚 legitimate?”


How important is data in making AI work within government?

Data is foundational, and it鈥檚 also one of our toughest challenges. Government has a vast amount of data, but it’s fragmented across systems and formats. Some of it is highly sensitive. Some of it is underused. We鈥檝e tried every model: data warehouses, data lakes, data meshes. And while we鈥檝e learned a lot, we haven鈥檛 cracked the problem.

The emerging concept of data fabrics is promising. It allows data to stay where it is, while making it discoverable and usable through common governance and metadata. That鈥檚 the direction we鈥檙e leaning into.

We鈥檝e also launched the National Data Library, which is about making data available for training and research, while ensuring we meet standards around consent, privacy, and transparency. Getting this balance right between utility and ethics is crucial to developing responsible AI.

“We鈥檝e tried every model: data warehouses, data lakes, data meshes. And while we鈥檝e learned a lot, we haven鈥檛 cracked the problem”


Many systems in government are still legacy-based. How are you tackling modernization?

Modernization is essential to stability, security, and progress.

Legacy systems are a major barrier. In fact, our recent State of Digital Government report showed that the number of critical legacy systems has gone up, not down. That鈥檚 worrying, and it鈥檚 a call to action.

We鈥檙e working hard to make the case for investment. It鈥檚 not just about cost savings. It鈥檚 also about reducing service outages and risks to the public. If a citizen can鈥檛 access their benefit because of a system failure, that鈥檚 not just a technical issue, it鈥檚 also a human one.

We鈥檙e also focused on designing secure systems. That鈥檚 why we鈥檝e published our Secure by Design standard. Security needs to be baked in from day one. This is part of our broader push for digital maturity across departments.

“The number of critical legacy systems has gone up, not down. That鈥檚 worrying, and it鈥檚 a call to action”


How do you think about ethics in technology?

I have a PhD in Philosophy, so ethics has always been part of my thinking. But for a long time, I kept that separate from my work in technology. That鈥檚 no longer possible.

AI raises real ethical questions. It affects real people. When we鈥檙e building models or deploying systems, we need to remember this isn鈥檛 abstract. The data we use represents human lives. The decisions we make have consequences. 

My advice is: don鈥檛 overcomplicate it, but don鈥檛 ignore it either. You don鈥檛 need a philosophy degree to know what鈥檚 right and wrong. But you do need to ask the questions, involve diverse voices, and build governance structures that support good decision-making.

We also need to move beyond tick-box compliance. Ethics isn鈥檛 a checklist. It鈥檚 a conversation, a discipline of constantly asking: 鈥淚s this the right thing to do?鈥

“Ethics isn鈥檛 a checklist. It鈥檚 a conversation, a discipline of constantly asking: 鈥淚s this the right thing to do?”


How is the UK government strengthening digital resilience in the face of growing cyber threats?

We鈥檙e making progress, but digital resilience remains a big challenge. That鈥檚 not just my view. It’s backed by data from the State of Digital Government report and the National Audit Office.

The Government Security Group has developed and rolled out a framework called GovAssure. It鈥檚 essentially a structured approach that allows departments to self-assess and be assessed independently against core standards for cybersecurity and operational resilience. It brings visibility and accountability. It helps us understand where the gaps are and where we need to act. 

But resilience doesn鈥檛 come just from undergoing assessments. That鈥檚 why we also developed our Secure by Design standard. It means engineers and developers, not just security specialists, receive clear guidance on how to build resilient systems.

We also have the huge advantage of working closely with the National Cyber Security Centre (NCSC). They give us world-class research, early warnings on threats, and deep expertise. That partnership lets us stay ahead of risks in ways that many private-sector organizations can鈥檛.


Is the government preparing for the era of quantum computing and post-quantum cryptography?

Yes, very much so. We鈥檙e now looking at a 10-year horizon for post-quantum readiness, and we know that the cryptographic methods we use today may not survive that transition.

We recently co-published an international position paper that lays out our strategic posture for post-quantum security. It鈥檚 about building algorithmic agility into our systems, so, unlike most legacy systems, we can respond to evolving threats.

It鈥檚 not just a technical shift. It鈥檚 a capability-building challenge. We need systems that can evolve and people who can manage that evolution.

Being part of the DSIT gives us another edge. We鈥檙e aligned with the UK鈥檚 national quantum strategy, which includes some of the most ambitious quantum research programs in the world, such as the National Quantum Computing Centre [NQCC]. Through those partnerships, we鈥檙e connecting policymakers and digital leaders across government with the front lines of quantum innovation. 

Quantum won鈥檛 solve every problem, but there are many areas of government services where it can help.

The sky was blotted during the Northern California fire season. Aerial images taken September 9 2020.

“We鈥檙e now looking at a 10-year horizon for post-quantum readiness”


How do you see generative AI [Gen AI] reshaping the cybersecurity landscape?

AI and cybersecurity are a natural match. At its core, cybersecurity is about detecting patterns 鈥揳nomalies, changes in behavior, indicators of compromise. That鈥檚 exactly where AI thrives.

We鈥檝e used ML for anomaly detection for a while now. But Gen AI brings new capabilities, especially in language-based analysis. For example, it can help us identify prompt injection attacks, where someone tries to manipulate an AI system by feeding it malicious input. It can also enhance phishing detection, which is important as AI is making phishing emails more sophisticated.

Gen AI lets attackers scale and personalize their attacks. So, we must use the same tech to identify subtle threats and understand intent. 

It鈥檚 a bit of an arms race. But I鈥檓 confident that, with the right governance, and with the partnerships we have, we can stay ahead.


Looking back on your career, which leadership lessons have you learned?

First, technologists belong in the boardroom. Early in my career, I saw how much better decisions were when technology leaders had a seat at the table. Technology isn鈥檛 a support function anymore, it drives the mission. So, my advice to fellow tech leaders is: don鈥檛 wait to be invited into the room. Be bold. Bring your voice.

Second, delivery is, by nature, unpredictable. If you鈥檙e doing something meaningful, it usually means you鈥檙e doing something new. That means there will be unknowns. Agile, DevOps, and Site Reliability Engineering (SRE) aren鈥檛 just buzzwords, they’re toolsets to navigate that unpredictability. The idea that you can plan out every detail in a five-year waterfall strategy is fiction. We need to embrace uncertainty and learn to 鈥渇eel鈥 our way forward by learning in real time.

Third 鈥 and this one鈥檚 increasingly important 鈥 business leaders must understand how tech works. It鈥檚 no longer safe to just rely on what vendors say or what a glossy demo shows. You don鈥檛 need to be a developer, but you do need a working understanding of the tech. My role is often to 鈥渄emystify the magic,鈥 to help leaders make informed decisions based on what these systems can and can鈥檛 do.

“Business leaders must understand how tech works”


What do you think will define the next decade of technology in government?

Three trends come to mind.

First, AI will keep growing but the focus will shift to application. The research arms race for the biggest models will continue, but what will matter most is how those models are applied. We鈥檙e already seeing the rise of agentic AI 鈥 systems that don鈥檛 just respond but act with initiative. That鈥檚 where questions of trust, identity, and authorization will become central.

Second, quantum readiness. It won鈥檛 be a dramatic arrival, but it鈥檚 coming. We鈥檒l need to build new skillsets in quantum engineering and designing quantum algorithms. We鈥檒l need development tools to make that accessible. It reminds me of the early days of computing, when the barrier to entry was high. That will change, but we must start investing in it now.

And third, a deep rethinking of digital trust. We鈥檙e realizing that the internet, as it was originally designed, didn鈥檛 build in enough trust, security, or identity controls,such as verifiable credentials, user-controlled identity, and transparent authorization. We鈥檙e trying to retrofit those now. Web3 hasn鈥檛 delivered on all its promises, but it has introduced useful ideas.

“AI will keep growing but the focus will shift to application”

Stay informed

Subscribe to have the latest reports from the 乌鸦传媒 Research Institute delivered direct to your inbox.

Further reading

AI-powered everything

Your gateway to cutting-edge innovation

Annika 脰lme, CTO, SKF Group

Conversations for Tomorrow

This quarterly review is 乌鸦传媒鈥檚 flagship publication targeted at a global audience. It showcases diverse perspectives from best-in-class global brands, leading public figures, academics and influencers on a chosen theme. We feature a wide variety of content, including interviews, articles by guest contributors, and insights from some of the Institute鈥檚 reports. Within such wealth and diversity of these global industry leaders鈥 opinions, there is something for everyone. We warmly invite you to explore.