Responsible artificial intelligence: building trust and value at Lloyds Banking Group

Rohit Dhawan
Director and Group Head of AI & Advanced Analytics
Published on: 29 August 2025
5 min read

At a glance:

  • Embedding ethics and safety into AI systems: Lloyds Banking Group’s responsible AI framework ensures rigorous testing, human oversight, and scenario planning to prevent harm, with real-world safeguards like escalation protocols for virtual assistants and fraud detection systems. 
  • Driving fairness, transparency, and accountability: Every AI deployment undergoes fairness audits, explainability checks, and governance reviews to mitigate bias and ensure decisions, such as loan approvals or fraud flags, are interpretable and just.
  • Sustainable and adaptive innovation: The Group prioritises environmentally responsible AI development and continuous monitoring, supported by cross-sector collaboration and internal education to evolve best practices and uphold trust across all stakeholders.

As the Group Head of AI at Lloyds Banking Group, my mission is to ensure that artificial intelligence drives lasting value for our customers, colleagues, and broader society. The rapid evolution of AI technology brings profound opportunities, but also new ethical responsibilities. 

Our responsible AI framework is more than a set of abstract ideals. It's a living, adaptive system of principles embedded in every stage of our AI journey. This article explores the core responsible AI principles at Lloyds Banking Group, expands on their real-world implications, and examines the tangible steps we take to uphold them.

AI ethics and safety: guarding against harm

Our first and most crucial principle is ethics and safety. As the financial lives of millions come into daily contact with our platforms, anything less than the highest standard is unacceptable. Ethical AI at Lloyds Banking Group means relentless testing of algorithms, scenario planning for edge cases, and a zero-tolerance approach to potential harms.

Take, for example, our AI-powered virtual assistants supporting customer queries. Before deployment, every model undergoes extensive testing to prevent so-called “hallucinations” – the generation of incorrect or nonsensical information. We simulate thousands of interactions, scrutinising for instances where AI could mislead or confuse a customer, especially in critical tasks like payment instructions. Only after surpassing stringent benchmarks does a model reach our customers.
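The Group’s internal benchmarks are not published; as a purely illustrative sketch, a deployment gate of this kind reduces to running a candidate model over a suite of test cases and refusing release below a required pass rate (the `passes_benchmark` name and the 99% threshold are assumptions, not the Group’s actual tooling):

```python
from typing import Callable

def passes_benchmark(
    model: Callable[[str], str],
    test_cases: list[tuple[str, str]],
    required_pass_rate: float = 0.99,  # assumed threshold, for illustration only
) -> bool:
    """Gate deployment: the model must answer a test suite at the required rate."""
    correct = sum(1 for prompt, expected in test_cases if model(prompt) == expected)
    return correct / len(test_cases) >= required_pass_rate
```

In practice such a harness would score semantic equivalence rather than exact string matches, but the shape of the release gate is the same.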

Real-world consideration:

Suppose a customer is locked out of their online banking account at 2 AM. The AI assistant must provide clear, secure, and non-misleading guidance, escalating to a human when uncertainty arises. This commitment to safety builds the foundation of trust.

AI security: protecting data and systems

AI cannot be responsible without being secure. Lloyds deploys robust controls at every layer – from encrypting data to monitoring for adversarial attacks. With AI systems increasingly integrated into fraud detection and cybersecurity, an error or vulnerability could have far-reaching consequences.

For instance, we leverage advanced AI to detect unusual spending patterns and intervene when fraud is suspected. All models are developed within fortified environments, and their predictions are tracked for anomalies that might indicate attempted manipulation or systemic drift.
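The production models themselves are proprietary, but the core idea of flagging unusual spending can be sketched with a simple statistical outlier check – a z-score rule against the account’s own history, where the threshold of 3 is an assumption for illustration:

```python
from statistics import mean, stdev

def flag_unusual_spend(
    history: list[float], new_amount: float, z_threshold: float = 3.0
) -> bool:
    """Flag a transaction whose amount deviates sharply from the account's history."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No historical variation: anything different from the usual amount stands out.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > z_threshold
```

A real system would combine many such signals (merchant, location, device, timing) in a learned model; the sketch shows only the outlier-detection principle.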

Real-world consideration:

If an AI system misclassifies legitimate transactions as fraud, customers face inconvenience. If it misses actual fraud, the losses are profound. At Lloyds, we implement a holistic approach – continuous internal and external audits, layered technical safeguards, and rapid response protocols.

 

Artificial Intelligence at Lloyds Banking Group

We’re reimagining how we operate by harnessing the full potential of AI – embedding it across our business to drive smarter decisions, faster outcomes, and better experiences.

Visit the AI hub

Addressing AI biases

One of the greatest challenges for financial services is the risk of bias in AI-driven decisions – for instance, in lending, product recommendations, or anti-money laundering checks. Lloyds Banking Group mandates that every AI deployment is rigorously audited for fairness and representativeness.

We utilise multifactor analysis to check for disparate impact across different demographic groups. In mortgage approvals, we test models on historical data to ensure that no group based on age, gender, race, or other protected characteristics is unjustly disadvantaged. When bias surfaces, we refine our data, models, and processes before live deployment.
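The article does not specify the audit method; one widely used check for disparate impact, sketched here as an illustration, is the “four-fifths rule”: each group’s approval rate is compared with the most-favoured group’s, and a ratio below 0.8 triggers investigation (the function names are invented):

```python
def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's approval rate divided by the highest group's approval rate.

    `outcomes` maps group name -> (approved count, total applicants).
    """
    rates = {group: approved / total for group, (approved, total) in outcomes.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    """True if no group falls below 80% of the most-favoured group's rate."""
    return min(disparate_impact_ratio(outcomes).values()) >= 0.8
```

A failing check would not by itself prove unfairness, but it is a standard trigger for the kind of human review and model refinement the paragraph above describes.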

Real-world consideration:

Imagine a young entrepreneur applying for a business loan. If AI inadvertently penalises non-traditional career paths, we risk perpetuating inequality. By embracing fairness audits, human review, and open challenge, we keep our models accountable to ethical standards.

Transparency: explaining AI’s decisions

Transparency is essential for building trust in AI. We uphold transparency in two ways:

  • Externally, by being open with customers and stakeholders about how AI is used 
  • Internally, by ensuring colleagues can interrogate, interpret, and explain AI decisions

All advanced models at the Group, for example, include explanation features, allowing both customers and staff to understand the rationale behind decisions. Our teams leverage explainable AI (XAI) frameworks, peer-review model documentation, and invest in continuous upskilling on transparency tools.
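The Group’s XAI tooling is not described in detail; for a linear scoring model, though, the idea reduces to decomposing a score into per-feature contributions, as in this minimal sketch (the feature names and weights are invented for illustration):

```python
def explain_score(
    weights: dict[str, float], features: dict[str, float], bias: float = 0.0
) -> tuple[float, list[tuple[str, float]]]:
    """Decompose a linear model's score into per-feature contributions."""
    contributions = {name: weights[name] * features[name] for name in weights}
    score = bias + sum(contributions.values())
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked
```

For non-linear models the same question is typically answered with attribution methods such as SHAP, which generalise this additive decomposition.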

Real-world consideration:

A customer denied a loan deserves to know why the decision was made, and how to appeal or provide additional evidence for review. Transparency makes AI both useful and just.

Human-in-the-loop: ensuring accountability

While automation enhances efficiency, decisions with significant consequences must never be left solely to machines. Across Lloyds Banking Group, sensitive use cases – such as credit provision, fraud investigations, or complaints handling – always keep a “human-in-the-loop”: no matter how sophisticated the model, human judgment remains central. Our systems are designed to flag uncertainty, escalate nuanced cases, and ensure a human is accountable for the final call.
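The thresholds below are invented for illustration, but a human-in-the-loop design of this kind often comes down to a confidence band: the model acts alone only at the extremes, and everything in between is routed to a person:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str  # "approve", "decline", or "escalate"
    by: str       # "model" or "human"

def route_decision(model_score: float, low: float = 0.2, high: float = 0.8) -> Decision:
    """Auto-decide only when the model is confident; otherwise escalate to a specialist."""
    if model_score >= high:
        return Decision("approve", "model")
    if model_score <= low:
        return Decision("decline", "model")
    # Mid-band scores are exactly the nuanced cases a human should own.
    return Decision("escalate", "human")
```

Widening the band trades automation rate for safety, which is a policy choice for governance rather than a purely technical one.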

Real-world consideration:

If an AI model raises a “red flag” for potential money laundering in a small business’s account, it is reviewed by a specialist before any action is taken. This layered approach allows us to balance scale with compassion and context.

AI governance: clear ownership and oversight

Responsible AI thrives on clear governance and strong leadership. Lloyds Banking Group has instituted cross-functional oversight – the “GenAI Control Tower” – to prioritise use cases, allocate resources, and enforce regulatory and ethical requirements. The introduction of the “Head of Responsible AI” ensures ongoing coordination across risk, technology, legal, and business teams.

Every new AI initiative follows a rigorous approval process, including risk assessment, legal review, and controls for ethics, bias, and security. The governance framework continuously tracks deployments and maintains the flexibility to adapt as the technology and regulatory landscape evolves.

Real-world consideration:

When launching new generative AI features, each proposal is reviewed for risks such as data leakage, hallucinations, and regulatory compliance – before being piloted or scaled.

 

Reimagining banking with AI

Rohit Dhawan believes that AI will revolutionise the financial services industry over the coming decade. And the transformation will be broad – impacting everything from customer experience to administrative operations.

Read Rohit's article

AI and environmental responsibility: sustainable innovation

As AI grows in power, so does its environmental footprint – especially with resource-intensive models. At Lloyds Banking Group, we’re committed to monitoring and optimising the carbon impact of our AI activities.

In both procurement and in-house development, we prioritise partners and approaches aligned with our net-zero commitments. For major AI projects, we assess energy use, data centre efficiency, and the potential to reuse models (rather than rebuild from scratch), minimising environmental impact.

Real-world consideration:

When collaborating with university partners on next-generation AI models, we select cloud providers with strong sustainability credentials and audit the carbon costs of large model training.

Continuous monitoring and collaboration: evolving best practices 

Responsible AI is not a “set and forget” exercise. At Lloyds Banking Group, we maintain dedicated teams for ongoing monitoring, system optimisation, and incident response. 
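One standard ingredient of such monitoring – used here as an illustrative sketch, not a description of the Group’s stack – is a drift metric such as the population stability index (PSI), which compares the distribution a model sees in production against the one it was trained on:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each given as a list of bin proportions)."""
    eps = 1e-6  # avoid log(0) when a bin is empty
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )
```

A common rule of thumb treats a PSI above roughly 0.25 as a signal of significant drift, prompting investigation or retraining of the affected model.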

We learn from both within and beyond our industry, collaborating with universities, fintech innovators, and regulators to share best practices. AI literacy is embedded across the Group through regular workshops, ethics training, and open forums, helping every colleague recognise and raise potential issues.

Real-world consideration:

If a new type of financial crime emerges, we rapidly retrain relevant models and update governance, drawing upon cross-sector insights and peer-reviewed research.

Trust, value, and the future of AI at Lloyds Banking Group

Responsible AI is at the heart of Lloyds Banking Group’s digital transformation. We go beyond compliance, striving always for leadership in ethics, safety, fairness, and sustainability. Our principles are lived – not just written – through robust systems, clear governance, active human oversight, and a culture of integrity and innovation.

As we look ahead, our commitment is unwavering: AI must be a force for good, amplifying human potential and creating lasting value for our customers, our colleagues, and the communities we serve.

Rohit Dhawan
About the author Rohit Dhawan

Group Head of Artificial Intelligence

Dr. Rohit Dhawan is the Group Executive Director of Artificial Intelligence at Lloyds Banking Group in the UK, where he leads a multidisciplinary team of AI specialists, data engineers, data scientists, and AI ethicists.

A prominent figure in data and AI strategy, Dr. Dhawan is a seasoned C-suite operator and a published author. Previously, Dr. Dhawan served as the Regional Head of Data & AI Strategy for Amazon Web Services in the Asia Pacific region, covering South-East Asia, Australia, and Japan. Dr. Dhawan holds a PhD in AI and a Master’s degree in IT from the University of Sydney.

Follow Rohit on LinkedIn

Related content

Reimagining the future: how AI is transforming Lloyds Banking Group

Artificial Intelligence (AI) is no longer a futuristic concept; it’s a present-day catalyst for transformation. At Lloyds Banking Group, we’ve embraced this reality with purpose and ambition.

Read Rohit's article

Accelerating GenAI innovation safely

“The Responsible AI team is tasked with building algorithmic guardrails that can be used by our colleagues to ensure solutions remain compliant with the AI Assurance Framework – and can be trusted by the user.” 

Read Chandrima's article

PEGASUS: evaluation driven development for GenAI

Generative AI & Agentic AI success hinges on having the right model for the job, context engineering that feeds foundation models the right materials to reason with, and evaluation to understand how well a model performs on a specific task.

Read Eric's article