{"id":126949,"date":"2021-01-13T03:54:00","date_gmt":"2021-01-13T03:54:00","guid":{"rendered":"https:\/\/www.capgemini.com\/se-en\/?p=126949"},"modified":"2025-03-13T09:56:17","modified_gmt":"2025-03-13T08:56:17","slug":"ai-and-ethics-seven-steps-to-take","status":"publish","type":"post","link":"https:\/\/www.capgemini.com\/se-en\/insights\/expert-perspectives\/ai-and-ethics-seven-steps-to-take\/","title":{"rendered":"AI and ethics \u2013 seven steps to take"},"content":{"rendered":"\n

<\/p>\n\n\n\n

AI and ethics \u2013 seven steps to take<\/h1><\/div><\/div><\/div><\/div>
\"\"<\/div>
Lee Beardmore<\/h5>
2021-01-13<\/h5><\/div><\/div>
<\/div><\/div><\/div><\/div><\/header>\n\n\n\n
\n

In the first article in this series, I outlined the importance of ethics in artificial intelligence (AI), and gave a few highlights from research recently conducted by the Capgemini Research Institute showing customer attitudes and business responses to AI.

In the second article, I considered the practical preparations that businesses need to make to implement AI in a morally justifiable way.

In this, the final article, I am going to highlight the seven steps that should form part of the ethical development, deployment, and management of your AI systems.

Step #1 – define purpose and assess potential impact

Organizations need to satisfy themselves that the core aim of the AI system is to benefit people or improve their lives, and that it is not driven solely by economic goals such as increasing profits. This core aim needs to be made transparent not just to internal audiences such as teams in development, sales, marketing, and compliance, but also to external stakeholders such as partners, contractors, and relevant regulatory and government bodies.

Alongside an assessment of potential benefits, organizations should also consider potential risks before any implementation. Such risks might include possible threats to people’s fundamental rights.

Step #2 – address sustainability considerations

Successful AI implementations can optimize business operations. This needn’t just mean improved margins and better productivity: it can also have implications for an organization’s broader goals, such as equality, inclusion, and reduced environmental impact. If such improvements are possible, they shouldn’t be mere by-products: they should be actively sought out and factored in as development goals.

What’s more, AI has its own carbon footprint, whether it runs on premises or in the cloud. This, too, needs to be a development consideration.

Step #3 – embed diversity and inclusion

The broader the mix of people engaged in the AI system development lifecycle, the better. Organizations should aim to build teams from a variety of racial, gender, and demographic backgrounds. Diversity of discipline should also be a factor, bringing together people with different viewpoints and educational backgrounds.

Also, tools now exist to evaluate fairness and to identify and correct bias in AI systems and machine learning models. Organizations can and should use such tools to correct bias in datasets by focusing on the training data. What’s more, they should ensure that AI testing covers all appropriate demographics, so that no group of people is inadvertently disadvantaged by the outcomes of an AI application.
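
As a minimal illustration of what such a fairness check can look like, the sketch below computes per-group selection rates and the gap between them (a simple demographic-parity measure); the column names and data are purely hypothetical.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes (e.g., approvals) for each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Spread between the most- and least-favored groups; values near 0
    suggest the model selects people from each group at a similar rate."""
    rates = selection_rates(df, group_col, outcome_col)
    return float(rates.max() - rates.min())

# Invented predictions: 1 = positive outcome (e.g., loan approved).
preds = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "outcome": [1, 1, 0, 1, 0, 0],
})
print(selection_rates(preds, "group", "outcome"))         # A: 0.667, B: 0.333
print(demographic_parity_gap(preds, "group", "outcome"))  # ~0.333
```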

Step #4 – enhance transparency

Tools also exist to analyze the processes being used by AI systems and to explain not just simple outcomes, but entire models. Some approaches go further still, and provide a benchmarked evaluation of an AI model under various conditions.

Adopting these tools and approaches can help organizations to be clear to users, regulators, and the general public about the origins of their models, their use, and any limitations those models may have.
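
As one illustrative (not definitive) approach, the sketch below uses permutation importance on synthetic data to show which inputs most influence a model’s outcomes; dedicated explainability toolkits go much further, explaining individual predictions and entire models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real scoring model's inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# the features whose shuffling hurts most are driving the model's outcomes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```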

Step #5 – humanize the AI experience

Where possible, it’s a good idea to keep real people involved in AI processes. For example, tag-teaming a human agent with a virtual assistant on customer service calls can help to stop ethical issues arising in the first place. No organization should want its customers to feel that they have lost agency, or that their basic rights have been compromised.
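
A minimal sketch of such tag-teaming, assuming the virtual assistant can attach a confidence score to its own answer; the threshold and data types here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # assistant's confidence in its own answer, 0..1

# Hypothetical threshold; in practice this would be tuned per use case.
HANDOFF_THRESHOLD = 0.75

def route(reply: BotReply) -> str:
    """Let the virtual assistant answer only when it is confident;
    otherwise hand the conversation to a human agent."""
    if reply.confidence >= HANDOFF_THRESHOLD:
        return f"BOT: {reply.text}"
    return "HUMAN AGENT: escalating this conversation to a person."

print(route(BotReply("Your order ships tomorrow.", 0.92)))
print(route(BotReply("I think you should cancel?", 0.40)))
```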

Step #6 – ensure technological robustness

Many of the resilience issues that relate to AI also apply to technology in general. For instance, AI systems should be resilient to attacks or mishaps and, wherever possible, should be backed by fallback plans in case of failure. Data should be accurate; results should be reproducible; and regular testing and monitoring can ensure that AI models are behaving as expected, both before go-live and after.
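
As one example of the kind of monitoring this implies, the sketch below computes a population stability index (PSI) to flag drift between the score distribution observed at go-live and the live one; the thresholds in the comment are a common rule of thumb, not a formal standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a live score distribution against the baseline seen at go-live.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 investigate."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)  # model scores at go-live
live = rng.normal(0.6, 0.1, 10_000)      # scores this week: shifted upward
print(f"PSI: {population_stability_index(baseline, live):.3f}")  # well above 0.25
```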

However, there are other areas of technological robustness that are specific to AI. The integrity of its datasets is a case in point. It’s a good idea for each such dataset to be accompanied by a datasheet that documents key variables such as composition, collection process, and recommended uses. This will help AI developers to work more effectively with AI algorithm users such as sales and marketing teams, and help those users understand the impact of their decisions.
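
A minimal sketch of what such a datasheet might look like in code, loosely inspired by the “datasheets for datasets” idea; the field names and values are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """A lightweight dataset datasheet; every field here is illustrative."""
    name: str
    composition: str          # what the records represent, counts, coverage
    collection_process: str   # how and when the data was gathered, under what consent
    recommended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    name="retail-transactions-2020",
    composition="1.2M anonymized purchase records from EU web stores",
    collection_process="Logged at checkout, Jan-Dec 2020, under opt-in consent",
    recommended_uses=["demand forecasting", "recommendation training"],
    known_limitations=["no in-store purchases", "under-represents customers over 65"],
)
print(sheet.name, "-", sheet.recommended_uses)
```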

Step #7 – empower customers with privacy controls

Giving customers control over their personal data isn’t merely a courtesy, or even a sign of good corporate citizenship. In some parts of the world – notably the EU – it’s a legal requirement. The General Data Protection Regulation (GDPR) obliges businesses, upon request, to let customers see how, when, and for what purpose their personal data is being used; to let them opt out of an AI-based system in favor of human intervention; and to let them change the weight of individual data attributes so as to influence AI output – for example, to bring recommendations in line with actual rather than AI-derived personal preferences.

If such obligations need to be met for EU residents, multinational organizations may conclude that, for both fairness and consistency, it makes sense to make the same provisions for customers elsewhere.
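
To make the attribute-weighting idea concrete, here is a hedged sketch in which user-controlled weights scale each data attribute’s contribution to a recommendation score; the attribute names and the scoring scheme are invented for illustration.

```python
# Hypothetical recommendation score where each data attribute's contribution
# is scaled by a user-controlled weight (0 disables the attribute entirely).
def recommendation_score(attribute_scores: dict[str, float],
                         user_weights: dict[str, float]) -> float:
    return sum(score * user_weights.get(name, 1.0)
               for name, score in attribute_scores.items())

signals = {"purchase_history": 0.8, "browsing_behaviour": 0.6, "inferred_interests": 0.9}

# A privacy-conscious user turns off inferred interests and dampens browsing data,
# so recommendations follow stated preference rather than AI-derived profiling.
weights = {"inferred_interests": 0.0, "browsing_behaviour": 0.5}
print(recommendation_score(signals, weights))  # 0.8 + 0.3 + 0.0 = 1.1
```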

The benefits of being frictionless

All of these steps are easier to implement, and are more likely to succeed, when the organization can act as a cohesive whole – when it can seamlessly and intelligently connect its processes and people as required.

At Capgemini, we call this the Frictionless Enterprise. It’s an approach that dynamically adapts to changing circumstances, and it’s therefore ideally suited to addressing AI systems and the ethical considerations that flow from them. It enables organizations to monitor and manage not just the technology and the datasets, but the diversity of the teams developing them. It also helps businesses to respond to the concerns of their customers, of regulatory bodies, and of other external stakeholders, and to demonstrate a commitment to human fairness, to sustainability, and to transparency.

For more on how organizations can build ethically robust AI systems and gain trust, read the full paper, “AI and the Ethical Conundrum.”

Read other blogs in this series