5 practical steps for implementing responsible AI

Implementing responsible AI requires a multi-faceted approach, with both organizational and cultural changes needed
 
Nicholas Ismail
Global Head of Brand Journalism, HCLTech
4 minutes 20 seconds read

Whether an organization is large or small, establishing strong governance and accountability frameworks early on is essential for ensuring responsible AI adoption in the long run.

In a recent HCLTech Trends and Insights podcast, Heather Domin, Head of Responsible AI at HCLTech, explored this topic and provided insights on the steps needed to implement responsible AI at scale.

1. Building organizational foundations

A great first step is to establish a dedicated AI governance body. This could take the form of an AI committee responsible for assessing risks and making decisions about AI deployment and strategy. “Many organizations actively take this on, and it's a great starting point,” explained Domin. For larger organizations, setting up an “Office of Responsible AI” can formalize AI governance, providing clear policies and direction as AI initiatives scale.

A clear structure is crucial to help the organization think critically about the operational risks associated with AI. For example, as AI systems become increasingly complex, leaders need to ensure that the appropriate technical and ethical considerations are addressed at each stage of AI development, from pre-deployment review to post-deployment monitoring.
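
As a purely illustrative sketch, one lightweight way to make those stage-by-stage considerations explicit is a per-stage review checklist that a governance committee signs off before work proceeds. The stage names, review questions and function below are hypothetical examples written in Python; they are not an HCLTech framework or anything Domin prescribes.

```python
# Illustrative sketch only: a minimal per-stage review checklist for an AI governance committee.
# Stage names and review questions are hypothetical examples, not a prescribed framework.
LIFECYCLE_REVIEWS = {
    "design": [
        "Is the intended use and user population documented?",
        "Have fairness and privacy risks been assessed?",
    ],
    "pre-deployment": [
        "Has the model been evaluated for bias and robustness?",
        "Is a rollback plan in place?",
    ],
    "post-deployment": [
        "Is model performance being monitored for drift?",
        "Is there a channel for user feedback and complaints?",
    ],
}

def outstanding_reviews(stage: str, completed: set[str]) -> list[str]:
    """Return the review items for a stage that have not yet been signed off."""
    return [item for item in LIFECYCLE_REVIEWS.get(stage, []) if item not in completed]

# Example: one pre-deployment item has been signed off so far.
print(outstanding_reviews("pre-deployment", {"Has the model been evaluated for bias and robustness?"}))
```

Even a simple structure like this forces an explicit decision about who reviews what, and when, before an AI system moves to the next stage.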

2. Integrating stakeholders and fostering collaboration

Responsible AI implementation is a socio-technical challenge. Unlike traditional IT systems, AI involves a broad set of stakeholders, including data scientists, business leaders, ethics boards and end-users.

“The AI lifecycle involves many different stakeholders, which makes AI implementation different from other types of IT projects,” said Domin.

AI adoption requires deep collaboration across departments, with diverse teams contributing at every stage. By engaging stakeholders early and often, organizations can not only address technical concerns but also enable fairness, accountability and transparency in their AI systems.

“Diversity in teams can help prevent issues like bias from being overlooked,” added Domin.

The speed at which AI is deployed today also introduces new challenges. Unlike traditional IT systems, whose development cycles were measured in months or years, many AI systems, especially generative AI (GenAI) and foundation models, are deployed significantly faster, sometimes within a matter of weeks.

“It's not just about getting the technology out there fast but enabling stakeholders to be properly engaged and trained to use the technology effectively and responsibly,” explained Domin.

3. Monitoring and managing AI risks: Shadow AI and beyond

A critical aspect of responsible AI is addressing the risks of “shadow AI,” where teams or individuals use AI tools without oversight. This can lead to compliance, security and operational risks. To manage these risks, organizations need visibility into which AI tools are being used, where and by whom.

“There are many tools that can help detect shadow AI,” said Domin. “Organizations can then take action if there's any misuse of data or models.”

This active monitoring approach mirrors traditional cybersecurity practice. As with any sensitive system, organizations need to know what’s happening within their AI environment at all times, including “where your AI systems are, what data they are accessing and what models are being used.”

This approach helps ensure that AI systems are not just deployed in compliance with laws but are also managed responsibly throughout their lifecycle.
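
To make that kind of visibility concrete, the sketch below is a hypothetical Python example, not one of the tools Domin refers to: it scans an egress proxy log for calls to well-known hosted AI services and flags any that a governance body has not yet approved. The log format, host lists and file name are illustrative assumptions.

```python
# Illustrative sketch only: flag potential "shadow AI" traffic in an egress proxy log.
# The log format (CSV with timestamp, user, dest_host columns), the host lists and the
# file name are hypothetical; real environments would use their own telemetry and tooling.
import csv

# Hosts associated with hosted AI services (illustrative, not exhaustive).
AI_SERVICE_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Services the governance body has already reviewed and approved (hypothetical).
APPROVED_HOSTS = {"api.openai.com"}

def find_shadow_ai(proxy_log_path: str) -> list[dict]:
    """Return proxy-log rows that call an AI service host not on the approved list."""
    findings = []
    with open(proxy_log_path, newline="") as log_file:
        for row in csv.DictReader(log_file):
            host = row["dest_host"].strip().lower()
            if host in AI_SERVICE_HOSTS and host not in APPROVED_HOSTS:
                findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in find_shadow_ai("egress_proxy.csv"):
        print(f"{hit['timestamp']} {hit['user']} -> {hit['dest_host']} (unapproved AI service)")
```

In practice, commercial discovery tools and network telemetry would replace an ad hoc script like this, but the underlying question is the same: which AI services is traffic reaching, and who is sending it.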

4. Building trust in AI: Transparency, regulation and communication

Trust is foundational to AI adoption. “Regulations and standards, such as ISO 42001, give people confidence that appropriate controls are in place,” said Domin.

Regulatory frameworks help instill confidence in AI systems by providing clear requirements and helping to align practices with industry standards. However, regulations alone aren't enough. Clear communication and transparency are key to fostering trust.

“Training and helping people understand the protections in place is important,” confirmed Domin.

When organizations clearly communicate the safeguards embedded in their AI systems, employees and users are more likely to adopt and engage with the technology. This transparency helps mitigate concerns about the ethical implications of AI, such as data privacy and bias.

Interestingly, 58% of executives in a recent white paper from HCLTech in partnership with MIT, Implementing responsible AI in the generative age, expressed confidence in their organizations' data privacy and security practices.

This is encouraging, according to Domin, given that “AI governance is about five to 10 years behind where data privacy and security governance are today.” This gap presents an opportunity for organizations to build robust responsible AI frameworks now, much as privacy and security standards matured over the past few decades.

HCLTech’s comprehensive approach to responsible AI

HCLTech is committed to a holistic and value-driven approach to responsible AI.

“Our approach is rooted in our core values as an organization,” said Domin. These values, together with HCLTech’s cornerstones of responsible AI — accountability, fairness, security, privacy and transparency — guide the company’s internal AI development and deployment, as well as its consulting work with clients.

Internally, HCLTech has built an extensive AI risk management framework that serves as a model for responsible AI governance.

“We leverage this framework to help our clients address their own AI challenges,” continued Domin.

This approach allows HCLTech to assist clients across various industries in managing AI risks and ensuring that their AI deployments are ethical and aligned with best practices.

HCLTech’s recent membership in the Responsible AI Institute further strengthens its commitment to responsible AI practices. “Being part of a community actively pursuing the next level of responsible AI is key to growing our own practices and helping our clients,” added Domin.

 


A strategic, holistic approach to responsible AI

Implementing responsible AI is not solely a technological challenge; it's a leadership and cultural one. As Domin outlined, organizations must establish governance frameworks, engage diverse stakeholders and take active steps to monitor and manage AI risks like shadow AI. The pace of AI deployment today, combined with the complexity of AI systems, demands that organizations adopt a comprehensive and collaborative approach to responsible AI.

Through transparency, stakeholder engagement, and regulatory compliance, organizations can not only address the challenges of AI adoption but also build lasting trust in AI systems.

By embedding responsible AI at the core of their operations, organizations can help ensure that AI technologies are deployed ethically, sustainably and for the benefit of all.
