Navigating the complex landscape of AI governance and compliance

As organizations continue to adopt AI, GenAI and the large datasets that power them, there are significant implications for governance practices.
 
Nicholas Ismail
Global Head of Brand Journalism, HCLTech

3 minutes read

As more organizations pilot and scale artificial intelligence (AI) and generative AI (GenAI) technologies, properly governing these systems becomes increasingly important.

To explore this topic further, Jatin Arora, Associate Vice President, Cybersecurity – Governance, Risk, Compliance & Resiliency at HCLTech, helped distill the implications and challenges organizations face in this evolving landscape.

Enabling secure AI innovation: Privacy, governance and compliance

“When you talk about large data sets, of course, the very first consideration that any organization needs to undertake is around privacy and security,” he said. 

“Data privacy laws dictate how an organization will operate in a particular geography, whether the EU GDPR, the California Consumer Privacy Act or the Australian Privacy Principles. AI systems often process very large amounts of data, and some of that data could also be personal data, so there must be a mechanism that enables organizations to ensure that critical and sensitive data is not being used to train their engines,” added Arora.

Organizations need to ensure that they are not using production data, sensitive information or secured information to train their large language models.
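As a minimal sketch of what such a mechanism might look like, the Python snippet below screens candidate records for common personal-data markers before they enter a training corpus. The patterns and the screen_training_record helper are hypothetical illustrations, far from a production PII detector:

import re

# Illustrative personal-data patterns (hypothetical and not exhaustive;
# real pipelines use dedicated PII-scanning tools).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def screen_training_record(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a candidate training record."""
    findings = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(text)]
    return (not findings, findings)

for record in [
    "Quarterly churn fell 4% after the pricing change.",
    "Contact jane.doe@example.com or 555-010-9999 for escalations.",
]:
    allowed, findings = screen_training_record(record)
    if not allowed:
        print(f"Excluded from corpus (matched: {', '.join(findings)})")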

Beyond privacy, data security governance is also crucial. Organizations must consider the quality of the data, the kinds of data assets involved and who has access to the assets used to train these AI models.

Proper data management protocols help ensure models are trained responsibly.
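One way to make those questions concrete is a data-asset catalog that records each asset's classification, provenance and approved consumers. The sketch below uses hypothetical asset names, team names and a may_train_on helper purely for illustration:

from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    classification: str            # e.g. "public", "internal", "restricted"
    source: str                    # provenance of the data
    approved_trainers: set[str] = field(default_factory=set)

CATALOG = {
    "support_tickets_anonymized": DataAsset(
        name="support_tickets_anonymized",
        classification="internal",
        source="helpdesk export, anonymization pipeline v2",
        approved_trainers={"ml-platform-team"},
    ),
}

def may_train_on(asset_name: str, team: str) -> bool:
    """Permit training only on cataloged, non-restricted assets the team is approved for."""
    asset = CATALOG.get(asset_name)
    if asset is None:                      # uncataloged data never feeds a model
        return False
    if asset.classification == "restricted":
        return False
    return team in asset.approved_trainers

print(may_train_on("support_tickets_anonymized", "ml-platform-team"))  # True
print(may_train_on("prod_customer_db", "ml-platform-team"))            # False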

Regulatory compliance further complicates matters. “If any organization starts to focus on GenAI, they must comply with the regulatory guidelines or laws of the region,” said Arora, citing examples like the EU AI Act and the US Blueprint for an AI Bill of Rights. Regulatory standards also provide best practices, though regulating fast-moving technologies poses challenges.

A key challenge is the pace of AI innovation outpacing regulators. “AI is everywhere today — from even a small startup company to a big multinational, everybody is creating these large language models or adopting the ones already available in the market,” said Arora.

With technology evolving quickly in every industry, regulations need to keep pace.

Risk management and ethical considerations

Ethical and risk management considerations compound the regulatory challenge. Familiar problems like algorithmic bias must still be guarded against, while organizations must also identify and govern the new risks that emerge from AI systems.

“Whenever a new technology comes in, or whenever there is a change in normal, there is always a new risk associated,” said Arora. 

To overcome this hurdle, organizations need to be proactive in identifying these risks. They should be aware of the ecosystems in which the AI and GenAI models will operate and govern those risks appropriately.

“Risk management is not a new domain. But with the growth of technology, new risks emerge. Organizations need to equip themselves well enough to identify those new risks coming from adopting GenAI and create a mechanism to monitor and remediate those risks proactively,” said Arora. 
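A lightweight version of such a mechanism is a living risk register that records each identified AI risk with an owner, a mitigation and a review date, so the highest-scoring items surface first. The entries, scoring scheme and team names below are hypothetical illustrations, not HCLTech's methodology:

from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    description: str
    likelihood: int     # 1 (rare) to 5 (almost certain)
    impact: int         # 1 (minor) to 5 (severe)
    owner: str
    mitigation: str
    next_review: date

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("GenAI assistant leaks internal data via prompt injection",
           likelihood=3, impact=4, owner="appsec",
           mitigation="Input filtering and output redaction",
           next_review=date(2025, 1, 15)),
    AIRisk("Training-data drift degrades model fairness over time",
           likelihood=2, impact=4, owner="ml-governance",
           mitigation="Quarterly bias audits on held-out cohorts",
           next_review=date(2025, 3, 1)),
]

# Review the highest-scoring risks first so remediation stays proactive.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> owner: {risk.owner}")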

Looking ahead to a well-governed future

Enhancing and adapting internal governance protocols is critical for organizations adopting AI. “Start with policy and procedural development regarding how you're going to build your AI engine or adopt an already existing model,” advised Arora. 

At the same time, organizations should focus on aligning internal practices with external regional regulations, which helps foster secure innovation. 
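To illustrate that alignment, a simple mapping from internal controls to the regulations they support can flag coverage gaps before auditors do. The control names below are hypothetical, and only the regulations already mentioned in this article are referenced:

# Hypothetical internal policy controls mapped to the regional
# regulations each one helps satisfy.
POLICY_CONTROLS = {
    "pii-excluded-from-training": {"EU GDPR", "CCPA", "Australian Privacy Principles"},
    "model-risk-classification": {"EU AI Act"},
    "human-oversight-for-high-risk-uses": {"EU AI Act"},
}

def coverage_gaps(required: set[str]) -> set[str]:
    """Return regulations that no internal control currently supports."""
    covered = set().union(*POLICY_CONTROLS.values())
    return required - covered

# A region requiring the US Blueprint for an AI Bill of Rights shows a gap.
print(coverage_gaps({"EU GDPR", "EU AI Act",
                     "US Blueprint for an AI Bill of Rights"}))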

“Security shouldn't be viewed as a roadblock. Rather, security should be an enabler to build confidence in AI adoption in an organization,” said Arora. Every organization today already has some form of cybersecurity control in place; extending that risk-based approach to AI applications broadens risk oversight without starting from scratch.

AI’s potential to augment human experience will enable employees to become more innovative in their roles. In this context, Arora advised companies to focus on training and awareness of how people should operate such applications, how they should adopt AI within the organization and what their roles and responsibilities should be.

Overall, AI's opportunities are interconnected with the challenges of privacy, ethics, risk, regulations and internal governance. 

“All of them are interrelated, and that's what comes under the scope of governance, risk and compliance,” concluded Arora.
