
From security to ethics: Changing roles and vital responsibilities in the AI-empowered enterprise

As generative AI drives technology into the organizational driver’s seat, how can enterprises practically manage secure implementation while grappling with the big ethical questions?
 
Andy Packham
Chief Architect and Senior Vice President, Microsoft Ecosystem Unit, HCLTech
5-minute read

The big question

Conversations around AI have changed radically in the past 18 months. Rapid advancement, coupled with widespread adoption, has shown us AI's potential. We understand its enterprise impact. We have seen its business value. The question is no longer "How will AI impact my business?"

It's “How do I use AI ethically?”

Roles and responsibilities around technology have shifted as AI has become more intelligent and influential. With that comes growing concern about bias, transparency, accountability and the potential for unintended consequences.

These questions around ethics and responsibility are not the sole domain of the CISO or technology team. They’re business-wide and board-level issues that need careful consideration.

The big challenge

Constraining the use of AI across the enterprise means you'll get left behind, and that's not an option. The challenge enterprises face lies in balancing enabling the business against protecting it.

Eager adopters want to see the business forge ahead, but the business has a responsibility to do so with an appropriate level of risk; one that’s protective of both customers and commercials.

Ethical empowerment

AI is starting to touch every corner of the enterprise. And rightly so; every team should have an AI strategy.

However, when you're democratizing the use of AI — and generating myriad use cases — privacy and security must remain at the crux of each proposition. This requires cultural change — both in terms of embedding a culture of experimentation to extract value from new technologies, and in embedding a sense of shared responsibility around the use of data and AI.

It's not just about security anymore, it's about ethics. Would your customers be happy with what you're using their data for? Are you doing something for the good of society?

These are big, important questions that can be overwhelming for businesses just starting out in scaling AI. Let's look at some initial practicalities.

Taking secure steps

Consider these key principles and imperatives.

Avoid shadow AI

With so much generative AI built and so many well-known apps coming to market over the past 18 months, the risk of shadow AI — the unauthorized use of AI — across the enterprise is huge. It’s essential that organizations get ahead of this to make sure tools are properly vetted, data is properly protected and the business isn’t put at risk. There’s no choice but to embrace AI; it will enter your organization somehow. Those that start working on change, start adopting and start setting guardrails will be the ones that secure themselves against risk.

Take a risk-based approach

In prioritizing your portfolio, you may have traditionally focused on business value vs. cost. However, in establishing models and feeding them data, risk becomes a huge factor, too. You have to learn by doing, and doing means using real data.

Start by pursuing the lowest-risk use cases first. Typically, that means implementing scenarios where the organization itself is the consumer of the data — for example, building employee-facing chatbots with Microsoft Copilot before customer-facing ones.

When identifying use cases to begin with, consider which risks already exist within a process without generative AI. Then ask: how much is that risk increased (or indeed decreased) by the introduction of AI? Make a conscious, evidence-based decision to identify a few initial use cases. The example below illustrates one such low-risk scenario, with a minimal code sketch after the results.

Categorizing and cutting customer service inquiries.

Using generative AI to: 

  • Transcribe call center recordings
  • Understand why customers called
  • Summarize and categorize calls
  • Determine the most frequent issues

Applying human oversight to: 

  • Apply root cause analysis
  • Address and fix the source issue

Results:

  • Improved customer experience
  • Reduction in service center calls
  • Most frequent issues identified at scale
  • Employees empowered to perform work more effectively, with more information at hand
  • Baseline data to consistently monitor and measure
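
To make the workflow above concrete, here is a minimal sketch of how transcribed calls might be summarized and categorized in Python. It assumes an Azure OpenAI deployment, placeholder environment variables and a hypothetical category taxonomy; it illustrates the pattern rather than a reference implementation, and the root-cause analysis and fixes remain a human task.

# Minimal sketch: summarize and categorize call-center transcripts so humans can run
# root-cause analysis on the most frequent issues. The deployment name, endpoint
# variables and category taxonomy below are assumptions for illustration.
import json
import os
from collections import Counter

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)

CATEGORIES = ["billing", "delivery", "product fault", "account access", "other"]  # hypothetical

def summarize_and_categorize(transcript: str) -> dict:
    """Ask the model for a one-line summary, a category and the caller's underlying issue."""
    response = client.chat.completions.create(
        model="my-gpt4o-deployment",  # placeholder deployment name
        messages=[
            {
                "role": "system",
                "content": (
                    "You analyze call-center transcripts. Reply as JSON with keys: "
                    f"summary, category (one of {CATEGORIES}), root_issue."
                ),
            },
            {"role": "user", "content": transcript},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def most_frequent_issues(transcripts: list[str], top_n: int = 5) -> list[tuple[str, int]]:
    """Aggregate categories across calls to surface the issues worth a human's attention."""
    counts = Counter(summarize_and_categorize(t)["category"] for t in transcripts)
    return counts.most_common(top_n)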

Start with augmentation

Initial use cases should be built around employee empowerment, augmenting roles with AI to save people time and free them up to add value elsewhere. That's what the Microsoft Copilot Suite enables: Copilot for Microsoft 365, Copilot for Sales and Copilot for Service are just three examples, each built to augment and improve the user experience within a specific tool or discipline.

This is a low-risk way to implement AI, yet it is powerful: it allows you to continuously improve and scale augmentation once tested, while the business still benefits from early productivity gains.

Ensure secure adoption

Empowering employees means upskilling — and often pre-skilling — across the whole organization. A company-wide education program is the first step to countering risk. Every employee should have a baseline of education on what the tools are useful for, the guardrails around them and the data policies in place to avoid issues like data exfiltration and vulnerabilities in insecurely sourced software.

Establishing security: Data vs. platform

Platform controls and security policies should be driven by the classification of the AI system and the type of data it is ingesting. Security is absolutely paramount, and if you're implementing on a use-case-by-use-case basis, you can create well-aligned, specific policies to ensure secure adoption.

You can't have AI without data; data is the lifeblood of AI. This makes data quality categorically critical to the integrity and success of generative AI. At HCLTech, we've worked across thousands of AI implementations, and we find repeatedly that it is the data and its use, rather than the platform, that introduces risk.

The devil’s in the data

Not all data needs to be available to everyone. It should be segmented and accessible only to those who need it. This is an important guardrail to put in place at the beginning. Controls against overexposure of data are key — including policies, procedures, mechanisms and governance strategies. In the case of securing against internal bad actors, this is a simple but critical step to take.
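
As a simple illustration of that guardrail, the sketch below gates datasets by classification and role before any records reach an AI workload. The classifications, roles and dataset names are hypothetical; in practice this maps onto your identity and access management tooling.

# Illustrative sketch: segment data by classification and release it only to roles that
# need it, before anything is passed to an AI pipeline. All names here are hypothetical.
from dataclasses import dataclass

ALLOWED_ROLES = {
    "public":       {"analyst", "engineer", "support", "admin"},
    "internal":     {"engineer", "support", "admin"},
    "confidential": {"admin"},
}

@dataclass(frozen=True)
class Dataset:
    name: str
    classification: str  # "public" | "internal" | "confidential"

def can_access(role: str, dataset: Dataset) -> bool:
    """True only if the role is explicitly allowed to see this classification."""
    return role in ALLOWED_ROLES.get(dataset.classification, set())

def load_for_ai(role: str, dataset: Dataset) -> list[dict]:
    """Fetch records for an AI use case, refusing over-exposed data up front."""
    if not can_access(role, dataset):
        raise PermissionError(f"Role '{role}' may not feed '{dataset.name}' to an AI workload")
    return []  # fetch records from your governed data store here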

In terms of quality, the use-case-by-use-case approach ensures you can thoroughly cleanse and maintain the specific datasets that are introduced to the AI. Training the model on quality data is not only important to the accuracy of results, it's important to overcoming the ethical question of bias. Identify potential sources of bias, set rules and policies that mitigate these sources, and consider augmenting owned data with high-quality, representative supplementary data.
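
One small, practical check along these lines is to measure how well each group is represented in a dataset before it is used to ground or fine-tune a model. The sketch below is an assumption-laden example: the grouping field and the 10% threshold are placeholders you would replace with whatever dimensions matter for your use case.

# Illustrative sketch: flag under-represented groups in a dataset before it is used
# with an AI model. The grouping field and threshold are assumptions, not guidance.
from collections import Counter

def representation_report(records: list[dict], group_field: str = "region",
                          min_share: float = 0.10) -> dict[str, float]:
    """Return each group's share of the data and warn when any falls below min_share."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values()) or 1
    shares = {group: n / total for group, n in counts.items()}
    for group, share in shares.items():
        if share < min_share:
            print(f"warning: '{group}' is only {share:.1%} of the data; "
                  "consider supplementing with representative data")
    return shares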

Trust the platform

While the surge in interest in and utilization of generative AI is relatively new, the underlying tools have been around for a while — and, as a foundational layer for AI applications, they're tested and trusted. Concerns around security are only becoming pronounced now that the tools are being more widely adopted.

Consider the Microsoft 365 E5 platform, which combines best-in-class productivity apps with advanced security, compliance and analytical capabilities. Security features are baked into the fabric of the platform, which helps to extend identity and threat protection with integrated and automated security, and brings information protection and advanced compliance capabilities together to protect and govern data, while reducing risk.

Any Microsoft product your organization uses has security built into the platform. Often, organizations already have these capabilities within the existing tech stack, so think about how to better utilize existing subscriptions and licenses.

A word on regulation

Regulations around AI are evolving at pace across the world. This has led to understandable concern from businesses looking to implement the technology, wary of getting it wrong. When we study these regulations, they're not completely new, but they are multi-layered. The enterprise now needs frameworks in place to protect the data, protect the AI model, protect against AI risk, and then use AI to protect the enterprise.

The data layer

You always have to apply the traditional controls with respect to data security: encryption, access management, anonymization and data minimization. These are existing controls that every organization should have. They remain in place and maintain the foundational security of the AI engine.
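
To illustrate the minimization and anonymization side of those controls, here is a brief sketch that strips records down to the fields a use case actually needs and pseudonymizes the customer key before anything is sent to an AI service. Field names and the salt handling are simplified assumptions; encryption and access management would be applied by the platform around this step.

# Illustrative sketch: data minimization plus simple pseudonymization before records are
# shared with an AI service. Field names are hypothetical; in production the salt would be
# managed as a secret, with encryption and access management applied around this step.
import hashlib

REQUIRED_FIELDS = {"case_id", "issue_text", "product"}  # keep only what the use case needs

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted hash so records can still be joined downstream."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimize(record: dict, salt: str) -> dict:
    """Drop everything the use case doesn't need and pseudonymize the customer key."""
    cleaned = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if "customer_id" in record:
        cleaned["customer_ref"] = pseudonymize(str(record["customer_id"]), salt)
    return cleaned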

The base layer

New regulations reflect the need to secure your base layer, and to do so as part of a considered framework that includes governance, training and assessment. Another part is how your base product of NLP algorithms is secured and, for that level of security, how you've mapped it to new regulations and controls to protect the AI models.

The algorithm layer

Your algorithm layer sits on top of your data and comprises natural language processing (NLP), models and policies. How you protect this part of your infrastructure is now an important part of cybersecurity and the protection of your IP.

The architecture layer

When it comes to the application architecture, it is important to determine where to monitor and how to govern data from the start of your transformation journey through to the end — and then to ensure regular monitoring once the application is live.

The same rules still exist. When your policies are in place, any new technology adoption requires training and education. This is a cultural and people change as much as a digital one. However, if you treat new regulations as guides rather than hindrances, you will be able to accelerate adoption.

Everyone’s a CISO

As people across the business are trained in AI, and in using it securely and responsibly, everyone’s role becomes more data and security focused. With that evolution, the role of the chief information security officer (CISO) changes from a protective to an enabling one. The challenge for the CISO, then, is that they need to deliver more — more training, more education, more tools — and do so faster.

With the acceleration of AI, there are practical areas where the CISO can do exactly that. One example is plugging the skills gap: human talent is scarce at the junior analyst level, but AI can now fulfill that role very efficiently. So, with a smaller and less trained workforce, CISOs can achieve more using AI. The loop is closed, and AI begins to protect the organization.

Take secure steps to scale AI

At HCLTech, we take a pragmatic approach to ensuring critical, continuous data security optimization, using a defined framework to identify your most valuable use cases and determine which will do the most good. By fusing our capabilities with best-of-suite Microsoft technologies, we can help you embed AI throughout your organization and manage the data as well as the cultural change that must go with it.
