With the introduction of Agentic AI, LLMs are becoming more deeply embedded within core business processes. HCLTech is developing several Agentic AI solutions to provide clients with autonomous systems capable of independent reasoning, decision making and collaboration, requiring minimal human oversight. These systems enhance productivity and operational efficiencies across multiple domains. LLMs provide a critical function to Agentic AI – they are the cognitive engines for these agents, providing reasoning, natural language understanding and decision-making capabilities. However, businesses must answer key questions to ensure these powerful yet potentially unpredictable tools' security, compliance and governance.
- Malicious actors can manipulate LLMs through carefully crafted prompts to bypass intended controls, access sensitive data or perform unauthorized actions
- Attackers could inject malicious data into the training dataset that can corrupt the model's behavior, leading to biased outputs, performance degradation or even the execution of harmful instructions
- Using pre-trained models from untrusted sources can potentially introduce the risk of hidden vulnerabilities or malicious code within the model
These scenarios aren't just hypothetical; they represent real and present dangers that businesses must confront. In addition, hallucinations, biases and inconsistencies in the output of LLMs can negatively impact business operations and customer experiences.
Google Cloud and HCLTech guide businesses to take a conscious and balanced approach to regulating LLMs to secure applications using Agentic AI.
Secure AI Framework (SAIF)
In 2023, Alphabet introduced the Secure AI Framework (SAIF), a guiding set of principles designed to help developers build AI systems with robust security guardrails. SAIF addresses security across all layers of AI, including the infrastructure, data, models and applications. The framework is built upon six core elements:
- Expand strong security foundations to the AI ecosystem - Leverage existing secure infrastructure and build internal expertise by adapting current security measures to defend against new AI-specific threats (like prompt injection) while also continuously scaling and evolving security strategies to keep pace with AI advancements and emerging threats.
- Extend detection and response to bring AI into an organization's threat universe - Proactively monitor AI systems for unusual activity and leverage threat intelligence to predict potential threats. This requires collaboration between trust and safety, threat intelligence and counter-abuse teams. It's about catching AI security problems early and responding effectively.
- Automate defenses to keep pace with existing and new threats - Use AI to enhance security incident response speed and scale. Since attackers will use AI to amplify their attacks, defenders must leverage AI to stay agile and cost-effective in protecting against them.
- Harmonize platform-level controls to ensure consistent security across the organization - Establish standardized security frameworks for AI risk management and apply consistent protections across all AI platforms and tools. This provides scalable and cost-efficient security for all AI applications. Examples include extending secure defaults to AI platforms and integrating security into the software development process.
- Adapt controls to adjust mitigations and create faster feedback loops for AI deployment - Continuously improve security through ongoing testing and adaptation to evolving threats. This involves using reinforcement learning based on incident data and user feedback to refine models, update training data and embed security directly into model-building software.
- Contextualize AI system risks in surrounding business processes - Before deployment, perform thorough risk assessments. This includes evaluating overall business risks like data lineage and validating AI performance through automated checks.
Businesses should consider adopting SAIF when implementing AI and Agentic AI. The framework provides key advantages:
- Establishes clear security standards for building and deploying AI applications responsibly
- Offers practical considerations for implementation
- Addresses security risks associated with AI systems to protect against attacks that can lead to incorrect decisions, system control compromises and data breaches
- Enables safe usage of AI with sensitive data
- Provides best practices for runtime data protection
- Robust governance and security controls cover all three key pillars - people, process and technology
HCLTech endorses the Google Cloud approach for Implementing SAIF
- Understand the AI use case: Is AI addressing a business problem? What data is required to train the model? Understanding the problem defines the scope and potential impact of AI. Knowing the data type, volume and source helps determine data security, privacy and ethical considerations and helps you identify the necessary policies, protocols and controls to be implemented with SAIF.
- Assemble a cross-functional team: When assembling the team working on AI development, include representatives from various domains—business, security, cloud engineering, risk and audit teams, development teams, privacy, legal and data scientists, to name a few. This will ensure a holistic and well-governed approach to AI development, addressing ethical considerations, potential risks, compliance requirements and alignment with business objectives.
- Level set by providing an AI primer: It is essential to ensure that everyone in the team, including the non-technical stakeholders, has a basic understanding of the AI model development lifecycle, including the design, logic, capabilities, merits and limitations of AI. This, in turn, promotes realistic expectations and increases the likelihood of successful AI implementation and value creation.
- Apply SAIF elements: Once you have identified the use cases and the team has been assembled and trained, you can apply SAIF principles to your AI development lifecycle. It is important to note that there is no prescribed order for using them; you could do so simultaneously to ensure that the AI system is deployed securely and responsibly.
Google Cloud Model Armor
Model Armor is a new capability in the Google Cloud Security Command Center used by enterprises to proactively protect against various threats in AI systems and implement elements of SAIF. It analyses user inputs, i.e., prompts and LLM responses, to detect and prevent prompt injection, harmful contents, data loss and malicious URLs. It is multicloud, multi-model and multi-LLM compatible, allowing organizations to integrate it into their AI development lifecycle. Integration with the Security Command Center offers centralized security policy management. The regional endpoints-based architecture offers low-latency operations.
Conclusion
Agentic AI is the new frontier and utilizes LLMs as the cognitive engine. This enables agentic agents to process and generate human-like text and makes them more interactive and adaptable. Unlike chatbots, which are limited to pre-set responses to queries, agentic agents have sophisticated learning abilities, can manage multi-step, higher-level tasks and can work with other agents. Taking advantage of the transformative business potential of agentic agents requires a focused approach to avoid serious security and governance risks like prompt injection, data poisoning and model integrity issues.
Organizations must be proactive and adopt responsible AI frameworks like Alphabet's SAIF to mitigate these risks. Tools like Model Armor can aid in the practical implementation of these frameworks. Using secure AI frameworks and implementing appropriate tools and safeguards are essential for ensuring agentic agents can deliver safe, reliable and responsible results.