
5 Reasons Why: "AI" - A Double-edged Sword for Cybersecurity in Financial Services

 
Sudip Lahiri
Executive Vice President and Head, Financial Services, Europe
February 14, 2019

AI has been at the heart of many security debates in the financial services sector over the last couple of years. With more enterprises going ‘digital’, there is a consistent flow of information that can offer insights into transaction validity and security. So it’s no surprise that firms are investing heavily in AI, with its capability to quickly parse massive data streams, apply analytical models, and identify threats. In India alone, 36% of financial institutions have already invested in AI, while the global AI-based cyber security market is projected to reach a staggering USD 18.2 billion by 2023.

But despite the market being bullish, can AI in cyber security live up to all the hype? Will the technology, as it stands today, address emerging and evolving tactics from hackers? We look at five reasons why the answer isn’t that simple.


  1. AI may not always distinguish between ‘good’ and ‘wrongful’ access

    AI is designed to be value-agnostic, meaning it cannot distinguish between ethical transactions and subversive ones if all other conditions remain the same. Consider a user who digitally moves funds from London to Berlin to the Cayman Islands and back again, and somewhere along the way there is a sizeable dip in the tax owed. Such a transaction would likely be treated as “business as usual” by an AI engine. Scenarios like this involve complex judgments that reference jurisdictions and statutes; without human intervention to supply discretionary judgment and ethics, this is something AI will not be able to detect. The sketch below illustrates why.
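    To make the point concrete, here is a minimal, hypothetical sketch of a feature-based scorer. The feature names, thresholds, and values are illustrative and not any real fraud model: if the numbers an AI engine sees are identical, a routine payment and one leg of a tax-motivated round trip get the same verdict.

    ```python
    # Hypothetical sketch: a feature-based scorer only sees the numbers it is given.
    # The features, thresholds, and values below are illustrative, not a real model.

    def risk_score(txn: dict) -> float:
        """Toy risk score built purely from transaction features (0 = normal, 1 = risky)."""
        score = 0.0
        if txn["amount_gbp"] > 1_000_000:
            score += 0.4
        if txn["corridor_familiarity"] < 0.2:   # rarely used country pair
            score += 0.4
        if txn["account_age_days"] < 30:
            score += 0.2
        return score

    routine_payment = {
        "amount_gbp": 250_000, "corridor_familiarity": 0.9, "account_age_days": 2_400,
    }
    round_trip_leg = {  # one leg of a London -> Berlin -> Cayman Islands -> London loop
        "amount_gbp": 250_000, "corridor_familiarity": 0.9, "account_age_days": 2_400,
    }

    print(risk_score(routine_payment))  # 0.0 -> "business as usual"
    print(risk_score(round_trip_leg))   # 0.0 -> identical features, identical verdict
    ```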

  2. AI isn’t always as dynamic as you want it to be

    Increasingly, we are seeing a distinction being made between Robotic Process Automation (RPA) and AI. This is a line that shouldn’t be blurred when it comes to security issues. While certain repetitive tasks, like checking against protocol, verifying ownership, and ensuring the right balance mechanisms, can be completed by RPA, bolting an intelligence layer onto it isn’t the definitive solution you’re looking for.

    For several banks, using an intelligent system as the gatekeeper of cyber security can be problematic. A typical transaction will touch multiple devices and storage spaces, creating several opportunities for a breach. A static AI solution isn’t equipped to address all these threats holistically, especially considering the growing frequency of ‘zero-day’ attacks. In 2018, an overwhelming 76% of all cyber security attacks were zero-day in nature, stemming from completely unfamiliar sources.

  3. Far from replacing people, AI is powered by them

    While RPA could replace a certain amount of manual labor, AI is inherently dependent on human knowledge, skill sets, and forecasting capabilities. In other words, an AI-based security layer is only as smart as the humans that built it. Any gaps in developer knowledge, prejudices, cultural habits specific to a location, and other human failings are bound to creep in.

    This also means that, in order to make your cyber security solution truly sustainable, a proactive “maintenance & update” team must always be available. For global financial institutions, this requires an in-house workforce spanning geographies and domains.

  4. Foundational infrastructure remains a pain point

    When considering futuristic ideas like fully sentient AI engines or globally accessible blockchain technology, the current state of digital infrastructure is often overlooked. The truth is, it requires a significant amount of computational power and highly stable networks to process the kind of data that financial firms are looking at. Fed with complex analytical models and nuanced machine learning algorithms, and constantly updated with new data sets, AI could strain existing infrastructure and slow down processes. This, in turn, could lead to misses in threat detection, leaving vulnerabilities in your systems.

    Before looking to adopt a full-scale AI-based security architecture, it’s critical to re-examine existing systems. Integration must be seamless, given the high stakes involved in any risk mitigation & security reinforcement exercise in the financial services sector.

  5. A defeatist attitude isn’t the way forward

    Faced with these challenges, business leaders are often left feeling that it’s only a matter of time before they are hacked -- it’s all about ‘when’ and not ‘if’. Well, that’s where the battle is half-lost. While AI is not a magic bullet for contemporary cyber security complexities, some form of AI integration in financial enterprises is inevitable. Obviously, AI brings significant efficiencies in terms of manual effort, salary costs, timelines, and hardware requirements. These benefits will continue to grow as AI evolves.

    The key is to not fall for the ‘one-stop-solution’ hype and to look at incremental transformation. Digital financial majors, like PayPal, have remained immune to large-scale hacks by adopting a continuous testing approach. Instead of opting for a static, predefined authentication model, PayPal combines human checks and balances with automated verification to ensure each transaction made on its platform is valid, along the lines of the sketch below.
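    The hybrid model can be thought of as a simple routing decision: automated scoring handles the clear-cut cases, and anything ambiguous goes to a human review queue. The following is a minimal, hypothetical sketch under assumed thresholds and names; it is not PayPal’s actual mechanism.

    ```python
    # Hypothetical sketch of hybrid verification: automated checks decide the
    # obvious cases, humans handle the ambiguous middle. Thresholds, names, and
    # risk scores are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        txn_id: str
        amount_gbp: float
        automated_risk: float  # 0.0 (clearly fine) .. 1.0 (clearly fraudulent)

    def route(txn: Transaction) -> str:
        """Decide how a transaction is verified."""
        if txn.automated_risk < 0.2:
            return "auto-approve"   # automated checks are sufficient
        if txn.automated_risk > 0.8:
            return "auto-block"     # obviously bad, stop it immediately
        return "human-review"       # ambiguous: human checks and balances

    for txn in [
        Transaction("T1", 120.00, automated_risk=0.05),
        Transaction("T2", 48_000.00, automated_risk=0.55),
        Transaction("T3", 9_900.00, automated_risk=0.92),
    ]:
        print(txn.txn_id, route(txn))
    # T1 auto-approve, T2 human-review, T3 auto-block
    ```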

A Final Word

Clearly, there are two sides to this conversation. According to a report by Webroot, AI is used by nearly 87% of cyber security professionals in the US. On the other hand, 91% of security professionals fear that AI will inspire even more sophisticated attacks. By choosing a dynamic innovation approach based on a service architecture with tailored security modules designed specifically for each enterprise, it’s possible to mitigate these risks. Simply put, in a terrain so complex, the answer must be individualized and in sync with each unit, location, and organizational touch point.

We are hosting a Roundtable that discusses all of this in greater detail, bringing together a think tank of cyber security leaders from across the world. Watch this space for more insights and updates from HCLTech’s Straighttalk Cyber Security Roundtable, held in collaboration with The Hague Security Delta and scheduled for February 14, 2019, at the Louwman Museum, The Hague.
