During a panel discussion at HCLTech's pavilion at the World Economic Forum in Davos, industry leaders from HCLTech, Google Research and BNY shared their perspectives on the challenges and opportunities of scaling AI responsibly.
Alan Flower, Executive Vice President and Head of AI & Cloud Native Labs at HCLTech, began by explaining that the fundamental aspect of responsible AI is "trustworthiness."
He said: “For me, it is the most important thing and for our client as a consumer of AI to enhance business or service, they need that trust as a provider of that service; trust that the solution is going to enhance my business, keep me out of trouble if I am the operator of that solution and improve over time.”
Yossi Matias, Vice President and Head of Google Research at Google, homed in on the importance of integrating responsible AI development throughout the entire research process.
“When we talk about responsibility, or responsible AI, obviously there are multiple aspects. First, it's about how to build AI in the right way,” he said, emphasizing the importance of addressing potential issues such as bias and factuality from the outset, rather than treating them as afterthoughts.
The panellists highlighted numerous examples of how AI can be deployed responsibly to tackle global challenges. Matias discussed how AI-powered models have been used to classify diabetic retinopathy, a leading cause of blindness, with accuracy rivalling that of human experts. He also mentioned the potential of AI to personalize education and address the climate crisis through improved flood prediction and carbon emission reduction.
Jayee Koffey, Global Head of Enterprise Execution and Chief Corporate Affairs Officer at BNY, also underscored the critical role of trust in the financial services industry.
"We have the privilege of responsibility, of looking after in one shape or form over $50 trillion of assets for clients all around the world," she said.
To foster this trust, BNY has developed an internal AI platform called Eliza, which aims to make AI more accessible and relatable for non-technical employees.
"This enables a significant number of our employees, many who are non-engineers, to experiment and integrate the power of AI into their day-to-day working lives, allowing them to use AI for their own professional engagement, productivity and potential," she said.
Operational challenges
The panellists also discussed the operational challenges of scaling AI responsibly.
Flower highlighted the shift from “static IT” to “organic IT,” referring to the dynamic nature of AI environments.
“It’s crucial for IT organizations to adapt to a new way of running solutions based on ever-changing models and ensuring that a simple thing like a model upgrade does not introduce errors that ripple throughout an entire organization,” he said.
Matias acknowledged the issue of AI hallucinations, the generation of inaccurate or contradictory content, but noted that there are applications where this can be useful, such as creative work or scientific discovery. He also expressed optimism about the progress being made in techniques to improve grounding and factuality, mentioning Google's recently published FACTS leaderboard, which provides a benchmark for evaluating language models’ factual accuracy.
Regulation promoting responsible AI
The role of regulation in promoting responsible AI was a topic of discussion. Koffey emphasized the importance of “good, transparent, adaptable regulations” to control and promote responsibility, while acknowledging the need to balance innovation and regulation.
Matias added: “AI has become so powerful that it necessitates regulation, but this regulation must be carefully crafted to ensure we don't miss opportunities to solve societal problems ... We must balance the risks and benefits of AI, recognizing that AI itself can be the solution to address some of the risks.”
Flower highlighted the power of commercial incentives. “The carrot, the commercial incentives, will have a far greater impact on enterprise behavior than any amount of legislation.”
Doing the right thing
The panellists concluded the discussion by touching on the responsibility and desire of companies to ensure responsible AI use beyond their own organizations.
Matias emphasized the need for broad societal efforts and education to address this, while Flower referred to research co-authored by HCLTech and MIT, Implementing responsible AI in the generative age, that was launched during the World Economic Forum.
The report found that senior business leaders are increasingly recognizing the competitive advantage of responsible AI adoption.
“Our ability to do this right is going to bring real value to our organization. And I think that might be one of the real incentives that speeds up the rate of innovation, the opportunity to get a commercial advantage through doing AI responsibly,” said Flower.