Despite the recent warnings, a regulated AI future should bring optimism

Dr Geoffrey Hinton, considered the ‘godfather’ of AI, has recently warned of the dangers that lie ahead, but there are reasons to be optimistic when it comes to AI
 
Jaydeep Saha
Global Reporter, HCLTech
9.4 min. read

Recent high-profile warnings have shone a light on the potential dangers of artificial intelligence (AI). US President Joe Biden has met the leaders of Google and Microsoft to discuss the issue, Google’s AI “godfather” warned about it in his departure note and thousands of dignitaries signed an open letter in March, warning that the race to develop AI systems is out of control.

In a New York Times article, Dr Geoffrey Hinton was particularly worried about “bad actors” who would try to use AI for “bad things”.

He said: “You can imagine, for example, some bad actors…decided to give robots the ability to create their own sub-goals. This eventually might create sub-goals like ‘I need to get more power’. The kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems. With digital systems, all these copies can learn separately but share their knowledge instantly. So, it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

“As Dr Hinton rightly points out in his leaving comments, the speed at which AI is being utilized is rapidly accelerating. However, how or who should regulate, and at what level, is being left to governments that are not known for their expertise. Biased AI already exists and during various iterations of ChatGPT, they have had to ‘edit’ source material and retrain models. This is because of the personal ethics of the people concerned,” says Phil Hermsen, Solutions Director, Data Science & AI, at HCLTech.

He continues: “However, should this always be the case? What regulations can be put into place to ensure that AI remains untainted? While I take the optimist view that AI has mainly been used for good, even 1% of its use for nefarious means could result in financial loss, scare mongering or even war. This brings up the next question, you may regulate AI in one country, but another country takes a different view. So, should the United Nations be doing something?”

How governments across the world are dealing with it

When it comes to government action, India is not considering any law to curb AI growth, but a recent White House invitation sent to the chief executives of Google, Microsoft, OpenAI and Anthropic noted President Joe Biden’s “expectation that companies like yours must make sure their products are safe before making them available to the public”.

At the meeting, which included a “frank and constructive discussion” on the need for companies to be more transparent with policymakers about their AI systems, Biden told the CEOs that they must mitigate the current and potential risks AI poses to individuals, society and national security. The White House added that the meeting also covered the importance of evaluating the safety of such products and the need to protect them from malicious attacks.

US Vice President Kamala Harris later said in a statement that AI has the potential to improve lives but could pose safety, privacy and civil rights concerns. She told the CEOs that they have a “legal responsibility” to ensure the safety of their AI products, and that the administration is open to advancing new regulations and supporting new legislation on AI.

After the meeting, in response to a question on regulations, OpenAI chief Sam Altman told reporters: “We’re surprisingly on the same page on what needs to happen.”

In addition, according to Reuters, the Biden administration announced a $140 million investment from the National Science Foundation to launch seven new AI research institutes and released policy guidance on the use of AI by the federal government.

Earlier, the US also reintroduced the Algorithmic Accountability Act, which requires companies to conduct assessments of high-risk automated systems that involve personal information or make decisions that affect consumers’ lives. The bill aims to prevent algorithmic bias and discrimination. Indeed, the main concerns about fast-growing AI include privacy violations, bias and worries that it could proliferate scams and misinformation.

Public sector organizations in Canada, Italy, China, the EU and the UK are also looking at regulating AI, but they are at different stages of this journey.

In April, Italy took ChatGPT offline to examine a potential breach of personal information rules. Although the ban was later lifted, the Italian move inspired fellow European privacy regulators to launch their own investigations.

Besides setting rules on how algorithms can operate, China has been implementing new regulations to restrict the creation of ‘deepfakes’: media generated or edited by AI software. The Cyberspace Administration of China began enforcing the regulation to curb one of the most explosive and controversial areas of AI advancement.

The EU’s proposed Artificial Intelligence (AI) Act aims to improve regulations on the development and use of AI. In an article, the World Economic Forum (WEF) stated that the AI Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

“[AI] has been around for decades but has reached new capacities fueled by computing power,” said Thierry Breton, the EU’s Commissioner for Internal Market, in a statement.

The UK’s Competition and Markets Authority (CMA) recently said it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed on technologies such as OpenAI’s ChatGPT.

“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information,” CMA CEO Sarah Cardell told Reuters. She added that AI had burst into the public consciousness and was developing at speed; because of this, the competition regulator would start by seeking to understand how foundation models, which use large amounts of unlabeled data, are developing.

“The EU’s Digital Markets Act that came fully into force this week does not cover generative AI and the CMA no doubt sees this as an opportunity to be leading the global debate on these issues - along with the US FTC which is already looking at the area,” Linklaters lawyer Verity Egerton-Doyle told Reuters. “The review would give Britain’s competition regulator the chance to join the debate,” she added.

Ultimately, regulation and internal governance need to catch up with the rapid rise of AI. There is significant appetite for this, and the required rules and governance frameworks will eventually arrive.

This will require significant public and private sector collaboration, and organizations must prioritize a culture of responsibility in the development and rollout of the technology. In this kind of environment, the future of an AI-enabled society should be viewed with optimism, as it will support the responsible use of AI.

Of course, as with every new technology there are positive and negative impacts, but it’s up to policymakers, regulators, business leaders and innovators to take decisive action and ensure AI is developed as a force for good.
