Challenges of Generative AI - Short & Long Term

The challenges attached to generative artificial intelligence and other AI tools

Generative artificial intelligence presents several short- and long-term challenges that organizations must address with regulations, trusted data and new policies
 
Jaydeep Saha
Global Reporter, HCLTech
10-minute read

There is an urgent need for rules, regulations and ethical guidelines related to the use of generative artificial intelligence (GenAI).

This article explores the challenges, risks and disadvantages related to rising GenAI adoption, both in the short and long term.

Short term

Concerns about ethical guidelines (or lack thereof): The widespread use of AI tools and GenAI systems is impacting the education system, where it is narrowing possible learning experiences, reducing human-to-human interactions and limiting learners’ autonomy by providing predetermined solutions.

What needs to be investigated here are the social-emotional aspects of learning, the impact on young learners’ intellect and IQ development and the possibility of exacerbating existing disparities in educational resources, which may deepen inequities and introduce biases.

The potential for manipulation and concerns about cognitive development and emotional wellbeing are increasingly prevalent as more sophisticated GenAI systems are introduced into education systems.

This is where ethical guidelines and regulations are required at executive levels to govern the content and training of large language models (LLMs), eliminating biases and discrimination and addressing the as-yet-unknown impact on students.

“An explainable AI describes how the feature values are related to the model prediction. This helps users understand why the model is behaving in a certain way for a particular prediction,” says Dr. Naveen Kumar Malik, Associate Vice President, HCL SW-CTO Office-Strategy, at HCLTech.
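
How feature values relate to a prediction can be surfaced in several ways. As a minimal sketch of one such technique (permutation importance in scikit-learn, not HCLTech’s implementation; the dataset and model are placeholders), the idea is to shuffle each feature and watch how much the model’s accuracy drops:

```python
# A hedged, illustrative example of explainability via permutation importance.
# The dataset and model are placeholders, not a production system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy; a
# large drop means predictions depend heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Features with the largest scores are the ones the model leans on most, a useful first step toward explaining why it behaves a certain way for a given prediction.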

Permissible or non-permissible: From copyright issues to data privacy and safety of content, the use of content without consent in AI models raises multiple immediate concerns.

This is amplified by GenAI, with AI tools widening the horizon of risks related to the large amounts of text, sound, code and images scraped from the internet.

The training data for GenAI models is often gathered without the owner’s permission, which creates challenges around intellectual property rights.
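
There is no universal consent mechanism for training data today, but a crawler can at least honor a site’s published preferences before scraping. A minimal sketch using Python’s standard library (the bot name and URLs are hypothetical):

```python
# Check a site's robots.txt before collecting content for training data.
# The user agent and URLs below are hypothetical placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Only fetch the page if robots.txt permits this agent to crawl it.
if rp.can_fetch("ExampleTrainingBot", "https://example.com/article"):
    print("Crawling permitted by robots.txt")
else:
    print("Crawling disallowed; skip this source")
```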

“The basic idea of AI fairness is that the model should not discriminate between different individuals or groups from the protected attribute class. Protected attributes (for example race, gender, age, religion) are input features that should not, on their own, affect the decision making of the models. In the context of fairness, the concern stays mainly with unwanted bias that places privileged groups at a systematic advantage and unprivileged groups at a systematic disadvantage,” adds Dr. Malik.
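
One common way to quantify the systematic advantage Dr. Malik describes is to compare the model’s selection rates across groups of a protected attribute. A minimal sketch on synthetic data (the rates and the 0.8 rule of thumb are illustrative conventions, not an HCLTech metric):

```python
# Demographic-parity check on synthetic decisions: compare selection rates
# between two groups of a protected attribute.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                       # protected attribute
pred = rng.random(1000) < np.where(group == 1, 0.55, 0.40)  # synthetically biased

rate_0 = pred[group == 0].mean()  # selection rate, unprivileged group
rate_1 = pred[group == 1].mean()  # selection rate, privileged group

# Disparate impact ratio: values well below 1.0 (commonly < 0.8) flag a
# systematic disadvantage for one group.
print(f"group 0 rate: {rate_0:.2f}, group 1 rate: {rate_1:.2f}")
print(f"disparate impact ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
```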

Viral content: Without guardrails, discriminatory and unacceptable material is often the source of viral content that looks real, accurate and convincing, even though its creator knows very well that it is offensive and/or unethical.

Biased material generated by GenAI models has the potential to trigger national-level threats such as provoking antisocial elements, terrorism, hate speech and radicalization. This is where effective monitoring, ethical guidelines and strict regulations need to come into effect, drawing a clear line between what’s right and what’s wrong.

“Since these models are increasingly being used in making important decisions that affect human lives, it is important to ensure that the prediction is not biased toward any protected attribute such as race, gender, age or marital status,” adds Dr. Malik.

Deepfakes and real-time voice cloning: Remember Tom Cruise, Keanu Reeves and Tom Hanks all crying foul and appearing live to tell people not to believe the videos that had gone viral? Yes, those were deepfakes. Powered by GenAI and refined through thousands of alterations, these videos often leave people confused and uncertain whether they are real or fake.

With real-time voice cloning, what a person is actually saying can be manipulated to convince listeners and audiences.

Thankfully, political advertisers will now have to flag any use of AI or digital manipulation in advertisements on Facebook and Instagram: from January 2024, advertisements related to politics, elections or social issues must declare any digitally altered image or video. Meta said this goes a step further in tackling deepfakes.

Cybersecurity: GenAI is a double-edged sword and cybercriminals know it better than anyone else. While GenAI brings in automation across various levels in multiple sectors, criminals looking for one small gap can use it in their favor — uncovering vulnerabilities faster and unleashing evolving malware that works in real time.

However, CISOs managing such an environment are starting to use GenAI in their own favor, implementing an infrastructure rooted in zero trust that covers data classification, encryption, storage and transmission.
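
As a minimal sketch of the encryption link in such a chain (assuming the open-source cryptography package; data classification, key rotation and vault storage are elided), a record marked sensitive can be encrypted before storage or transmission:

```python
# Symmetric encryption of classified data at rest, using Fernet from the
# third-party `cryptography` package. Key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, fetched from a key vault
cipher = Fernet(key)

record = b"customer_id=42;classification=confidential"
token = cipher.encrypt(record)  # ciphertext is safe to store or transmit

assert cipher.decrypt(token) == record
print("ciphertext:", token[:32], "...")
```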

By augmenting a human-led cybersecurity team’s capabilities, GenAI brings in the automation that frees up the workforce. Automated threat detection is an emerging trend that, powered by GenAI, is transforming the approach to cybersecurity.

Because GenAI learns continuously from training data on past incidents and from threat intelligence feeds, it is now being used to prevent and detect anomalies, raise alarms, predict future threats and identify vulnerabilities, reducing both the time to respond to attacks and the potential damage to an organization. It is also used for incident reporting and for sharing threat data in real time, supporting timely decisions and actionable insights.
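
A hedged sketch of what such automated anomaly detection can look like, using an Isolation Forest over synthetic session telemetry (the features and values are invented for the example, not a production pipeline):

```python
# Flag anomalous sessions with an Isolation Forest. Columns stand in for
# requests per minute and bytes transferred; all values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60, 500], scale=[10, 80], size=(500, 2))
attack = rng.normal(loc=[400, 9000], scale=[50, 500], size=(5, 2))
X = np.vstack([normal, attack])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)  # -1 flags an anomaly, 1 is normal

print(f"flagged {int((labels == -1).sum())} suspicious sessions")
```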

“The primary goals of AI model security at HCLTech are to prevent attackers from degrading AI models and their functionality, protect the confidentiality of sensitive training data used to build the models and stop attackers from interfering with the normal operation of AI models,” says Dr. Malik.

Long-term workforce concerns

According to a Goldman Sachs report, GenAI could amplify global GDP by 7% and double productivity growth over the next decade. Yet a recent BCG survey of 2,000 global executives, based on BCG’s Digital Acceleration Index, found that 52% still discourage GenAI adoption because they do not fully understand it, and 37% are only experimenting with GenAI and have no policies in place to prepare their workforce.

At the recent AI summit hosted by UK Prime Minister Rishi Sunak, however, billionaire tech leader Elon Musk said: “It’s hard to say exactly what that moment is, but there will come a point where no job is needed. You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.”

Perhaps a lot needs to be addressed before Musk’s fears (of an existential threat to humanity and of human work no longer being required) come true.

Among the concern areas the BCG survey highlighted about generative artificial intelligence are limited traceability of sources, factually wrong decisions, compromised privacy of personal data, increased risk of data breaches and unreproducible outcomes.

The survey also highlighted an unclear roadmap and investment priorities, no strategy for responsible GenAI, a lack of talent and skills and unclear responsibility in the C-suite as challenges to the long-term adoption of GenAI.

Until these long-term issues are addressed at a global level by governments and by organizations building advanced AI models and tools, GenAI adoption should be monitored and investigated at a local level, ensuring that ethics, rules and regulations are not broken in the name of entertainment or the spread of false information.

From an HCLTech perspective, an ethical AI system should be:

Inclusive: It must be unbiased and fair, and work equally well across all spectra of society. This requires full knowledge of each data source used to train the AI models, to ensure there is no inherent bias in the data set, along with bias mitigation techniques to remove unfairness from the model. It also requires a careful audit of the trained model to filter out any problematic attributes learned in the process.

Explainable: The source training data, the resulting data, what the algorithms do and why they do it can all be explained. When an AI system goes awry, the failure can be traced through a complex chain of algorithmic systems and data processes to find out why.

Positive purpose: An AI system endowed with a positive purpose aims to reduce fraud, eliminate waste, reward people, slow climate change, cure disease and so on. Safeguards that prevent AI from being exploited for bad purposes are a must.

Data security: Responsible collection, management and use of data are essential. Data should only be collected when needed, not continuously, and the granularity of data should be as narrow as possible, as the sketch below illustrates.
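
As a closing sketch of that data-minimization principle (the field names and helper below are hypothetical), a collection pipeline can drop unneeded attributes and pseudonymize direct identifiers before anything is stored:

```python
# Keep only the fields a task needs and pseudonymize the direct identifier.
import hashlib

def minimize(record: dict, needed_fields: set) -> dict:
    slim = {k: v for k, v in record.items() if k in needed_fields}
    if "email" in slim:
        # A one-way hash replaces the raw identifier with coarser data.
        slim["email"] = hashlib.sha256(slim["email"].encode()).hexdigest()[:12]
    return slim

raw = {"email": "user@example.com", "age": 34,
       "browsing_history": ["..."], "purchase_total": 120.5}
print(minimize(raw, {"email", "purchase_total"}))
```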
