The US is continuing to look at new ways to address the evolving threats in the digital arena and intends to focus on trends that President Biden has called critical to shaping a “decisive decade for the world”. The revolution created by generative AI is one of these trends, and the US Department of Homeland Security (DHS) has recently unveiled the department’s first-ever AI Task Force in response.
In addition to the AI Task Force, other agencies and regulatory bodies—including the White House itself—are turning their gaze towards AI and how to regulate it responsibly to ensure it is used safely.
In a recent meeting with the CEOs of tech companies including Google, Microsoft, OpenAI and Anthropic, Vice President Kamala Harris told the leaders that they have a “moral” obligation to keep their products safe.
“AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges,” VP Harris said. “At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy.”
One of the first AI safety bills was recently introduced in the US Congress to create a separate AI task force to identify risks to civil liberties and privacy. US Central Command (CENTCOM) also recently hired a new AI expert to advise on cyber warfighting.
Experts agree that generative AI can open new doors for organizations, but that proper regulation and oversight will be needed along the way to maintain ethical and security standards.
These recent moves by entities in the US indicate a change in response to AI, as public concerns around the technology mount.
The AI Task Force’s mission
DHS Secretary Alejandro Mayorkas announced the formation of the AI Task Force during a Council on Foreign Relations event. The group will focus on combating the negative repercussions of AI technologies and analyzing the adverse impacts of generative AI systems, like ChatGPT.
“The profound evolution in the homeland security threat environment—changing at a pace faster than ever before—has required our Department of Homeland Security to evolve along with it,” said Mayorkas during the event.
DHS has highlighted three linchpins of the task force’s mission: integrating AI into supply chain and border trade management, countering the flow of fentanyl into the US, and applying AI to digital forensic tools to combat child exploitation and abuse.
The newly minted AI Task Force will have several responsibilities and drive specific applications of AI to advance homeland security missions. These will include:
- Integrating AI into US efforts to enhance the integrity of supply chains and the broader trade environment and deploying AI to more ably screen cargo, identify the importation of goods produced with forced labor and manage risk.
- Leveraging AI to counter the flow of fentanyl into the US, better detect fentanyl shipments, identify and interdict the flow of precursor chemicals around the world and disrupt key criminal networks.
- Applying AI to digital forensic tools to help identify, locate and rescue victims of online child sexual exploitation and abuse, and to identify and apprehend the perpetrators of these acts.
- Working with partners in government, industry and academia to assess the impact of AI on the US’ ability to secure critical infrastructure.
Additionally, within 60 days of establishment, the AI Task Force will submit a roadmap of milestones to achieve.
More US moves on AI
While Secretary Mayorkas said that AI was still in its “nascent stages”, other recent moves indicate that the US would prefer to be ahead of the curve on the emerging technology.
Recently, CENTCOM hired former Google AI Cloud Director Dr. Andrew Moore to serve as the first-ever CENTCOM Advisor on AI, robotics, cloud computing and data analytics. He will advise CENTCOM leaders on the application of AI, machine learning, robotics and network architecture to CENTCOM’s missions in the Middle East, Levant, and Central and South Asian states.
CENTCOM prioritizes digital transformation, according to its CTO Schuyler Moore, and wants to add the requisite talent to guide the command in combating growing AI threats.
At the legislative level, Democratic Colorado Sen. Michael Bennet introduced legislation to create a congressional AI task force to address concerns around AI, with a priority on youth safety. The task force would be made up of government experts from the Defense Department, the National Institute of Standards and Technology and other agencies, who could identify risks and reduce potential civil liberties and privacy harms from AI.
Further, the task force would look for gaps in current AI regulation and move to quickly recommend new policies to remediate those gaps. In a letter Sen. Bennet wrote to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, he highlighted the potential harm of generative AI to young users.
“Few recent technologies have captured the public’s attention like generative AI,” wrote the senator. “The technology is a testament to American innovation, and we should welcome its potential benefits to our economy and society.”
He added, however, that “responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm”.
Overkill on generative AI?
Generative AI can create many new opportunities and has already had a significant impact on the future of business and digital experience, but regulation is still needed to keep generative AI tools and AI-enabled systems in check.
Vulnerabilities in AI and machine learning systems should be front of mind for decision-makers at organizations before they deploy AI-enabled systems.
Phil Hermsen, Solutions Director, Data Science & AI at HCLTech, said that “the vulnerabilities attached to machine learning must be understood before making any informed decision on risks and investments, because flaws within an ML model make the situation even more complicated and are being exploited by cybercriminals and owners of cloud platforms.”
Through its Dynamic Cybersecurity offering, HCLTech helps secure its customers using AI and robotic process automation. The offering is a framework of governance and continual assessment that enables an adaptive, evolving cyber posture and leverages the best available technologies. HCLTech’s expertise and experience in AI and ML point to cybersecurity as a major next step in the AI field.
While generative AI continues its meteoric ascent into the public consciousness, regulators and government bodies are beginning to prepare for the future of AI and how it will impact all our lives. Hermsen says that trustworthy AI systems, policies, governance, traceability, algorithms and security protocols are all needed.