President Biden’s expansive Executive Order on AI is the latest act in the global jockeying for the leadership position in this area. It sets out an ambitious Government-wide roadmap aimed at delivering responsible AI and reducing the risks of irresponsible development and use:
“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
Is this easier said than done? Well, we’ll find out in the fullness of time, but if you come across a well-built man with a European accent saying "hasta la vista, baby" or "I'll be back", you can deduce that the EO was a dismal failure.
The EO itself is based around eight principles, the first of which is that AI must be “safe and secure”. Cybersecurity is named among the most pressing security risks, but it also weaves through the others listed: biotech, critical infrastructure and other national security dangers. Thus, the EO confirms once again the pivotal place of cybersecurity within the idea of responsible AI.
Cybersecurity risks in AI
The EO doesn't break the cybersecurity risks down into all of their constituent parts, so if you want to understand them in more detail, take a look at the Cyber Security Body of Knowledge (CyBOK) that is linked to in the resources section of my website. In the most basic terms, however, they include AI being used to design and improve attacks, AI being used as part of the attack itself, and AI being the target of the attack.
Guidelines for development of safe, secure and trustworthy AI
Let’s look at some of the main elements of the EO that engage directly with the cybersecurity agenda:
Within 270 days, the Secretary of Commerce, acting through NIST and in coordination with the Secretary of Energy and Secretary of Homeland Security, must develop guidelines, standards and best practices for the development of safe, secure and trustworthy AI systems. The key objectives are to promote a consensus around industry standards; to develop standards and benchmarks for evaluations and audits; to establish procedures and processes for red-teaming exercises by AI developers; and to develop testbeds.
Defence and the protection of critical infrastructure
Rules are also being put in place for defence and the protection of critical infrastructure (“CI”):
The developers of potential dual-use foundation models will be required to report to the Federal Government on their activities, including ownership details and the results of red-teaming.
To prevent the infiltration of Infrastructure as a Service (IaaS) platforms by malicious foreign cyber actors, regulations will be developed for the reporting of certain foreign person transactions. The regulations will also prevent the resale of these products by foreign persons who do not conform to an identification and reporting regime.
The technical conditions of a large AI model that could have the potential for malicious cyber-enabled activity will be determined; until that determination is made, an interim computing-power threshold defined in the EO applies.
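For a sense of the scale involved, here is a back-of-envelope sketch of how a developer might check a training run against a compute threshold of this kind. The 10^26-operation figure reflects the interim reporting trigger stated in the EO; the 6 × parameters × tokens approximation for dense-transformer training cost is a common rule of thumb rather than anything the EO prescribes, and the model size and token count below are purely illustrative.

```python
# Back-of-envelope check of a training run against a compute threshold.
# The 1e26 figure is the EO's interim reporting trigger; the 6 * N * D
# approximation for dense-transformer training cost is a common rule of
# thumb, not an EO-prescribed formula.

INTERIM_THRESHOLD_OPS = 1e26

def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Roughly 6 floating-point operations per parameter per token."""
    return 6.0 * n_params * n_tokens

def must_report(n_params: float, n_tokens: float) -> bool:
    return estimated_training_ops(n_params, n_tokens) >= INTERIM_THRESHOLD_OPS

# An illustrative 70B-parameter model trained on 15T tokens:
ops = estimated_training_ops(70e9, 15e12)
print(f"{ops:.2e} ops -> must report: {must_report(70e9, 15e12)}")  # ~6.3e24, under 1e26
```

On this rough arithmetic, only the very largest training runs approach the trigger, which appears to be the point of setting an interim threshold at that level.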
Technical protections for CI
To advance the levels of technical protections against cybersecurity risks for CI:
Annual risk assessments will be performed by Government Agencies responsible for CI, to understand the cybersecurity risks related to the use of AI in the various CI sub-sectors.
A report on best practices for financial institutions will be produced.
The AI Risk Management Framework (NIST AI 100-1) will be incorporated into safety and security guidelines for CI owners and operators. This will lead to the issuance of Federal Government guidelines in due course.
An “AI Safety and Security Board” will be established, to act as an advisory committee for the Federal Government and the wider CI ecosystem.
Cyber warfare issues
For the purposes of cyber defences as they relate to national security systems (which engages the “cyberwar” field of study):
An operational pilot project will be undertaken to apply AI to the discovery and remediation of vulnerabilities in the Government’s software, systems and networks (a rough sketch of what such tooling might look like follows this list).
Guidelines for the safe release of Government data will be developed, reflecting the potential for such data to be used to build autonomous offensive cyber capabilities. This exercise will be supported by an inventory of the relevant data assets.
A “National Security Memorandum” will be developed, to address the governance of AI used for national security, military and intelligence purposes.
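Picking up the pilot project mentioned above, here is a minimal sketch of the kind of scaffolding such an effort might start from: a conventional scanner (Bandit, a real Python security linter) finds candidate issues, and a triage step, which a real pilot would presumably back with an AI model, ranks and explains them. The `ai_triage` stub and the `./src` path are hypothetical placeholders, not anything described in the EO.

```python
# Sketch of a vulnerability discovery-and-triage pipeline. Bandit is a
# real Python security linter (pip install bandit); the AI triage step
# is a hypothetical stub standing in for whatever model a pilot would use.
import json
import subprocess

def scan(path: str) -> list[dict]:
    """Run Bandit recursively over a codebase and return its JSON findings."""
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

def ai_triage(finding: dict) -> str:
    """Hypothetical AI step: a real pilot would rank exploitability and
    propose a remediation here, rather than just pointing at the line."""
    return f"review {finding['filename']}:{finding['line_number']}"

for finding in scan("./src"):
    print(finding["issue_text"], "->", ai_triage(finding))
```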
Broader economy issues with a cybersecurity golden thread
There are many other areas covered by the EO that have cybersecurity running through them, but which are not discussed as cybersecurity topics, such as the impacts for CBRN and biosecurity; personalised medicine; tackling climate change and the net zero agenda; supporting innovation and research, including intellectual property issues and increasing competition in the chip sector; attracting AI talent to the US economy; and the generation of synthetic online content. The latter topic clearly engages the cybercrime field of study, due to the very real concerns about the use of AI to create CSAM and deepfakes that can be used for social engineering purposes as part of cyber-enabled fraud. Part of the solution is labelling techniques, such as the application of inviolable watermarking to enable synthetic material to be distinguished from the non-synthetic.
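To illustrate the labelling idea in its simplest form, here is a sketch of a keyed, tamper-evident provenance tag attached to generated content. Real schemes, such as C2PA provenance manifests or statistical watermarks embedded by the model itself, are considerably more robust; the key and tag format below are illustrative assumptions, not anything the EO prescribes.

```python
# Minimal sketch of a tamper-evident "synthetic content" label: an HMAC
# binding the content hash to a provenance claim. Key and format are
# illustrative assumptions only.
import hashlib
import hmac

SIGNING_KEY = b"provenance-signing-key"  # hypothetical key held by the generator

def label_synthetic(content: bytes) -> bytes:
    """Return a tag binding the content to a 'synthetic' provenance claim."""
    message = b"synthetic:" + hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).digest()

def verify_label(content: bytes, tag: bytes) -> bool:
    """Check the tag; any alteration to content or tag fails verification."""
    return hmac.compare_digest(label_synthetic(content), tag)

image = b"...generated image bytes..."
tag = label_synthetic(image)
print(verify_label(image, tag))         # True
print(verify_label(image + b"x", tag))  # False: content was altered
```

The obvious weakness of a detached tag like this is that it can simply be stripped, which is why the EO's interest in watermarking embedded within the content itself matters.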
Rights and civil liberties
What else is in there?
Well, the equality, civil liberties, human rights, privacy, workplace and consumer protection agendas are addressed, as are broader economy issues, such as agriculture, transportation and housing. Here, cybersecurity is a condition of achieving the relevant goals, rather than a goal in itself, although it is specifically called out as a key goal in the context of the health and human services sector.
As far as the privacy agenda is concerned, notable features of the EO include:
Government agencies will produce data inventories of the “commercially available information” they hold that contains personally identifiable information.
Privacy Enhancing Technologies are promoted, supported by the creation of a “Research Coordination Network” and the encouragement of their use by government agencies (a small illustration of the concept follows this list).
Privacy Impact Assessments are promoted.
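As a small illustration of what a Privacy Enhancing Technology can look like in practice, here is the Laplace mechanism from differential privacy, which lets an agency publish an aggregate statistic while mathematically bounding what can be inferred about any individual record. The epsilon value and the records below are illustrative assumptions; the EO does not mandate any particular technique.

```python
# One concrete example of a Privacy Enhancing Technology: a counting
# query released under epsilon-differential privacy via the Laplace
# mechanism. Values are illustrative only.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) by inverse-transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """A counting query has sensitivity 1, so noise of scale 1/epsilon
    gives epsilon-differential privacy for the released figure."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 41, 29, 56, 62, 47]  # illustrative records
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count near 4
```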
Hmm, does that sound familiar?
Chief AI Officers
To support growth in the use of AI by Government, each Agency will be required to appoint a Chief AI Officer and adhere to a long list of requirements around governance and risk management. This means that the agencies will be required to address all of the discrete issues within the EO, such as red-teaming, labelling and watermarking, vendor risk management, talent management, and so on. A “Technology Modernisation Fund” will be established, which agencies can tap into.
The last CPO or DPO in the building should turn the lights out.
Global leadership
As part of the US’s stated ambition to be the global leader in AI, the plan includes the development of global nomenclature and terminology; best practices for data handling and use; verification and assurance of use; and wider risk management. As part of this effort, NIST will create a “Global Development Playbook” to cover these issues.
In case it’s not yet obvious, the EO is directed at the Federal Government, rather than creating rules for the private sector. Therefore, a question might be running through your mind about how effective this model is for influencing change beyond government. My view is that the answer will be found in the example of the NIST Cybersecurity Framework. This began as a Federal Government initiative, but its influence has been truly global and it cuts across all economic sectors. I imagine that the US has this experience in mind.
Perspectives on expertise
My final thoughts concern the issue of expertise, because responsible AI plainly needs substantive expertise for it to be delivered, which the EO recognises in many ways, such as in the passages on attracting talent to the US economy. My perspective is that sometimes it seems that everyone is claiming to be an AI expert these days, akin to what happened during the GDPR boom. What happens is that a new area emerges and there is a flight to it, with very few barriers to entry other than self-proclaimed expertise. This is something that we really do need to talk about and be alert to, because the risks of irresponsible AI seem to be of a different order of magnitude to the risks of other technology and data issues.
We cannot afford to mess this up and so I pray that only true expertise is applied to the problems and challenges before us and none of the hubris of self-proclaimed expertise.