How Can We Regulate AI?

By Mehul Bhushan

In the fast-paced world of AI development, calls for regulating AI systems in the US are at fever pitch.

AI systems have revolutionized sectors from law and advertising to healthcare. AI decision-making has enabled, for instance, more accurate and rapid detection of diseases such as cancer in patients. However, there are also concerns about the reliability of AI decision-making without sufficient human oversight. Other issues, such as job displacement, algorithmic bias, and national security risks, including the potential misuse of AI for building bioweapons, have also become central topics in public discourse. This has led to increased calls for AI regulation.

Regulators across the globe have intensified their attempts to design policies that regulate AI in response to such controversies. On October 30, 2023, the United States released a major policy document: the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO). The EO marks a significant step forward in governmental attempts to build policies in the US that will help regulate the development and use of AI. A central goal of President Biden’s EO is that AI systems be developed and used responsibly, for example by protecting citizens’ data privacy and preventing discrimination resulting from autonomous AI decision-making.

A focus on such ethical concerns shows that the EO’s scope is expansive. Its sections range from the safety and security of the nation, its citizens, and organizations to protecting citizens’ civil rights and promoting AI innovation. The EO also calls for a government-wide effort, involving multiple agencies, to build policies and guidelines to regulate AI.

Yet the way the document distributes requirements across the various governmental agencies presents challenges to the timely implementation of policy guidelines and their alignment with societal values. For instance, the two policy areas that have received the most time-bound attention are mitigating national security risks and attracting AI talent to the federal government; both carry the most concrete deadlines, with requirements due in 2024. By contrast, policy areas such as the protection of citizens’ civil rights have far fewer concrete deadlines.

The EO also takes a risk-focused approach to regulating AI. Such a risk-oriented approach is not new: the European Union’s recently adopted AI legislation, the AI Act, likewise prioritizes the risks posed by AI systems. In the US, however, this risk-based approach to regulating AI has generated confusion among policymakers.

This can be seen in President Biden’s EO, which requires companies to report security tests when developing large AI models. The reporting obligation is triggered by the amount of computing power used to develop such models, with the threshold to be set by the Department of Commerce. However, the EO does not explain why computing power, rather than any other criterion, such as the use of citizens’ private data, was chosen as the trigger for reporting the security testing of AI models. This raises questions about data monitoring by big companies: those with considerable computational power can develop and report large AI models, while the ways such practices may further harm the privacy of citizens’ data go unexamined. Other proposals go further still, suggesting that only big technology companies should build large AI models because they have the resources to develop ‘reliable’ models.
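To make the compute-based trigger concrete, here is a minimal sketch of how such a reporting rule might be checked. It assumes the 10^26-operation interim threshold that the EO names pending the Department of Commerce’s own figure, and it uses the common rule of thumb that training compute is roughly 6 operations per model parameter per training token; the function names and the estimation method are illustrative assumptions, not anything prescribed by the EO or the Department of Commerce.

```python
# Illustrative sketch only: the 1e26 figure is the interim reporting threshold
# named in the EO; the 6 * N * D estimate is a common rule of thumb for training
# compute, not a method prescribed by the EO or the Department of Commerce.

REPORTING_THRESHOLD_OPS = 1e26  # interim threshold, subject to update by Commerce

def estimated_training_ops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 operations per parameter per token."""
    return 6.0 * n_parameters * n_tokens

def must_report(n_parameters: float, n_tokens: float) -> bool:
    """Does the estimated training compute cross the reporting threshold?"""
    return estimated_training_ops(n_parameters, n_tokens) >= REPORTING_THRESHOLD_OPS

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
# 6 * 1e12 * 1e13 = 6e25 operations, just under the 1e26 trigger.
print(must_report(1e12, 1e13))  # False
```

The sketch makes the article’s concern visible: the trigger depends only on model scale, so a smaller model trained extensively on citizens’ private data would never cross it and would escape the reporting requirement entirely.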

While confusion persists around how best to regulate AI, the US lags in developing more concrete AI policies. The European Union has already enacted two major pieces of legislation relevant to AI: the GDPR (General Data Protection Regulation) and the AI Act. The US has yet to adopt comprehensive federal data protection or AI policy guidelines. Further, while some federal agencies have disclosed their progress toward the requirements stated in the EO, others have not; little information is available to the public about how different agencies are working toward effective implementation of the policy guidelines described in the EO. There is, therefore, a need for the timely development of policy in the US to ensure that ethical and societal values, such as data privacy and freedom from algorithmic bias, are adequately addressed.