Artificial Intelligence: Who Is in Charge?
Suffice it to say, it would be nearly impossible to find a place to hide or isolate yourself from the miracle (or debacle, depending on your point of view) called artificial intelligence (AI). I challenge you to find anything currently written or broadcast that does not refer or allude to AI, or derivations thereof. I find myself on the miracle side of things, but with a healthy dose of skepticism. Facing reality, it is a Pandora’s box that has been opened, a fait accompli that cannot be reversed. I have written extensively on the topic of AI, beginning with a trilogy on AI and its evolution, then on what humans can do that AI cannot, and most recently on the concerns about AI if left untethered, uncontrolled, and unregulated. This raises the question of who (or what) is “in charge” of AI from a regulatory point of view. In late 2023, Google and its AI team said it best: “AI is too important not to regulate—and too important not to regulate well.”
Unlike Europe, whose extensive AI Act imposes top-down rules prohibiting uses of AI that pose “unacceptable risk,” the United States is taking its typical decentralized, bottom-up approach. Regulation and enforcement reside in a patchwork of federal and state laws. The Constitution divides power between the federal and state governments, which complicates any unified approach to AI regulation. The federal government manages matters including defense, foreign policy and interstate commerce, while states handle issues including education, public health and criminal justice. But AI intersects with areas under the jurisdiction of many different authorities, creating a complicated and fragmented landscape.
Making rules adaptable for a burgeoning, pervasive technology that is likely to change rapidly is something lawmakers need to address, but getting lawmakers to agree on anything is difficult … and in the worst-case scenarios, approaches the impossible. The U.S. legislative process requires laws to be approved by both houses of Congress, which creates difficulties in passing laws, especially in the rapidly evolving field of AI. As a point of reference, roughly 10% of federal AI-related bills were passed into law last year.
The states exacerbate the complexity. Each state has its own approach, which may conflict with federal legislation. Most focus on a specific area, such as education, and how AI applies to it. Here are a few examples:
- In mid-May, the Colorado Governor signed the Colorado Artificial Intelligence Act (CAIA) into law, making Colorado the first state to enact legislation governing the use of high-risk artificial intelligence systems.
- Earlier this year, Utah enacted SB 149, which creates limited obligations for private sector companies deploying generative artificial intelligence, including disclosing its use.
- The California legislature is currently considering seven AI-related bills that, if passed, would add to the growing patchwork of state AI laws.
It is well worth your time to visit a searchable database of existing and pending state legislation related to AI: the NCSL 50-State Searchable Bill Tracking Databases.
The U.S. tech industry wields considerable influence over regulatory discussions on AI, due to its significant contribution to the nation’s GDP and to AI development. These complexities explain why the government has opted for voluntary frameworks that largely allow the tech industry to self-regulate and avoid any potential undermining of its position. Applying and adhering to the laws as written will be up to the AI developers. No matter what they say publicly, private companies will advance their own agendas and versions of “responsible AI” while facing a fragmented AI regulatory landscape. Talk about a fox guarding the henhouse, and constantly moving targets … oh my!
All we can say at this point is that proponents of specific, and most importantly clear, AI regulation that will be enforceable will likely be disappointed. This should come as no surprise, since we lack a universally agreed-upon definition of AI. One attempt comes from the White House and its executive order on AI, labeled the National Artificial Intelligence Initiative, which defines AI as:
“A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
The conundrum is that each state has a different definition and approach; Texas, Connecticut and California are just a few of many examples. This is all a moving target. The attempts at legislation (federal or state) suffer from overlapping jurisdictions and competing agendas, and the results (or lack thereof so far) speak for themselves.
There is no doubt that regulation and control of AI have risen to the top of the agenda for legislators and developers. It is the 800-pound gorilla in the room, and it is getting bigger each day. There is also no doubt that these groups see the difficulties involved in doing something about it. One approach is to lump some of the major issues under the executive branch (i.e., the president) to circumvent some of the legislative battles. The White House Executive Order on AI and proposed legislation at the federal and state levels generally seek to address the following overarching issues:
- Safety and security
- Responsible innovation and development
- Equity and unlawful discrimination
- Protection of privacy and civil liberties
The National Artificial Intelligence Initiative lists the following eight key principles and priorities to encourage the responsible development of AI technologies and safeguard against potential harms:
- AI must be safe and secure
- To lead in AI, the US must promote responsible innovation, competition and collaboration
- Responsible development and use of AI requires a commitment to supporting American workers
- AI policies must advance equity and civil rights
- The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected
- Privacy and civil liberties must be protected
- The federal government must manage the risks of its own use of AI
- The federal government should exercise global leadership in societal, economic and technological progress
As you probably have figured out, these are principles and guidelines. This raises the question of which agencies are involved and who enforces what. Currently, there is no AI-specific federal regulator in the U.S., but in 2023 the Federal Trade Commission, Equal Employment Opportunity Commission, Consumer Financial Protection Bureau and Department of Justice issued a joint statement clarifying that their authority applies to “software and algorithmic processes, including AI.” Without comprehensive national AI-specific laws or regulations, any enforcement and penalties relating to the creation, dissemination and/or use of AI are governed by regulatory or judicial application of existing, non-AI-specific federal and state statutes. The interesting element will be to see where AI intersects with existing laws and how that is adjudicated. I can see the court dockets filling to overflowing as I write this article.
Nearly all agree that AI promises huge benefits for society yet poses major risks. The challenge is getting the balance right between innovation and societal risk. As one subject matter expert pointed out, “The devil is in the details in making rules adaptable for a technology that is likely to change rapidly and be pervasive.” A comprehensive national AI law is unlikely over the next few years. The tremendous risks and opportunities of AI have elevated it to a presidential-level issue. The White House is coordinating its executive agencies as each moves ahead with actions in its own domain. With a divided Congress unlikely to pass a major law with new mandatory rules, the executive branch will attempt to build on its AI “bill of rights,” which spans different sectors and encourages voluntary commitments. As the individual executive agencies move ahead, this will produce over time a patchwork quilt of AI rules grounded in the expertise of those specific agencies. On the downside, this will be complex and even confusing; on the upside, if the rules are implemented well, they will promote innovation.
There are two final issues to consider. The first falls under the antitrust agencies, which lead the effort to forestall big tech companies dominating AI, false and deceptive practices, and AI-driven fraud. The high cost and scale of AI foundation models will likely lead to market concentration. We can expect FTC actions designed as warning shots to the industry. The second issue is global in scale and scope. Growing competition with China shadows the AI regulatory effort, escalating a “don’t fall behind China” debate. Controls on exports of, and investments in, AI-related technologies such as advanced GPUs are likely to expand over time and broaden to adjacent areas, e.g., cloud computing and quantum.
In the developed world, everyone everywhere is paying attention to AI. The recent G7 meeting and the World Economic Forum punctuate the global concern and the need for regulations and controls. There is no binding agreement on the horizon as to what those controls and regulations will be, other than published guidelines and principles. Our goal has been to shed some light on where we are in the U.S. relative to AI regulations and controls. We don’t actually know who is in charge, since this is a moving target, with federal and state legislators and tech developers taking aim as they deem appropriate. Confusing? You bet, but we will keep track and let you know where it appears to be headed and what will most affect you.