SB-1047, ChatGPT and AI's Game of Thrones

post by Rahul Chand (rahul-chand) · 2024-11-24T02:29:34.907Z · LW · GW · 0 comments

Contents

  Part-I (The Sin of Greed)
    Too Soon, Too Fast, Too Much - Timeline of AI
      November 2022
      February 24, 2023
      March 14, 2023
      March 22, 2023
  Part-II (The Sin of Pride)
      May 2, 2023
      May 16, 2023
  Part-III (The Sin of Envy)
    Game of Thrones
      House AI Safety
      House AI acceleration
      The Enemy of my Enemy is my Friend
  Part-IV (The Sin of Sloth)

Part-I (The Sin of Greed)

On 30 November 2022, OpenAI released ChatGPT. According to Sam Altman, it was supposed to be a demo[1] to show the progress in language models. By December 4, in just 5 days, it had gained 1 million users; for comparison, it took Instagram 75 days, Spotify 150 days and Netflix 2 years to reach the same number of users. By January 2023, it had 100 million users, adding 15 million users every week. ChatGPT became the fastest-growing service in human history. Though most people would only come to realize this later, the race for AGI had officially begun.

Days it took popular services to get to 1M and 100M users

 

Too Soon, Too Fast, Too Much - Timeline of AI

November 2022

At first, ChatGPT was seen more as a fun toy to play with. Early uses mostly involved writing essays, reviews, small snippets of code, etc. It was buggy and not very smart, and even though it became an instant hit, it was seen as a tool that could help you draft an email but nothing more serious. Skeptics brushed it aside as dumb and not good enough for anything that involved complex reasoning.

The next 2 years, however, saw unprecedented progress that would change the outlook on LLM chatbots from tools that could help you draft an email to tools that could replace you. The models kept getting bigger, they kept getting better, and benchmark after benchmark kept getting climbed. The interest in ChatGPT also saw billions poured into competitors like Anthropic and Mistral, and into efforts like Gemini and Llama, each trying to come up with its own version of an LLM-powered chatbot.

February 24, 2023

Soon open-source models started to catch up as well; the number of models hosted on Hugging Face increased 40x in just 1 year, from Jan 2023 to Jan 2024. Companies that had missed the boat but had enough talent drove the progress of open-source models. On Feb 24, 2023, Meta released the weights of their LLM Llama under a restricted research license; within a week, someone had leaked the weights on 4chan, and they were effectively public. This kick-started the open-source progress. Llama was the first competent open-source LLM. It reportedly cost Meta 20 million dollars to train, a huge sum that the open-source community couldn't have come up with, but one that represents roughly 0.015% of Meta's annual revenue (~130 billion). For what is chump change, Meta had successfully created an alternative to proprietary models and made their competitors' margins thinner. The GPU-poors of the world were now thrown into the race too (a race, they would realize later, that they could not win).
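As a rough sanity check on that ratio (taking the reported ~$20 million training cost and Meta's ~$130 billion annual revenue figures above at face value):

$$\frac{\$20\ \text{million}}{\$130\ \text{billion}} = \frac{2 \times 10^{7}}{1.3 \times 10^{11}} \approx 1.5 \times 10^{-4} \approx 0.015\%$$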

March 14, 2023

To me, the release of GPT-4 was, in some ways, even more important than ChatGPT's launch. GPT-4 was remarkably smarter than anything before it. It was this release that marked the shift from a cool demo to something much bigger. For the first time, a general-purpose AI model was competing with humans across tasks. To the believers, the very first signs of AGI had been seen.

The charts below show how substantially better GPT-4 was than its predecessor. It consistently ranked high on many tests designed to measure human intelligence, a stark contrast to GPT-3.5, which mostly performed in the bottom half.

GPT-4's substantial improvement over GPT-3.5

March 22, 2023

Though it had only been 1 week since GPT-4 was released, its implications for the future were already showing up. This was the worst AI was ever going to be; it would only get stronger. There was also a belief that models much smarter than even GPT-4 were on the horizon, and it was only a matter of time. Even before GPT-4, fear around AI risk had already started to take shape. The constant hints from top AI labs about not releasing full non-RLHF models due to risks only added to this. Similar claims had been made about older models like GPT-3, which, in hindsight, weren't nearly advanced enough to pose any real threat. This time around, though, it felt different; the tech world and the outside world were slowly catching on.

On March 22, 2023, the Future of Life Institute published an open letter signed by over 1,000 AI researchers and tech leaders, including Elon Musk, Yoshua Bengio and Steve Wozniak, calling for a six-month pause on the development of AI systems more powerful than GPT-4.

Excerpt from the open letter

The letter insisted that it was not calling for a permanent halt; the six-month window was meant to give policymakers and AI safety researchers enough time to understand the impact of the technology and put safety barriers around it. The letter gained significant media coverage and was a sign of what was to come. By this time, however, the cat was already out of the bag. Even proponents of the letter believed it was unlikely to lead to any halt, much less one of six months. The AGI race had begun, billions of dollars were at stake, and it was too late to pause.

This letter was followed by the provisional passing of the European Union's AI Act on April 11, 2023, which aimed to regulate AI technologies, categorizing them based on risk levels and imposing stricter requirements on high-risk applications. Two weeks later, on April 25, 2023, the U.S. Federal Trade Commission (FTC) issued a policy statement urging AI labs to be more transparent about the capabilities of their models.

Part-II (The Sin of Pride)

Sam Altman during his congressional hearing

May 2, 2023

On May 2nd, Geoffrey Hinton, the godfather of AI (and now a Nobel Prize winner), announced his retirement from Google at the age of 75 so he could freely speak out about the risks of AI. On the same day, he gave an interview to the NYT, where he said a part of him now regrets his life's work, and detailed his fears about current AI progress. This interview later led to a public Twitter feud between Yann LeCun and Hinton, showing the increasing divide within the community. AI risk was no longer a quack conspiracy theory but was backed by some of the most prominent AI researchers. Below are some excerpts from the NYT article.

"But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity"

“It is hard to see how you can prevent the bad actors from using it for bad things”

May 16, 2023

With the increasing integration of generative AI into daily life, concerns about its societal impact were not only becoming mainstream but also politically important. On May 4, 2023, President Biden called a private meeting with the heads of frontier AI labs, including Sam Altman, Sundar Pichai, and Anthropic CEO Dario Amodei. Notably, no AI safety policy organizations were invited.

This private meeting then led to Sam Altman's congressional hearing on May 16th. The Senate hearing was different from the usual grilling of tech CEOs that we have seen in the past, much of which is attributed to the closed-door meeting Altman had already had with members of the committee beforehand. In fact, Altman agreed with ideas suggested by the committee regarding the creation of an agency that issues licenses for the development of large-scale A.I. models, safety regulations, and tests that A.I. models must pass before being released to the public. The groundwork for a regulatory bill like SB-1047 was being laid. People on the outside saw the hearing as just an attempt by Sam Altman at regulatory capture.

“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening .... We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work" - Sam Altman

“It’s such an irony seeing a posture about the concern of harms by people who are rapidly releasing into commercial use the system responsible for those very harms" - Sarah Myers West (director of the AI Now Institute, a policy research centre)

The public feud between Hinton and Yann LeCun

To understand why SB-1047 became such a polarizing bill, attracting heavy media coverage and splitting the field into factions, it's important to note that by this time everything was falling into place. The California legislature considers around 2,000 bills annually; for a bill to receive this much attention, it had to tick all three boxes: be of national political importance, resonate with the public, and have the potential for large economic impact.

Part-III (The Sin of Envy)

California Governor Gavin Newsom (picture by TechCrunch)

On Feb 4, 2024, California Democratic Senator Scott Wiener introduced the AI safety bill SB-1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act). The bill aimed to regulate the safe development and deployment of frontier AI. The key provisions of the first draft were:

Timeline of SB-1047 Saga (Source tweet)

The history of SB-1047 is long and complicated[2]; it went through a total of 11 amendments before it finally ended up on the desk of California Governor Gavin Newsom. Many of these changes were due to pressure from frontier AI labs, especially the less-than-warm ("cautious") support from Anthropic, whom Wiener and the broader AI safety community had expected to be more supportive of the bill. I briefly cover the changes below:

Change in Definition of Covered Models

Removal of the proposed Frontier Model Division

Removal of pre-harm enforcement

Relaxation for operators of computing clusters

Game of Thrones

Elon Musk, Twitter and OpenAI

One of the most interesting things about the whole SB-1047 story is how the AI community and the larger tech world reacted to it, and the factions that formed.

House AI Safety

House AI acceleration

The Enemy of my Enemy is my Friend

Larry Summers (pictured above) joined OpenAI's board of directors in Nov 2023, followed by Paul Nakasone, former head of the NSA, who was hired by OpenAI in June 2024

What I find fascinating about these factions is that so many people were fighting for so many different things. The Hollywood letter in support of SB-1047 talks about bio and nuclear weapons. I don't believe the people in Hollywood were really concerned about those issues; what prompted them to sign the letter was that they were fighting for something even more important to them: the future of work. While Bengio and Hinton were legitimately concerned about the existential risks these AI systems present, Elon's support for the bill wasn't due to his concerns about AI risks but was rather seen as an attempt to get back at OpenAI and to even the field as xAI played catch-up.[3]

Similarly, the people who were against SB-1047 were fighting for different things as well. The open-source community saw it as an ideological battle; they were opposed to any kind of governmental control over AI training. This is apparent from the fact that even after the amendments, they still opposed SB-1047, even though they wouldn't have been affected. Meta and Google, meanwhile, saw it as an unnecessary roadblock that could lead to further scrutiny of their AI models and business practices.

OpenAI's stance is the most interesting one here, because Sam Altman didn't oppose regulation as such; he opposed state-level regulation while favoring federal regulation. OpenAI had by then become genuinely important for national security. In November 2023, during a period of internal upheaval, OpenAI appointed Larry Summers to its board of directors. Summers had previously served as U.S. Treasury Secretary and Director of the National Economic Council, and his appointment was seen as a strategic move to strengthen OpenAI's connections with government entities, given his deep-rooted connections within U.S. policy circles and his influence over economic and regulatory affairs. In June 2024, OpenAI hired former NSA director Paul Nakasone. With figures like Summers and Nakasone on board, OpenAI was now operating at a much higher level of influence; for them, a federal regulation would be favorable, and they wanted to avoid the complications an unfavorable state bill would introduce.

Part-IV (The Sin of Sloth)

After passing through the California Senate and Assembly, the bill was finally presented to Governor Newsom on Sept 9, 2024. On Sept 29, Newsom, under pressure from Silicon Valley and members of his own party like Pelosi, vetoed the bill, stating, "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology" and "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data".

So, 11 amendments, 7 months, 2 votes and 1 veto later, the saga of SB-1047 finally came to an end. It was an interesting time in AI; the debate around the bill helped uncover people's revealed preferences: what they really thought about where AI was heading and how they were positioned in the race to AGI.

For me, I surprisingly (and for the first time) found the most common ground with Hollywood's stance. Unlike Hinton and others, I don't feel there is an existential AI risk, at least not in the near future. I don't believe that super-smart AI will suddenly allow bad actors to cause great harm that they couldn't previously, in the same way that having internet access didn't let everyone build a fighter jet in their backyard. Building nuclear or biological weapons is more about capital and actual resources than just knowing how to build them.

What I fear the most is the way in which super-smart AI[4] is going to completely disrupt the way humans work, and this is going to happen within the next 5 years. I feel people and our current systems are not ready for what will happen when human intelligence becomes too cheap to meter. A world where people can hire a model as smart as Terence Tao for 50 dollars per month is extremely scary. This is why I understand why Hollywood protested and pushed for bills that safeguarded them from the use of generative AI. If I were in charge, I would have passed SB-1047, not because I fear nuclear or bio weapons, but to ensure that the world gets enough time to adjust before AGI hits us.

 

  1. ^

Sam Altman, in his Reddit AMA, said "We were very surprised by the level of interest and engagement .... We didn't expect this level of enthusiasm."

  2. ^

I found this Substack post by Michael Trazzi particularly helpful for figuring out how the SB-1047 saga played out (substack link)

  3. ^

People saw this move by Elon Musk, who had increasingly become more active in politics, as a way to get indirect regulatory control over OpenAI.

  4. ^

I keep using super-smart AI and AGI interchangeably because I am not sure where super-smart AI ends and AGI begins.
