OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors

post by Joel Burget (joel-burget) · 2024-06-13T21:28:18.110Z · LW · GW · 10 comments

This is a link post for https://openai.com/index/openai-appoints-retired-us-army-general/


Today, Retired U.S. Army General Paul M. Nakasone has joined our Board of Directors. A leading expert in cybersecurity, Nakasone’s appointment reflects OpenAI’s commitment to safety and security, and underscores the growing significance of cybersecurity as the impact of AI technology continues to grow.

As a first priority, Nakasone will join the Board’s Safety and Security Committee, which is responsible for making recommendations to the full Board on critical safety and security decisions for all OpenAI projects and operations.

Whether or not this was influenced by Aschenbrenner's Situational Awareness, it's welcome to see OpenAI emphasizing the importance of security. It's unclear how much this is a gesture versus a reflection of deeper changes.

10 comments

Comments sorted by top scores.

comment by LawrenceC (LawChan) · 2024-06-14T00:55:05.385Z · LW(p) · GW(p)

This also continues the trend of OAI adding highly credentialed people who notably do not have technical AI/ML knowledge to the board.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-06-14T20:16:13.947Z · LW(p) · GW(p)

This fact will be especially important insofar as a situation arises where e.g. some engineers at the company think that the latest system isn't safe. The board won't be able to engage with the arguments or evidence; it'll all come down to who they defer to.

Replies from: Fabien
comment by Fabien Roger (Fabien) · 2024-06-15T18:56:16.052Z · LW(p) · GW(p)

Are board members working full-time on being board members at OpenAI? If so, I would expect that they could take actions to alleviate their lack of technical expertise by spending 15h/week getting up to speed on the technical side, reading papers and maybe learning to train LMs themselves. It naively seems like AI is sufficiently shallow that 15h/week is enough to get most of the expertise you need within a year.

Replies from: gwern
comment by gwern · 2024-06-15T21:05:16.120Z · LW(p) · GW(p)

They almost certainly are not. In none of the OA Form 990s up to ~2022 or so are board members listed as working more than an hour or two per week (as board members - obviously board members who were working for the OA corporation, like Sutskever/Altman/Brockman, presumably are working full-time there). For example, in the 2022 Form 990, Zillis and Hurd etc. list 3 hours per week. (And from descriptions of the board's activity in 2022, this is probably fictional: I have no idea how they supposedly all spent >156 hours on OA in 2022...) This is also standard for non-profits; offhand, I can't think of any non-profits where non-employee board members work 40 hours a week as board members.

So it would be quite a change if any non-employee board members were working full-time now. And in Nakasone's case, he appears to have plenty on his plate already:

On May 8th, 2024, Nakasone was named Founding Director of Vanderbilt University's new Institute for National Defense and Global Security. Nakasone will also hold a Research Professorship within Vanderbilt's School of Engineering, as well as serving as special advisor to the chancellor. In addition, on May 10th, 2024, Nakasone was elected to the board of trustees of Saint John's University, his alma mater.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2024-06-16T12:05:12.355Z · LW(p) · GW(p)

That’s true for for-profit boards too. To pick a random example, here’s the Microsoft Board: pretty much everybody seems to have a super intense day job and/or to serve on 4+ company boards simultaneously.

comment by wassname · 2024-06-14T00:35:46.143Z · LW(p) · GW(p)

It could indicate the importance of security, which is safe. Or an escalation in a military arms race, which is unsafe.

comment by kromem · 2024-06-14T22:50:59.970Z · LW(p) · GW(p)

I may just be cynical, but this looks a lot more like a way to secure US military and intelligence agency contracts for OpenAI's products and services (as opposed to competitors') than an effort to actually make OAI more security-focused.

This is only a few months after the change regarding military usage: https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/

Now suddenly the recently retired head of the world's largest data siphoning operation is appointed to the board for the largest data processing initiative in history?

Yeah, sure, it's to help advise securing OAI against APTs. 🙄

Replies from: wassname
comment by wassname · 2024-06-15T06:28:31.940Z · LW(p) · GW(p)

I thought this too, until someone in finance told me to google "Theranos Board of Directors", so I did, and it looked a lot like OpenAI's new board.

This suggests an alternate hypothesis: that the appointment signals nothing substantial. Perhaps it's empty credentialism, or empty PR, or a cheap attempt to win military contracts.

comment by bhauth · 2024-06-16T02:42:50.381Z · LW(p) · GW(p)

This post probably should have mentioned that Paul Nakasone is a former NSA director.

A more cynical interpretation of this news is that it represents a deal that gives OpenAI favorable access to US government contracts and some protection from safety-related regulation in exchange for ensuring the NSA will have access to user data going forward.

comment by Robert_AIZI · 2024-06-14T01:13:14.639Z · LW(p) · GW(p)

One of my fascinations is when/how the Department of Defense starts using language models, and I can't help but read significance into this from that perspective. If OpenAI wants that sweet sweet defense money, having a general on your board is a good way to make inroads.