Posts

Let’s Talk About Emergence 2024-06-07T19:18:16.382Z
"Open Source AI" is a lie, but it doesn't have to be 2024-04-30T23:10:11.963Z
Podcast interview series featuring Dr. Peter Park 2024-03-26T00:25:58.129Z
INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park 2024-03-18T21:21:41.293Z
INTERVIEW: StakeOut.AI w/ Dr. Peter Park 2024-03-04T16:35:06.542Z
Hackathon and Staying Up-to-Date in AI 2024-01-08T17:10:20.270Z
Interview: Applications w/ Alice Rigg 2023-12-19T19:03:02.824Z
Into AI Safety: Episode 3 2023-12-11T16:30:36.347Z
Into AI Safety Episodes 1 & 2 2023-11-09T04:36:40.881Z
Into AI Safety - Episode 0 2023-10-22T03:30:57.865Z
Documenting Journey Into AI Safety 2023-10-10T18:30:03.075Z

Comments

Comment by jacobhaimes on "Open Source AI" is a lie, but it doesn't have to be · 2024-05-23T19:29:46.631Z · LW · GW

I have heard that this has been a talking point of yours for at least a little while; I am curious, do you think there are any aspects of this problem that I missed?

Comment by jacobhaimes on "Open Source AI" is a lie, but it doesn't have to be · 2024-05-23T19:23:14.672Z · LW · GW

Thanks for responding! I really appreciate the engagement and your input on this.

I would disagree that Mistral's models are considered Open Source under the current OSAID. Although the training data itself is not required for a model to be considered Open Source, a certain level of documentation is [source]. Mistral's models do not meet these standards; however, if they did, I would happily place them in the Open Source column of the diagram (at least, if I were to update this article and/or make a new post).

Comment by jacobhaimes on Documenting Journey Into AI Safety · 2023-10-11T22:18:40.160Z · LW · GW

Glad to hear that my post is resonating with some people!

I definitely understand the difficulty of time allocation when also working a full-time job. As I gather resources and connections, I will make sure to spread awareness of them.

One thing to note, though, is that I found the more passive approach of waiting for opportunities to come to me much less effective than forging opportunities myself (even though I was spending a significant amount of time looking for those opportunities).

A specific and more detailed recommendation for how to do this will depend heavily on your level of experience with ML and your time availability. My more general recommendation would be to apply for a cohort of BlueDot Impact's AI Governance or AI Safety Fundamentals courses (I believe the application for the early 2024 session of the AI Safety Fundamentals course is currently open). Taking a course like this provides opportunities to gain connections, which can be leveraged into independent projects/efforts.

I found that the AI Governance session was very doable alongside a full-time position (when I started it, I was still full time at my current job). Although I cannot definitively say the same for the AI Safety Fundamentals course, as I did not complete it through a formal session (and instead just did the readings independently), it seems to be a similar time commitment. I think that taking the course with a cohort would definitely be valuable, even for those who have completed the readings independently.