Posts

No Clickbait - Misalignment Database 2024-02-18T05:35:44.078Z
Do you want to make an AI Alignment song? 2024-02-09T08:22:05.164Z
AI Safety orgs- what's your biggest bottleneck right now? 2023-11-16T02:02:04.158Z

Comments

Comment by Kabir Kumar (kabir-kumar-1) on My takes on SB-1047 · 2024-09-09T20:58:01.057Z · LW · GW
Comment by Kabir Kumar (kabir-kumar-1) on Notifications Received in 30 Minutes of Class · 2024-08-13T21:35:18.075Z · LW · GW

if asked about recommendation algorithms, I think it might be much higher - given a basic understanding of what they are, their addictiveness, etc.

Comment by Kabir Kumar (kabir-kumar-1) on AI doing philosophy = AI generating hands? · 2024-08-13T21:15:47.987Z · LW · GW

imo, our philosophical progress has stagnated because of media (in the classic sense of the word) - recording ideas makes it harder to forget them. Akin to training at very low dropout.
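(For readers who don't know the analogy: dropout randomly zeroes units during training, and a very low rate means almost nothing is ever "forgotten" and re-derived. A minimal illustrative sketch - the values and rates below are made up:)

```python
import random

def dropout(activations, rate):
    """Inverted dropout: zero each unit with probability `rate` and
    rescale the survivors so the expected activation is unchanged."""
    if rate <= 0.0:
        return list(activations)          # nothing is ever forgotten
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

ideas = [0.8, 0.3, 0.5, 0.9, 0.1]
print(dropout(ideas, rate=0.05))  # written culture: almost every idea survives each pass
print(dropout(ideas, rate=0.5))   # oral culture: roughly half are dropped and must be rederived
```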

Comment by Kabir Kumar (kabir-kumar-1) on AI doing philosophy = AI generating hands? · 2024-08-13T21:13:48.675Z · LW · GW

In a parallel universe with a saner civilization, there must be tons of philosophy professors working with tons of AI researchers to try to improve AI's philosophical reasoning.

Sanskrit scholars worked for generations to make Sanskrit better for philosophy

Comment by Kabir Kumar (kabir-kumar-1) on What mistakes has the AI safety movement made? · 2024-05-26T19:08:05.225Z · LW · GW

But maybe we are just bad at politics and coalition-building.

Mostly due to a feeling of looking down on people, imo.

Comment by Kabir Kumar (kabir-kumar-1) on Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects? · 2024-05-08T16:52:15.441Z · LW · GW

Thank you. We just had some writers join who're, among other things, going to make an up-to-date About Us section. Some out-of-date stuff is available at https://aiplans.substack.com
Something that we use internally is:  https://docs.google.com/document/d/1wcVlWRTKJqiXOvKNl6PMHCBF3pQItCCcnYwlWvGgFpc/edit?usp=sharing

We're primarily focused on a site rebuild atm, which has a lot of new and improved features users have been asking for. Preview (lots of form-factor stuff is broken atm) at: https://ai-plans-site.pages.dev/

Comment by Kabir Kumar (kabir-kumar-1) on Thoughts on seed oil · 2024-05-01T07:47:54.722Z · LW · GW

I want to know.

Comment by Kabir Kumar (kabir-kumar-1) on Clickbait Soapboxing · 2024-04-06T21:09:44.335Z · LW · GW

Ok, so are these not clickbait then?

"Stop This Train, Win a Lamborghini"

"$1 vs $250,000,000 Private Island!"

"$1 vs $100,000,000 House!"

"I Hunted 100 People!"

"Press This Button To Win $100,000!"

Comment by Kabir Kumar (kabir-kumar-1) on Clickbait Soapboxing · 2024-04-04T10:59:30.873Z · LW · GW

My Clients, The Liars

And All The Shoggoths Merely Players

Acting Wholesomely

These are the most obvious examples. By 'clickbait' here, I mean a title that's more for drawing in readers than for accurately communicating what the post is about. That doesn't mean it can't be accurate too - after all, MrBeast rarely lies in his video titles - but it means that instead of choosing the most accurate title, they chose the most eye-catching, baiting title out of the pool of accurate/semi-accurate titles.

Comment by Kabir Kumar (kabir-kumar-1) on Can a stupid person become intelligent? · 2024-04-01T02:10:24.968Z · LW · GW

update on my beliefs - among humans of above-average intelligence, the primary factor for success is willpower: stamina, intensity, and consistency

Comment by Kabir Kumar (kabir-kumar-1) on Arguments for Robustness in AI Alignment · 2024-03-26T06:45:08.651Z · LW · GW

You might be interested in the Broad List of Vulnerabilities

Comment by Kabir Kumar (kabir-kumar-1) on Community Notes by X · 2024-03-20T20:24:02.512Z · LW · GW

Thank you, this is useful. Planning to use this for AI-Plans.

Comment by Kabir Kumar (kabir-kumar-1) on On Devin · 2024-03-19T09:45:20.845Z · LW · GW

Hasn't that happened?

Comment by Kabir Kumar (kabir-kumar-1) on Clickbait Soapboxing · 2024-03-17T14:26:48.344Z · LW · GW

Yet the top posts on LessWrong are pretty much always clickbaited, just in the LessWrong lingo.
The Curated Posts seem to be some of the worst cases of this.

I find that LessWrong folk in general are really, really susceptible to deception and manipulation when it's done in their language.

Comment by Kabir Kumar (kabir-kumar-1) on MATS AI Safety Strategy Curriculum · 2024-03-08T15:41:51.267Z · LW · GW

Week 3: How hard is AI alignment?

https://www.lesswrong.com/posts/3pinFH3jerMzAvmza/on-how-various-plans-miss-the-hard-bits-of-the-alignment#comments

Seems like something important to be aware of, even if they may disagree.

Comment by Kabir Kumar (kabir-kumar-1) on No Clickbait - Misalignment Database · 2024-02-26T17:51:31.238Z · LW · GW

For sure. Right now it's just a Google Form and Google Sheets. Would you be interested in taking charge of this?

Comment by Kabir Kumar (kabir-kumar-1) on No Clickbait - Misalignment Database · 2024-02-20T19:06:01.635Z · LW · GW

Thank you, I've labelled that as the form link now and added the DB link.

Comment by Kabir Kumar (kabir-kumar-1) on No Clickbait - Misalignment Database · 2024-02-20T19:03:01.185Z · LW · GW

Thank you! I'll add those as well!

Comment by Kabir Kumar (kabir-kumar-1) on No Clickbait - Misalignment Database · 2024-02-20T11:50:19.759Z · LW · GW

Ah, sorry, here's the link! https://docs.google.com/spreadsheets/d/1uXzWavy1mS0X-uQ21UPWHlAHjXFJoWWlN62EyKAoUmA/edit?usp=sharing 

Thank you for pointing that out - I've also added it to the post!

Comment by Kabir Kumar (kabir-kumar-1) on No Clickbait - Misalignment Database · 2024-02-20T08:51:50.698Z · LW · GW

Updated to 115.

Comment by kabir-kumar-1 on [deleted post] 2024-02-13T14:14:22.999Z

Perhaps a note on prerequisites would be useful, e.g. the level of math & comp sci that's assumed.
Suggestion: try going through the topics with 50+ random strangers. Wildly useful for improving written work.

Comment by Kabir Kumar (kabir-kumar-1) on Catching AIs red-handed · 2024-02-10T18:33:57.031Z · LW · GW

Yes, that's what I'm referring to. As in, getting that enacted as a policy.

Comment by Kabir Kumar (kabir-kumar-1) on Catching AIs red-handed · 2024-02-09T08:07:07.731Z · LW · GW

This is an absurdly low bar, but yes, this should be done.
How can I help?

Comment by Kabir Kumar (kabir-kumar-1) on Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects? · 2024-01-26T12:41:50.921Z · LW · GW

We're doing this at AI-Plans.com!! With the Critique-a-Thon and the Law-a-Thon!

Comment by Kabir Kumar (kabir-kumar-1) on On how various plans miss the hard bits of the alignment challenge · 2024-01-08T17:46:25.245Z · LW · GW

Extremely important

Comment by Kabir Kumar (kabir-kumar-1) on jacquesthibs's Shortform · 2023-11-14T19:37:14.776Z · LW · GW

There is an issue with surface-level insights being unfairly weighted, but this is solvable, imo - especially with YouTube, which can see which commenters have watched the full video.
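A hypothetical version of that weighting (YouTube's internal data isn't actually exposed like this; the function, parameters, and exponent are made up purely to illustrate the idea):

```python
def comment_weight(upvotes: int, watch_fraction: float) -> float:
    """Hypothetical score that down-weights comments from viewers who
    only watched a small part of the video. watch_fraction is in [0, 1]."""
    watch_fraction = min(max(watch_fraction, 0.0), 1.0)
    return upvotes * watch_fraction ** 2   # quadratic penalty for surface-level engagement

# 100 upvotes from someone who watched 20% of the video scores 4.0,
# while 100 upvotes after a full watch scores 100.0
print(comment_weight(100, 0.2), comment_weight(100, 1.0))
```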

Comment by Kabir Kumar (kabir-kumar-1) on Can a stupid person become intelligent? · 2023-11-10T02:28:00.008Z · LW · GW

You can compete with someone more intelligent but less hardworking by being more organized, disciplined, focused, and hardworking. And by being open to improvement.