Posts

We should be signal-boosting anti Bing chat content 2023-02-18T18:52:18.818Z
Morristown NJ ACX Meetup 2022-08-22T17:35:54.976Z

Comments

Comment by mbrooks on The Economics of the Asteroid Deflection Problem (Dominant Assurance Contracts) · 2023-09-02T21:14:28.758Z · LW · GW

What are the transaction costs if you need to do 3 transactions? (Rough fee sketch after this list.)
1. Get the refund bonuses from the producer
2. Get the pledges from funders
3. Return the pledges + bonuses if it doesn't work out
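
To put a rough number on that question: here's a minimal sketch of how fees could stack across the three legs, assuming a hypothetical PayPal-like fee of 2.9% + $0.30 per transaction and made-up campaign numbers (none of these figures are from the post):

```python
# Rough fee math for the three money movements in a dominant assurance
# contract. The fee schedule and campaign numbers are assumptions for
# illustration, not figures from the post.

FEE_RATE = 0.029  # assumed percentage fee per transaction
FEE_FLAT = 0.30   # assumed flat fee per transaction (USD)

def fee(amount: float) -> float:
    """Fee charged on a single transfer of `amount` dollars."""
    return amount * FEE_RATE + FEE_FLAT

# Hypothetical failed campaign: 20 funders pledge $50 each, and the
# producer posts a $5 refund bonus per funder.
n_funders = 20
pledge = 50.0
bonus = 5.0

total_fees = (
    fee(n_funders * bonus)             # 1. producer deposits the refund bonuses
    + n_funders * fee(pledge)          # 2. each funder sends in a pledge
    + n_funders * fee(pledge + bonus)  # 3. each pledge + bonus is returned
)
print(f"Fees across all three legs: ${total_fees:.2f}")  # -> $76.10
```

Even in this small failed-campaign example, intermediary fees eat several percent of the money moved, which is part of why the PayPal question below matters.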

Also, will PayPal allow this kind of money movement?

Congrats on getting funded way above your threshold!

Comment by mbrooks on Catching the Eye of Sauron · 2023-04-09T12:25:01.130Z · LW · GW

@habryka, curious what you think of this comment.

Comment by mbrooks on Catching the Eye of Sauron · 2023-04-09T00:26:52.247Z · LW · GW

Fair enough regarding Twitter

Curious what your thoughts are on my comment below

Comment by mbrooks on Catching the Eye of Sauron · 2023-04-08T17:57:57.373Z · LW · GW

I'm talking about doing a good enough job to avoid takes like these: https://twitter.com/AI_effect_/status/1641982295841046528

50k views on the tweet. This one tweet probably matters more than all of the Reddit comments put together.


Comment by mbrooks on Catching the Eye of Sauron · 2023-04-08T17:50:02.832Z · LW · GW

I don't find this argument convincing. I don't think Sam did a great job either, but that's also because he has to be super coy about his company/plans/progress/techniques, etc.

The Jordan Peterson comment was making fun of Lex and was a positive comment for Sam.

Besides, I can think Sam did kind of badly and Eliezer did kind of badly, but still expect Eliezer to do much better!



I'm curious how you'd rate how Eliezer did compared to what you'd expect is possible with 80 hours of prep time, including the help of close friends/co-workers.

I would rate his episode at around a 4/10.

Why didn't he have a pre-prepared, well-thought-out list of convincing arguments, intuition pumps, stories, analogies, etc. that would be easy for a semi-informed listener to engage with? He was clearly grasping for them on the spot.

Why didn't he have quotes from top respected AI people saying things like "I don't think we have a solution for superintelligence" or "AI alignment is a serious problem"?

Why did he not have written notes? Seriously... why did he not prepare notes? (He could have paid someone who knows his arguments really well to prepare notes for him.)

How many hours would you guess Eliezer prepared for this particular interview? (Maybe you know the true answer; I'm curious.)

How many friends/co-workers did Eliezer ask for help in designing great conversation topics, responses, quotes, references, etc.?

This was a 3-hour-long episode consumed by millions of people; at roughly 2 million listeners, that's ~6 million hours of human cognition, and this is what he came up with? Do you rate his performance at more than a 4/10?

I expect Rob Miles, Connor Leahy, or Michaël Trazzi would have done enough preparation, taken a better approach, and done an 8+/10 job. What do you think of those three? Or even Paul Christiano.

My opinion is that Eliezer should spend whatever points he has with Lex to get one of those four onto a future episode.

Comment by mbrooks on Catching the Eye of Sauron · 2023-04-07T13:44:41.386Z · LW · GW

The easiest point to make here is Yud's horrible performance on Lex's pod. It felt like he did no prep, and he brought no notes/outlines/quotes??? Literally why?

Millions of educated viewers and he doesn't prepare... That doesn't seem very rational to me. It doesn't seem like systematically winning to me.

Yud saw the risk of AGI way earlier than almost anyone else and has thought a lot about it since then. He has some great takes and some mediocre takes, but none of that automatically makes him a great public spokesperson!

He did not come off as convincing, helpful, kind, interesting, well-reasoned, humble, very smart, etc.

To the average person who had never heard of Yud before the Lex pod, I think he came off as somewhat out of touch, arrogant, weird, anxious, scared, etc.

Comment by mbrooks on We should be signal-boosting anti Bing chat content · 2023-02-20T00:07:10.906Z · LW · GW

Toby and Elon did today what I was literally suggesting: https://twitter.com/tobyordoxford/status/1627414519784910849

@starship006, @Zack_M_Davis, @lc, @Nate Showell: do you all disagree with Toby's tweet?

Should the EA and Rationality movement not signal-boost Toby's tweet?

Elon further signal-boosts Toby's post.

Comment by mbrooks on We should be signal-boosting anti Bing chat content · 2023-02-19T13:19:14.128Z · LW · GW

I see your point, and I agree. But I'm not advocating for sabotaging research.

I'm talking about admonishing a corporation for cutting corners and rushing a launch that turned out to be net negative.

Did you retweet this tweet like Eliezer did? https://twitter.com/thegautamkamath/status/1626290010113679360

If not, is it because you didn't want to publicly sabotage research?

Do you agree or disagree with this twitter thread? https://twitter.com/nearcyan/status/1627175580088119296?t=s4eBML752QGbJpiKySlzAQ&s=19

Comment by mbrooks on We should be signal-boosting anti Bing chat content · 2023-02-19T13:09:48.211Z · LW · GW

Are you saying that you're unsure if the launch of the chatbot was net positive?

I'm not talking about propaganda. I'm literally saying "signal-boost the accurate content that's already out there showing that Microsoft rushed the launch of their AI chatbot, making it creepy, aggressive, and misaligned, and showing that it's harder to do right than they thought."

Eliezer (and others) retweeted content admonishing Microsoft; I'm just saying we should be doing more of that.

Comment by mbrooks on We should be signal-boosting anti Bing chat content · 2023-02-19T13:02:39.411Z · LW · GW

Why?

Comment by mbrooks on We should be signal-boosting anti Bing chat content · 2023-02-19T13:02:21.233Z · LW · GW

I felt I was saying "Simulacrum Level 1: Attempt to describe the world accurately."

The AI was rushed and misaligned, and the launch wasn't good for its users. More people need to know that. It's literally how the NYT and others are already (accurately) describing it; I'm just suggesting signal-boosting that content.

Comment by mbrooks on Morristown NJ ACX Meetup · 2022-10-01T18:40:15.115Z · LW · GW

We're at the pizza place off the green, "A Legna".

Comment by mbrooks on Morristown NJ ACX Meetup · 2022-10-01T14:40:33.708Z · LW · GW

Looks like the rain will stop before 2 pm, so we can meet up at the green and then decide if we want to head somewhere else.

Comment by mbrooks on AGI Ruin: A List of Lethalities · 2022-06-06T21:43:55.653Z · LW · GW

I'm also on a team trying to build impact certificates/retroactive public goods funding and we are receiving a grant from an FTX Future Fund regrantor to make it happen!

If you're interested in learning more or contributing you can:

  • Read about our ongoing $10,000 retro-funding contest (Austin is graciously contributing to the prize pool)
  • Submit an EA Forum Post to this retro-funding contest (before July 1st)
  • Join our Discord to chat/ask questions
  • Read/Comment on our lengthy informational EA forum post "Towards Impact Markets"

Comment by mbrooks on How to Make Your Article Change People's Minds or Actions? (Spoiler: Do User Testing Like a Startup Would) · 2022-03-30T19:58:23.344Z · LW · GW

It's weird that I have my own startup and completely understand using real users for user testing, yet I barely ever "user-test" any of my writing with actual audience members.

Once you shared your document with me, it became super clear that I should, so thank you!

Comment by mbrooks on It Looks Like You're Trying To Take Over The World · 2022-03-10T15:25:56.508Z · LW · GW

This is really, really bad design. It 100% looks like dxu's comment is a new thread replying to the original poster, not a hidden deleted comment that could be saying the complete opposite of the original poster...