Posts
Comments
TIME MOVED TO 12 PM
Sorry to change it at the last minute, but I now have plans later on Saturday that I can't move, so I'm shifting the meetup to an earlier time.
What are the transaction costs if you need to do 3 transactions?
1. Get the refund bonuses from the producer
2. Get the pledges from funders
3. Return the pledges + bonus if it doesn't work out
Also, will PayPal allow this type of money sending?
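To make the transaction-cost question concrete, here's a rough sketch of how fees would stack up across the three transfers, assuming a flat 2.9% + $0.30 per transaction (an illustrative assumption; actual PayPal rates vary by account type, country, and payment method, and the dollar amounts below are hypothetical):

```python
# Fee accumulation across the three transactions in the funding flow.
# ASSUMPTION: flat 2.9% + $0.30 per transaction (illustrative only;
# real PayPal fees depend on account type, country, and payment method).

def fee(amount, pct=0.029, fixed=0.30):
    """Fee charged on a single transaction of `amount` dollars."""
    return amount * pct + fixed

pledge = 100.00  # hypothetical pledge from one funder
bonus = 5.00     # hypothetical refund bonus from the producer

# 1. Producer sends the refund bonus
# 2. Funder sends the pledge
# 3. Pledge + bonus is returned if the project doesn't fund
total_fees = fee(bonus) + fee(pledge) + fee(pledge + bonus)
print(round(total_fees, 2))  # ~7 dollars of fees on a 100-dollar pledge
```

Under these assumed rates, the failure path (all three transactions happen) eats roughly 7% of a $100 pledge, which is worth knowing before committing to a payment processor.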
Congrats on getting funded way above your threshold!
@habryka curious what you think of this comment
Fair enough regarding Twitter
Curious what your thoughts are on my comment below
I'm talking about doing a good enough job to avoid takes like these: https://twitter.com/AI_effect_/status/1641982295841046528
50k views on the Tweet. This one tweet probably matters more than all of the Reddit comments put together
I don't find this argument convincing. I don't think Sam did a great job either, but that's also because he has to be super coy about his company/plans/progress/techniques etc.
The Jordan Peterson comment was making fun of Lex and was a positive comment for Sam.
Besides, I can think Sam did kind of bad and Eliezer did kind of bad but expect Eliezer to do much better!
I'm curious to know your rating of how you think Eliezer did compared to what you'd expect is possible with 80 hours of prep time, including the help of close friends/co-workers.
I would rate his episode at around a 4/10
Why didn't he have a pre-prepared, well-thought-out list of convincing arguments, intuition pumps, stories, analogies, etc. that would be easy to engage with for a semi-informed listener? He was clearly grasping for them on the spot.
Why didn't he have quotes from the top respected AI people saying things like "I don't think we have a solution for superintelligence" or "AI alignment is a serious problem"?
Why did he not have written notes? Seriously... why did he not prepare notes? (He could have paid someone who knows his arguments really well to prepare notes for him.)
How many hours would you guess Eliezer prepared for this particular interview? (maybe you know the true answer, I'm curious)
How many friends/co-workers did Eliezer ask for help in designing great conversation topics, responses, quotes, references, etc.?
This was a 3-hour long episode consumed by millions of people. He had the mind share of ~6 million hours of human cognition and this is what he came up with? Do you rate his performance more than a 4/10?
I expect Rob Miles, Connor Leahy, or Michaël Trazzi would have done enough preparation and had a better approach, and could have done an 8+/10 job. What do you think of those 3? Or even Paul Christiano.
My opinion is that Eliezer should spend whatever points he has with Lex to get one of those four on a future episode.
The easiest point to make here is Yud's horrible performance on Lex's pod. It felt like he did no prep and he brought no notes/outlines/quotes??? Literally why?
Millions of educated viewers and he doesn't prepare... That doesn't seem very rational to me. Doesn't seem like systematically winning to me.
Yud saw the risk of AGI way earlier than almost everyone and has thought a lot about it since then. He has some great takes and some mediocre takes, but all of that doesn't automatically make him a great public spokesperson!!!
He did not come off as convincing, helpful, kind, interesting, well-reasoned, humble, very smart, etc.
To me, he came off as somewhat out of touch, arrogant, weird, anxious, scared, etc. (to the average person that has never heard of Yud before the Lex pod)
Toby and Elon did today what I was literally suggesting: https://twitter.com/tobyordoxford/status/1627414519784910849
@starship006, @Zack_M_Davis, @lc, @Nate Showell do you all disagree with Toby's tweet?
Should the EA and Rationality movement not signal-boost Toby's tweet?
Elon further signal boosts Toby's post
I see your point, and I agree. But I'm not advocating for sabotaging research.
I'm talking about admonishing a corporation for cutting corners and rushing a launch that turned out to be net negative.
Did you retweet this tweet like Eliezer did? https://twitter.com/thegautamkamath/status/1626290010113679360
If not, is it because you didn't want to publicly sabotage research?
Do you agree or disagree with this twitter thread? https://twitter.com/nearcyan/status/1627175580088119296?t=s4eBML752QGbJpiKySlzAQ&s=19
Are you saying that you're unsure if the launch of the chatbot was net positive?
I'm not talking about propaganda. I'm literally saying "signal boost the accurate content that's already out there showing that Microsoft rushed the launch of their AI chatbot making it creepy, aggressive, and misaligned. Showing that it's harder to do right than they thought"
Eliezer (and others) retweeted content admonishing Microsoft; I'm just saying we should be doing more of that.
Why?
I felt I was saying "Simulacrum Level 1: Attempt to describe the world accurately."
The AI was rushed, misaligned, and not a good launch for its users. More people need to know that. It's literally already how NYT and others are describing it (accurately); I'm just suggesting signal-boosting that content.
We're at the pizza place off the green "A Legna"
Looks like the rain will stop before 2 pm. So we can meet up at the green and then decide if we want to head somewhere else.
I'm also on a team trying to build impact certificates/retroactive public goods funding and we are receiving a grant from an FTX Future Fund regrantor to make it happen!
If you're interested in learning more or contributing you can:
- Read about our ongoing $10,000 retro-funding contest (Austin is graciously contributing to the prize pool)
- Submit an EA Forum Post to this retro-funding contest (before July 1st)
- Join our Discord to chat/ask questions
- Read/Comment on our lengthy informational EA forum post "Towards Impact Markets"
It's weird that I have my own startup, completely understand using real users for user testing, and also barely ever "user-test" any of my writing with actual audience members.
Once you shared your document with me, it became super clear that I should, so thank you!
This is really really bad design. It 100% looks like dxu is a new comment thread that is referring to the original poster, not a hidden deleted comment that could be saying the complete opposite of the original poster...