Comments

Comment by ClipMonger on Did ChatGPT just gaslight me? · 2022-12-02T09:27:59.937Z · LW · GW

ChatGPT also loves to post a massive copypasta about what LLMs are and why it doesn't know about things that happened after 2021 (including saying "this was from 2013, therefore I don't know anything about it because I only know about things that happened in 2021 or earlier").

Comment by ClipMonger on Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment? · 2022-12-02T09:22:21.268Z · LW · GW

Is this still feasible now?

Comment by ClipMonger on Late 2021 MIRI Conversations: AMA / Discussion · 2022-12-02T09:21:01.121Z · LW · GW

Will there be one of these for 2022?

Comment by ClipMonger on Lies Told To Children · 2022-12-02T09:18:44.874Z · LW · GW

I think the idea of dath ilan being better at solving racism than Earth social media is really valuable (in basically every way that dath ilan stories are valuable, which is a wide variety of extremely different reasons). It should be covered again, on Project Lawful at least. This is a huge deal: writing more of it can achieve a wide variety of goals, and it definitely isn't something we should sleep on or let die here.

Comment by ClipMonger on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-12-02T09:16:04.767Z · LW · GW

I don't think putting in the guide was a very good idea. It's the unfamiliarity that makes people click away, not any lack of straightforwardness. All that's required is a line that says "just read downward and it will make sense" or something like that, and people will figure it out on their own nearly 100% of the time.

Generally, this stuff needs to be formatted so that people don't click away. It's lame to be so similar to news articles, but that doesn't change the fact that it's instrumentally convergent to prevent people from clicking away.

Comment by ClipMonger on Don't use 'infohazard' for collectively destructive info · 2022-12-02T09:13:04.742Z · LW · GW

It's been almost 6 months and I still mostly hear people using "infohazard" the original way. Not sure what's going on here.

Comment by ClipMonger on DeepMind alignment team opinions on AGI ruin arguments · 2022-12-02T09:12:09.371Z · LW · GW

The pivotal acts proposed are extremely specific solutions to specific problems, and are only applicable in very specific scenarios where AI is clearly on the brink of vastly surpassing human intelligence. That should be clarified whenever they are brought up: it's a thought-experiment solution to a thought-experiment problem, and if it suddenly stops being a thought experiment, then that's great, because you have the solution on a silver platter.

Comment by ClipMonger on AGI Ruin: A List of Lethalities · 2022-12-02T09:08:54.192Z · LW · GW

Is 664 comments the most on any LessWrong post? I'm not sure how to sort by that.

Comment by ClipMonger on A challenge for AGI organizations, and a challenge for readers · 2022-12-02T09:02:54.590Z · LW · GW

Do you need any help distilling? I'm fine with working for free on this one; it looks like a good idea.

Comment by ClipMonger on Appendix: How to run a successful Hamming circle · 2022-12-02T09:02:36.702Z · LW · GW

I noticed that it's been 3 months since this was posted. When can we expect more CFAR content?

Comment by ClipMonger on Relationship Advice Repository · 2022-12-02T09:00:41.143Z · LW · GW

I think it should be easier to share really good advice on LW, period, without needing a really strong justification beyond that it helps people with things that will clearly hold them back otherwise.

Comment by ClipMonger on The Plan - 2022 Update · 2022-12-02T08:53:53.168Z · LW · GW

Will we have to wait until Dec 2023 for the next update, or will the interval halve with each update: 6 months, then 3 months, then 6 weeks, then 3 weeks?

Comment by ClipMonger on What is the best source to explain short AI timelines to a skeptical person? · 2022-12-02T08:51:10.355Z · LW · GW

Probably best not to skip straight to List of Lethalities. But then again, that kind of approach was wrong for "Politics is the Mind-Killer," where it turned out to be best to just have the person dive right in.

Comment by ClipMonger on [deleted post] 2022-12-02T08:48:39.580Z

I think the idea is that it appeared similar to the author's similar post on Putin's speech, which took less work and was well received on LW.

Comment by ClipMonger on [deleted post] 2022-12-02T08:45:41.722Z

I've heard about Soviet rationality; does anyone have a link to the LessWrong post? I can't find it.

Comment by ClipMonger on [deleted post] 2022-12-02T08:44:50.063Z

I definitely like this.

Comment by ClipMonger on Gears-Level Understanding, Deliberate Performance, The Strategic Level · 2022-08-05T21:59:54.240Z · LW · GW

Look for ways to incorporate rationality practice into the things that you are already doing.
if you find that you’re too busy to do useful rationality practice, try thinking of “rationality” as any and all more effective approaches to the things that you’re already doing (instead of as an additional thing to add to the pile).

This is probably the most important known lesson of rationality, and all sorts of results-tested self-improvement gurus like James Clear converge on the same truth: finding new ways to implement a concept, daily, is the best way to acquire it for real. The same goes for programming: your education isn't finished (or even really started) until you've written your own programs that help you out with various things.

Just do it.

Comment by ClipMonger on $20K In Bounties for AI Safety Public Materials · 2022-08-05T21:50:01.358Z · LW · GW

It sure would be nice if the best talking points were ordered by how effective they were, or ranked at all, really. Categorization could also be a good idea.

Comment by ClipMonger on AGI ruin scenarios are likely (and disjunctive) · 2022-07-27T18:44:24.390Z · LW · GW

Mossad was allegedly pretty successful at procuring large amounts of PPE from hostile countries: https://www.tandfonline.com/doi/full/10.1080/08850607.2020.1783620. They also had covert contact tracing, and one way or another their case counts seemed pretty low until Omicron.

The first few weeks of COVID lockdowns went extremely well: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7675749/