Posts

LessOnline Festival Updates Thread 2024-04-18T21:55:08.003Z
LessOnline (May 31—June 2, Berkeley, CA) 2024-03-26T02:34:00.000Z
Vote on Anthropic Topics to Discuss 2024-03-06T19:43:47.194Z
Voting Results for the 2022 Review 2024-02-02T20:34:59.768Z
Vote on worthwhile OpenAI topics to discuss 2023-11-21T00:03:03.898Z
Vote on Interesting Disagreements 2023-11-07T21:35:00.270Z
Online Dialogues Party — Sunday 5th November 2023-10-27T02:41:00.506Z
More or Fewer Fights over Principles and Values? 2023-10-15T21:35:31.834Z
Dishonorable Gossip and Going Crazy 2023-10-14T04:00:35.591Z
Announcing Dialogues 2023-10-07T02:57:39.005Z
Closing Notes on Nonlinear Investigation 2023-09-15T22:44:58.488Z
Sharing Information About Nonlinear 2023-09-07T06:51:11.846Z
A report about LessWrong karma volatility from a different universe 2023-04-01T21:48:32.503Z
Shutting Down the Lightcone Offices 2023-03-14T22:47:51.539Z
Open & Welcome Thread — February 2023 2023-02-15T19:58:00.435Z
Rationalist Town Hall: FTX Fallout Edition (RSVP Required) 2022-11-23T01:38:25.516Z
LessWrong Has Agree/Disagree Voting On All New Comment Threads 2022-06-24T00:43:17.136Z
Announcing the LessWrong Curated Podcast 2022-06-22T22:16:58.170Z
Good Heart Week Is Over! 2022-04-08T06:43:46.754Z
Good Heart Week: Extending the Experiment 2022-04-02T07:13:48.353Z
April 2022 Welcome & Open Thread 2022-04-02T03:46:13.743Z
Replacing Karma with Good Heart Tokens (Worth $1!) 2022-04-01T09:31:34.332Z
12 interesting things I learned studying the discovery of nature's laws 2022-02-19T23:39:47.841Z
Ben Pace's Controversial Picks for the 2020 Review 2021-12-27T18:25:30.417Z
Book Launch: The Engines of Cognition 2021-12-21T07:24:45.170Z
An Idea for a More Communal Petrov Day in 2022 2021-10-21T21:51:15.270Z
Facebook is Simulacra Level 3, Andreessen is Level 4 2021-04-28T17:38:03.981Z
Against "Context-Free Integrity" 2021-04-14T08:20:44.368Z
"Taking your environment as object" vs "Being subject to your environment" 2021-04-11T22:47:04.978Z
I'm from a parallel Earth with much higher coordination: AMA 2021-04-05T22:09:24.033Z
Why We Launched LessWrong.SubStack 2021-04-01T06:34:00.907Z
"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party 2021-03-22T23:44:19.795Z
"You and Your Research" – Hamming Watch/Discuss Party 2021-03-19T00:16:13.605Z
Review Voting Thread 2020-12-30T03:23:06.075Z
Final Day to Order LW Books by Christmas for US 2020-12-09T23:30:36.877Z
The LessWrong 2018 Book is Available for Pre-order 2020-12-01T08:00:00.000Z
AGI Predictions 2020-11-21T03:46:28.357Z
Rationalist Town Hall: Pandemic Edition 2020-10-21T23:54:03.528Z
Sunday October 25, 12:00PM (PT) — Scott Garrabrant on "Cartesian Frames" 2020-10-21T03:27:12.739Z
Sunday October 18, 12:00PM (PT) — Garden Party 2020-10-17T19:36:52.829Z
Have the lockdowns been worth it? 2020-10-12T23:35:14.835Z
Fermi Challenge: Trains and Air Cargo 2020-10-05T21:51:45.281Z
Postmortem to Petrov Day, 2020 2020-10-03T21:30:56.491Z
Open & Welcome Thread – October 2020 2020-10-01T19:06:45.928Z
What are good rationality exercises? 2020-09-27T21:25:24.574Z
Honoring Petrov Day on LessWrong, in 2020 2020-09-26T08:01:36.838Z
Sunday August 23rd, 12pm (PDT) – Double Crux with Buck Shlegeris and Oliver Habryka on Slow vs. Fast AI Takeoff 2020-08-22T06:37:07.173Z
Forecasting Thread: AI Timelines 2020-08-22T02:33:09.431Z
[Oops, there is actually an event] Notice: No LW event this weekend 2020-08-22T01:26:31.820Z
Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) 2020-08-20T00:49:49.639Z

Comments

Comment by Ben Pace (Benito) on Paul Christiano named as US AI Safety Institute Head of AI Safety · 2024-04-23T04:04:59.635Z · LW · GW

I'm not in touch with the ground truth in this case, but for those reading along without knowing the context, I'll mention that it wouldn't be the first time that David has misrepresented what people in the Effective Altruism biorisk professional network believe.[1]

(I will mention that David later apologized for handling that situation poorly and wasting people's time[2], which I think reflects positively on him.)

  1. ^

    See Habryka's response to Davidmanheim's comment here from March 7th 2020, such as this quote:

    Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should "talk to someone who works in biorisk at FHI, Openphil, etc. to confirm that this is a really bad idea").

    We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response).

  2. ^

    See David's own June 25th reply to the same comment.

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-19T21:29:11.382Z · LW · GW

The first sounds like the sort of thing that turns out to be surprisingly useful (nobody ever gives me health advice). Mm, maybe folks can agree-react to this sentence if they too want to go to such a session?

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-19T21:23:06.484Z · LW · GW

Nice, just had a good call with Alkjash, who is coming and will be preparing 2 layman-level math talks about questions he's been thinking about.

Other ideas we chatted about having at LessOnline include maybe having some discussions about doing research inside and outside of academia, and also about learning from GlowFic writers how to write well collaboratively. (Let me know if you'd be interested in either of these!)

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-19T18:32:09.936Z · LW · GW

I think on-site housing is pretty scarce, though we're going to make more high-density rooms in response to demand for that. Tickets aren't scarce; our venue could fit like a 700-person event, so I don't expect to hit the limits.

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-19T02:04:23.975Z · LW · GW

Still working on setting it up, once I have the details I'll announce them (e.g. pricing and whatnot).

I'm aiming to have childcare available in some form for the full 9-day LessOnline-to-Summer-Camp-to-Manifest period. I'm excited for folks to come with their full families.

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-18T22:02:55.165Z · LW · GW

Also here are some sessions tentatively scheduled (some may change):

  1. Fighting Moloch in Politics talk/Q&A with Martin Sustrik
  2. One-Shot 'Baba Is You' Rationality Exercises activity with Raymond Arnold
  3. Write Your First Fact-Post activity led by Sarah Constantin
  4. Currently Untitled Sequel to And All the Shoggoths Merely Players narrated by Zack Davis and John Wentworth
  5. Write Your First Glowfic activity led by Alicorn
  6. Podcast and Q&A with Alexander Wales (author of Worth the Candle) and Daystar Eld, moderated by Jamie Wahls
  7. Wanted: People Who Want by Jacob Falkovich
  8. Magic-The-Gathering Color Wheel for Writers talk by Duncan Sabien

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-18T20:38:16.465Z · LW · GW

Question: What is this event celebrating?

I've been trying to think a bit about the boundary of this event. Naturally I love LessWrong, but it's not simply about LessWrong.

This is not comprehensive, but here are three subcultures that I admire and am hoping to celebrate at this event.

  1. Aspiring Rationalists – people interested in understanding and improving their cognitive algorithms
  2. Rational Fiction Readers & Writers – people who read and write fiction that illustrates intelligent characters and lawful worlds
  3. Public Worldview Builders and/or Polymaths – people earnestly trying to directly understand the world and explain what they’ve learned with online writing

Below are a few notes about each of them and what I hope from bringing them together.

Aspiring Rationalists

I continue to believe the mission to understand our cognitive algorithms and improve them is a worthwhile one, and there are still lots of people working on advancing it, whether that's finding new theoretical underpinnings for decision-making, coming up with pragmatic heuristics for improving the truth-tracking nature of discussions, building up models of how the world is trying to fight your ability to understand it, or something quite different.

I'm hoping to bring folks together who are interested in this mission and give them the ability to get ideas from other people who have been working on this too.

Rational Fiction Readers & Writers

Near to the aspiring rationalists is Rational Fiction. For detailed pointers, check the description in the r/rational subreddit sidebar or the TV Tropes page, but broadly it's fiction that really respects the characters' intelligence and the rules of how the fictional world works. It's the sort of story where a key plot point might involve doing a Fermi estimate of how much steel is required to withstand a given magical blast, or where a key moment of dramatic tension involves the protagonist carefully reasoning about their situation and reaching a novel conclusion, one that in principle the reader themselves could've deduced given the information they had available.

This community is very active and large! r/rational has 25k subscribers, and the best works are extremely popular. At the time of publication, Harry Potter and the Methods of Rationality had the most reviews (37k) of any work on FanFiction.net (now second most) and is the most-followed HP fic (22k), Mother of Learning has the most views (18M) on Royal Road, and Worth the Candle is the 3rd most-read original fiction on Archive of Our Own (I started it a month ago and am now 220 chapters in; no spoilers please). There are so many more amazing works; I'd guess something like 300–1,000 rational-fics (especially counting all of the GlowFics).

I think this often (not always) has a similar nature to the old hard sci-fi, where people try to set up rules of a world and really ask what would follow from these rules, and authors put in a lot of work to be very creative-and-lawful.

I’m hoping to bring people together to talk about stories and the ideas that go into them and come away with lots of new ideas for stories they could write.

Public Worldview Builders and/or Polymaths

(This bit is something Ray helped a lot in writing)

There’s a style of discourse that has felt pretty important to me. It used to be kinda The Central Thing going on on the internet. It’s become less so, with the rise of social media. That Thing is a bit hard to pin down, but it’s something like:  

“Longform, thoughtful writing, earnestly trying to figure stuff out.”  

LessOnline is a festival celebrating that kind of writing. Furthermore, I have a strong hunch that there’s a craft everyone invited is involved with in some way: an art of paying attention to the information you get about the world, piecing things together into a coherent perspective, extracting strong arguments to show to others, and writing in a way that’s engaging and clear and leaves people with a little bit of a better understanding of the world, all of which hopefully adds up to leave me and humanity with a much more accurate and more leveraged map of the world.

The idea here is to invite the part of the blogosphere that seems sort of like "kindred spirits" to LessWrong, approaching similar kinds of questions from different perspectives. One framing is "we're looking for people who are building out a cohesive worldview in public."

I’m hoping to bring such people together to share ideas they’ve been thinking about and knowledge they’ve learned, and also to share insights about the shared craft. 

A quick and incomplete list of other subcultures I'm excited about bringing together

Econ Blogosphere, Statistics Blogosphere, History Blogosphere, Finance Blogosphere, The Nerdy Cartoon-O-Sphere (e.g. XKCD, SMBC, Abstruse Goose, many more), Agent Foundations researchers, Genetic Enhancement researchers, more.

Comment by Ben Pace (Benito) on A Review of In-Context Learning Hypotheses for Automated AI Alignment Research · 2024-04-18T18:41:33.507Z · LW · GW

Not sure where the right place to raise this complaint is, but having just seen it for the first time: really, "MARS"? I checked, and this is not affiliated with MATS, which has run something like 6 programs with ~300 people going through them. This seems too close in branding space, and I'd recommend picking a more distinct name.

Comment by Ben Pace (Benito) on I'm open for projects (sort of) · 2024-04-18T18:34:54.918Z · LW · GW

Oh cool! Um, first thought, register interest in this?

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-17T22:31:20.105Z · LW · GW

Update #1

Lots of info to share! Here's a bunch of awesome people confirmed as coming.

Eliezer Yudkowsky – The Sequences | HPMOR | Project Lawful
Scott Alexander – SlateStarCodex | Astral Codex Ten | UNSONG
Zvi Mowshowitz – The Zvi | Don't Worry About The Vase
Alexander Wales – Worth the Candle | Alexander Wales
Kevin Simler – Melting Asphalt | The Elephant in the Brain
Katja Grace – World Spirit Sock Puppet | AI Impacts
Sarah Constantin – Rough Diamonds
Martin Sustrik – 250bpm | LW
Duncan Sabien – Homo Sabiens | r!Animorphs
John Wentworth – LW
Abram Demski – LW
Alicorn – Alicorn | LW
Jacob Falkovich – PutANumOnIt | LW
Zack Davis – LW
Daystar Eld – Daystar Eld
GeneSmith – LW
Ozy Brennan – Thing of Things

Two activities I'm personally quite excited to go to are Sarah Constantin's "Write Your First Fact Post" and Alicorn's "Write Your First GlowFic" (I want to try to do both of these things!).

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-17T22:27:54.710Z · LW · GW

Questions Thread

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-17T22:27:51.763Z · LW · GW

Thinking Thread

Comment by Ben Pace (Benito) on LessOnline Festival Updates Thread · 2024-04-17T22:27:43.405Z · LW · GW

Updates Thread

Comment by Ben Pace (Benito) on FHI (Future of Humanity Institute) has shut down (2005–2024) · 2024-04-17T19:19:09.733Z · LW · GW

I don't believe so. The gossip I've heard leads me to think that it was substantially downstream of attempts to scale the institute via GovAI and the Research Scholars program (which involved hiring lots of junior people, something the university doesn't like), as well as the university in general being a terrible bureaucracy (e.g. as incompetent and micromanage-y as "remove the plants you bought because the university cannot commit to watering them").

Comment by Ben Pace (Benito) on RTFB: On the New Proposed CAIP AI Bill · 2024-04-17T00:06:55.628Z · LW · GW

@Zach Stein-Perlman I'm not really sure why you gave a thumbs-down. Probably you're not trying to communicate that you think there shouldn't be deontological injunctions against genocide. I think someone renouncing any deontological injunctions against such devastating and irreversible actions would be both pretty scary and reprehensible. But I failed to come up with a different hypothesis for what you are communicating with a thumbs-down on that statement (to be clear I wouldn't be surprised if you provided one).

Comment by Ben Pace (Benito) on The Best Tacit Knowledge Videos on Every Subject · 2024-04-16T02:23:12.518Z · LW · GW

Done.

Comment by Ben Pace (Benito) on Anthropic AI made the right call · 2024-04-16T00:26:07.740Z · LW · GW

I have been around the community for a long time (since before Anthropic existed), in a very involved way, and I had also basically never heard of this before the Claude 3 release.... So... it was definitely not the case that this was just an obvious thing that everyone in the community knew about or that Anthropic senior staff were regularly saying

For the record a different Anthropic staffer told me confidently that it was widespread in 2022, the year before you joined, so I think you're wrong here.

(The staffer preferred that I not quote them verbatim in public so I've DM'd you a direct quote.)

Comment by Ben Pace (Benito) on Anthropic AI made the right call · 2024-04-15T03:18:23.850Z · LW · GW

Did you not read the discussion from when this came out? Your post doesn't engage with the primary criticism, which is about Anthropic staff lying about the company's plans, including the evidence suggesting the CEO lied to one of their investors, leaving the investor with the belief that Anthropic had a “commitment to not meaningfully advance the frontier with a launch”. In contrast to the private communications with an investor, here's what Anthropic's launch post says of their new model (emphasis added):

Today, we're announcing the Claude 3 model family, which sets new industry benchmarks across a wide range of cognitive tasks... Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence.

Comment by Ben Pace (Benito) on RTFB: On the New Proposed CAIP AI Bill · 2024-04-11T23:39:26.810Z · LW · GW

Two obvious points:

  1. It is deontologically more ethical to not yourself kill everyone in the world.
  2. America has an incredible ability to set fashions, and if it took on these policies then I think a great number of others would follow suit.

Comment by Ben Pace (Benito) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-08T03:10:13.931Z · LW · GW

That interview is hysterically funny. 

I think that as a 15-year-old I could've had as hard a time as Metz did there. I mean, not if it were a normal conversation (then I would be far more curious about the questions being brought up), but if it were a first connection and I were trying to build a relationship and interview someone for information in a high-stakes situation, then I can imagine being very confused and just trying to placate (and at that age I may have given a false pretense of understanding my interlocutor).

Yet from my current position everything Vassar said was quite straightforward (though the annotations were helpful).

Comment by Ben Pace (Benito) on Do I count as e/acc for exclusion purposes? · 2024-04-02T19:39:00.273Z · LW · GW

I would prefer not to have people reply to me about people's personal sexual activities/events (without exceptional reason such as a credible accusation of rape or other criminality). 

I also do not think that attendees of people's personal sexual events should be policed by others (nor be included when publicly discussing attendance of LW events).

Comment by Ben Pace (Benito) on Do I count as e/acc for exclusion purposes? · 2024-04-02T17:18:04.218Z · LW · GW

I wouldn’t use that phrasing, but I live and work from Lighthaven, and a great number of large Berkeley x-risk network parties happen here, and I chat with the organizers, so I have a lot of interaction with events and organizers. I’m definitely more in contact with semi-professional events, like parties run by MATS and AI Impacts and Lightcone, and there’s of course many purely social events that happen in this extended network that I don’t know much about. I also go to larger non-organizational parties run by friends like 2x/month (e.g. 20-100 people).

Comment by Ben Pace (Benito) on Do I count as e/acc for exclusion purposes? · 2024-04-02T01:34:25.963Z · LW · GW

I've never heard of such a thing happening.

Comment by Ben Pace (Benito) on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-02T00:09:25.817Z · LW · GW

This signature made me laugh:

Chicxulub - Kill many birds with one stone

Also, if you haven't done so before, I recommend googling 'Chicxulub'.

Comment by Ben Pace (Benito) on Habryka's Shortform Feed · 2024-04-01T23:12:53.418Z · LW · GW

Screenshot for posterity.

Comment by Ben Pace (Benito) on General Thoughts on Secular Solstice · 2024-03-30T00:39:35.227Z · LW · GW

Yes, but none of this requires overt hostility to religion (as opposed to just rejection). I think that as long as religious people accept the conversational norms and culture on LW, their bringing in some new perspectives (that are still compatible with overall LW norms) ought to be welcome. 

I think I agree with not going out of one's way to be rude, I generally think politeness is worthwhile (and have worked to become more polite myself in recent years).[1]

I also welcome people who adhere to any religion sharing insights that they have about the world here on LessWrong.

At the same time, I am 'hostile' to religions — or at least, I am 'hostile' to any religion that claims to have infallible leaders who receive the truth directly from God(s), or that have texts about history and science and ethics that are unalterable, where adherents to the religion are not allowed to disagree with them. 

I am 'hostile' in the sense that if (prior to me working on LessWrong) a group of devout Hindus were becoming moderators of LessWrong (and were intending to follow their ethical inside views in shaping the culture of the site) I would've taken active action to prevent them from having that power (e.g. publicly written arguments against the decision, moved to collect signatures against it, etc). I also think that if I were hypothetically freely given the opportunity to lower the hard power that religion has in some ecosystem I cared about, such as removing a Catholic priest from having control over an existential-risk grant-making institution, I would be willing to go out of my way to do so, and I think that this would be good.

Perhaps a better term is to say that I 'oppose' religions with (IMO) inherently corrupt epistemologies, and do not want them to have power over me or the things that I care about. 

Apart from that, there are many interesting individuals who adhere to religions who have valuable insight into how the world works, and I'm grateful to them when they share such insights openly, especially here on LessWrong.

  1. ^

    I want to mention that I don't wish to entirely police other people's hostility. I was not raised in a religious household, but I've met many who were and who were greatly hurt due to the religious practices and culture of their family and local community and I do not begrudge them their instinctive hostility to it when it appears in their environment.

Comment by Ben Pace (Benito) on General Thoughts on Secular Solstice · 2024-03-29T23:55:54.745Z · LW · GW

Staggering them sounds kind of nice. It could allow there to be a solstice event on the actual solstice (Dec 21st) as well as small solstice celebrations in the week leading up to it, for those who cannot be there on that date.

I'd be excited to try that at Lighthaven (if the solstice organizers wanted to give it a shot), though I also really like having a big get-together. Perhaps we could have a week-long solstice celebration at Lighthaven with multiple rituals and other little fun things for people to do.

Comment by Ben Pace (Benito) on LessOnline (May 31—June 2, Berkeley, CA) · 2024-03-28T01:05:27.679Z · LW · GW

Both our teams (LessOnline and Manifest) are putting together big events on consecutive weekends, and my guess is that there'll be a number of folks who want to travel for both. So for everyone around in the interim we're offering on-site housing and food, and we'll be helping folks run their own sub-events.

I haven't got anything to announce there yet but I have ~3 events in the early stages of being organized, all of which I personally want to attend.

Comment by Ben Pace (Benito) on Daniel Kahneman has died · 2024-03-27T17:30:38.311Z · LW · GW

Well, this utterly sucks.

Comment by Ben Pace (Benito) on LessOnline (May 31—June 2, Berkeley, CA) · 2024-03-26T23:32:43.749Z · LW · GW

I've sent like 80 emails out, still tracking them all down! 

(If anyone has contact info for The Last Psychiatrist or Hotel Concierge or Christian Rudder from the OK Cupid blog, I'd appreciate a connection.)

Comment by Ben Pace (Benito) on Benito's Shortform Feed · 2024-03-17T21:48:57.359Z · LW · GW

I’d say most people assume I want “the answer” rather than “some bits of information”.

Comment by Ben Pace (Benito) on Tamsin Leake's Shortform · 2024-03-17T21:40:19.729Z · LW · GW

I don’t think it applies to safety researchers at AI labs, though; I am shocked how much those folks can make.

Comment by Ben Pace (Benito) on Benito's Shortform Feed · 2024-03-17T19:44:12.479Z · LW · GW

A common experience I have is that it takes like 1-2 paragraphs of explanation for why I want this info (e.g. "Well, I'm wondering if so-and-so should fly in a day earlier to travel with me, but it requires going to a different airport, and I'm trying to figure out whether the time it'd take to drive to me would add up to too much and also..."), but if they just gave me their ~70% confidence interval when I asked, we could skip the whole context-sharing.

Comment by Ben Pace (Benito) on Benito's Shortform Feed · 2024-03-16T20:41:02.390Z · LW · GW

Often I am annoyed when I ask someone (who I believe has more information than me) a question and they say "I don't know". I'm annoyed because I want them to give me some information. Such as:

"How long does it take to drive to the conference venue?" 

"I don't know." 

"But is it more like 10 minutes or more like 2 hours?" 

"Oh it's definitely longer than 2 hours."

But perhaps I am the one making a mistake. For instance, the question "How many countries are there?" can be answered "I'd say between 150 and 400" or it can be answered "195", and the former is called "an estimate" and the latter is called "knowing the answer". There is a folk distinction here and perhaps it is reasonable for people to want to preserve the distinction between "an estimate" and "knowing the answer".

So in the future, to get what I want, I should say "Please can you give me an estimate for how long it takes to drive to the conference venue?".

And personally I should strive, when people ask me a question to which I don't know the answer, to say "I don't know the answer, but I'd estimate between X and Y."

Comment by Ben Pace (Benito) on Toward a Broader Conception of Adverse Selection · 2024-03-15T18:48:23.092Z · LW · GW

I think it is pretty obviously a joke :P

Comment by Ben Pace (Benito) on Toward a Broader Conception of Adverse Selection · 2024-03-15T01:59:30.302Z · LW · GW

(And in case anyone was led astray: the Marx quote at the start is from Groucho, not Karl.)

Comment by Ben Pace (Benito) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T05:28:41.375Z · LW · GW

K. I recommend that people include links for those of us who mostly do not read Twitter.

Comment by Ben Pace (Benito) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T05:09:57.324Z · LW · GW

Crossposted from where?

Comment by Ben Pace (Benito) on My Clients, The Liars · 2024-03-09T03:44:53.050Z · LW · GW

Curated! Very interesting to get a vivid sense of what goes on when people are facing strong pressures to lie, and how they go about doing this. Both their adamance that they were right and their transparency to you were both fascinating. And this was very engagingly written. Thanks for the post!

Comment by Ben Pace (Benito) on Using axis lines for good or evil · 2024-03-07T16:09:55.740Z · LW · GW

As someone who's spent a while designing charts for published books, I have generally been strongly against axis lines. One thing that has really influenced my approach to using lines is the section of Butterick's Practical Typography on tables.

Nowadays I remove all lines on tables and charts unless there's a strong argument in favor of one; implied lines are much easier on the eye.

This post overall moved me toward using gridlines a little bit more, for intuitively measuring distance when that's important.

Comment by Ben Pace (Benito) on Wholesome Culture · 2024-03-07T08:08:49.631Z · LW · GW

I think this essay raises many good points, but doesn’t grapple with (to me) the hardest part of wholesomeness: when do I ignore parts of the whole?

I think that sometimes you make the choice not to think about something for a while. For instance, trivially, you can only track so many hypotheses in detail. While I am designing a product that I think will change the world, I will spend most of my time considering different hypotheses for what sort of product users want, and considering how to quickly falsify them and iterate. I will not spend a ton of time questioning whether capitalism is even good for civilization. Insofar as I’m choosing to give this product a shot, that is not a good use of mental resources; the assumption-questioning comes before and after (and occasionally in the middle, if I have exceptional cause for a crisis of faith).

To me the hard question of wholesomeness is about knowing when you’re choosing to look away from a thing because on reflection it’s not worth the cognitive space to keep tracking it as a consideration, and knowing when you’re doing it improperly because it’s painful or emotionally draining or personally inconvenient to keep looking at the thing.

(And that emotional cost itself is a factor to be weighed on the scales.)

Some written guidance on this would be valuable, I’d say.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T21:20:50.028Z · LW · GW

I assign >10% that Anthropic will at some point pause development for at least a year as a result of safety evaluations.

I think that if Anthropic cannot make a certain product-line safe and then they pivot to scaling up a different kind of model / product-line, I am not counting this as 'pausing development'. 

If they pause all novel capability development and scaling and just let someone else race ahead while pivoting to some other thing like policy or evals or something (while continuing to iterate on existing product lines) then I am counting that as pausing development.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T20:26:32.434Z · LW · GW

I've added it back in. Seemed like a fairly non-specific word to me.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T20:15:08.692Z · LW · GW

Plausible. I was imitating the phrasing used by an Anthropic funder here. I'm open to editing it in the next hour or so if you think there's a better phrasing.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T20:12:10.535Z · LW · GW

I'm interested to know why you think that. I've not thought about it a ton so I don't think I'd be a great dialogue partner, but I'd be willing to give it a try, or you could give an initial bulleted outline of your reasoning here.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T19:32:54.204Z · LW · GW

I assign >20% that many of the Anthropic employees who quit OpenAI signed Non-Disparagement Agreements with OpenAI.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T19:29:22.909Z · LW · GW

Anthropic has (in expectation) brought forward the date of superintelligent AGI development (and not slowed it down).

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T19:28:57.365Z · LW · GW

I assign >50% probability to the claim that Anthropic will release products far beyond the frontier within the next 5 years.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T19:28:52.305Z · LW · GW

I assign >20% probability to the claim that Anthropic will release products far beyond the frontier within the next 5 years.

Comment by Ben Pace (Benito) on Vote on Anthropic Topics to Discuss · 2024-03-06T19:28:38.858Z · LW · GW

I think Anthropic staff verbally communicated to many prospective employees, collaborators and funders that they were committed to not meaningfully advance the frontier with a product launch.