Posts

Applications are open for CFAR workshops in Prague this fall! 2022-07-19T18:29:19.172Z

Comments

Comment by John Steidley (JohnSteidley) on 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) · 2024-07-06T07:28:46.018Z · LW · GW

Because it's obviously annoying and burning the commons. Imagine if I made a bot that posted the same comment on every post on LessWrong; surely that wouldn't be acceptable behavior.

Comment by John Steidley (JohnSteidley) on Intuition for 1 + 2 + 3 + … = -1/12 · 2024-02-18T18:15:58.994Z · LW · GW

The finish was quite a jump for me. I guess I could go and try to stare at your parentheses and figure it out myself, but mostly I feel somewhat abandoned at that step. I was excited when I found 1 + 2 + 4 + 8 + ... = -1 to be making sense, but that excitement doesn't quite feel sufficient for me to want to decode the relationships between the terms in those two(?) patterns and all the relevant values.

Comment by John Steidley (JohnSteidley) on "Is There Anything That's Worth More" · 2023-08-03T10:54:45.046Z · LW · GW

Zack, the second line of your quoted lyrics should be "I guess *we already..."

Comment by John Steidley (JohnSteidley) on 3 Levels of Rationality Verification · 2023-04-20T00:32:47.955Z · LW · GW

I'm currently one of the four members of the core team at CFAR (though the newest addition by far). I also co-ran the Prague Workshop Series in the fall of 2022. I've been significantly involved with CFAR since its most recent instructor training program in 2019.

I second what Eli Tyre says here. The closest thing to "rationality verification" that CFAR did in my experience was the 2019 instructor training program, which was careful to point out it wasn't verifying rationality broadly, just certifying the ability to teach one specific class.

Comment by John Steidley (JohnSteidley) on NVIDIA and Microsoft releases 530B parameter transformer model, Megatron-Turing NLG · 2021-10-11T20:45:03.611Z · LW · GW

I wasn't replying to Quintin

Comment by John Steidley (JohnSteidley) on NVIDIA and Microsoft releases 530B parameter transformer model, Megatron-Turing NLG · 2021-10-11T18:36:13.691Z · LW · GW

I can't tell what you mean. Can you elaborate?

Comment by John Steidley (JohnSteidley) on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-11T09:10:31.108Z · LW · GW

I think this comment would be better placed as a reply to the post that I'm linking. Perhaps you should put it there?

Comment by John Steidley (JohnSteidley) on Did anybody calculate the Briers score for per-state election forecasts? · 2020-11-10T18:37:18.310Z · LW · GW

https://www.lesswrong.com/posts/muEjyyYbSMx23e2ga/scoring-2020-u-s-presidential-election-predictions

Comment by John Steidley (JohnSteidley) on Gifts Which Money Cannot Buy · 2020-11-04T22:45:22.649Z · LW · GW

My summary: Give gifts using the parts of your world-model that are strongest. Usually the answer isn't going to end up being based on your understanding of their hobby.

Comment by John Steidley (JohnSteidley) on A simple device for indoor air management · 2020-10-02T01:32:17.144Z · LW · GW

Window AC units don't actually pull air from outside.

https://homeairguides.com/how-does-a-window-air-conditioner-work/

Comment by John Steidley (JohnSteidley) on A simple device for indoor air management · 2020-10-02T01:28:43.386Z · LW · GW

Hey, I've been looking into air quality quite a bit recently. I have several questions.

What air quality sensor are you using? How are you getting outdoor data?

I suspect some of the confusion in the results may be due to circulation within the home and monitor placement. Have you thought much about circulation?

Additionally, it looks like indoor PM2.5 is tracking outdoor PM2.5. Have you thought much about other sources of ventilation?

Comment by John Steidley (JohnSteidley) on interpreting GPT: the logit lens · 2020-09-01T04:20:40.801Z · LW · GW

It doesn't sound hard at all. The things Gwern is describing are the same sort of thing that people do for interpretability where they, e.g., find an image that maximizes the probability of the network predicting a target class.

Of course, you need access to the model, so only OpenAI could do it for GPT-3 right now.
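The "find an input that maximizes a target class's probability" idea above can be sketched with gradient ascent on the input. This is a minimal illustration, not anyone's actual interpretability pipeline: the classifier here is a hypothetical linear-softmax model with random weights, so the gradient can be written by hand. Real work does the same thing on a deep network via autodiff.

```python
import numpy as np

# Activation maximization sketch: ascend the gradient of log p(target | x)
# with respect to the INPUT x, holding the (hypothetical) model fixed.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))  # illustrative weights: 10 classes, 64-dim input
b = np.zeros(10)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def target_prob(x, target):
    return softmax(W @ x + b)[target]

target = 3
x = rng.normal(size=64) * 0.01  # start from a near-zero input
lr = 0.1
for _ in range(200):
    p = softmax(W @ x + b)
    # For a linear-softmax model, d/dx log p[target] = W[target] - sum_c p[c] W[c]
    grad = W[target] - p @ W
    x += lr * grad  # gradient ascent step on the input
```

After a few hundred steps the optimized input assigns nearly all probability mass to the chosen class; with access to a real model's weights (hence the point about OpenAI and GPT-3), the same loop runs through backprop instead of a hand-derived gradient.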

Comment by John Steidley (JohnSteidley) on TurnTrout's shortform feed · 2020-05-25T20:20:32.795Z · LW · GW

I've been thinking along similar lines!

From my notes from 2019-11-24: "Deontology is like the learned policy of bounded rationality of consequentialism"

Comment by John Steidley (JohnSteidley) on Open & Welcome Thread - December 2019 · 2020-05-23T23:22:45.419Z · LW · GW

Welcome!