Posts

A one-question Turing test for GPT-3 2022-01-22T18:17:56.463Z
Paul Crowley's Shortform 2020-05-28T13:39:55.241Z
Why no total winner? 2017-10-15T22:01:37.920Z
Circles of discussion 2016-12-16T04:35:28.086Z
Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time" 2015-01-22T20:21:48.539Z
Slides online from "The Future of AI: Opportunities and Challenges" 2015-01-16T11:17:23.647Z
Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial 2015-01-15T16:33:48.640Z
Robin Hanson's "Overcoming Bias" posts as an e-book. 2014-08-31T13:26:24.555Z
Open thread for December 17-23, 2013 2013-12-17T20:45:00.004Z
A diagram for a simple two-player game 2013-11-10T08:59:35.069Z
Meetup : London social 2013-10-07T11:45:57.286Z
Meetup : London meetup: thought experiments 2013-09-19T20:29:33.168Z
Meetup : London social meetup 2013-09-07T15:22:04.693Z
Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future 2013-06-26T13:17:54.357Z
Welcome to Less Wrong! (July 2012) 2012-07-18T17:24:51.381Z
Useful maxims 2012-07-11T11:56:57.489Z
Quantified Self recommendations 2012-05-18T10:16:07.740Z
Holden Karnofsky's Singularity Institute critique: Is SI the kind of organization we want to bet on? 2012-05-11T07:25:56.637Z
Holden Karnofsky's Singularity Institute critique: other objections 2012-05-11T07:22:13.699Z
Holden Karnofsky's Singularity Institute Objection 3 2012-05-11T07:19:18.688Z
Holden Karnofsky's Singularity Institute Objection 2 2012-05-11T07:18:05.379Z
Holden Karnofsky's Singularity Institute Objection 1 2012-05-11T07:16:29.696Z
Meetup : London 2012-04-26T20:03:09.209Z
How accurate is the quantum physics sequence? 2012-04-17T06:54:18.488Z
How was your meetup? 2012-04-16T06:11:24.129Z
Meetup : London 2012-04-06T16:42:02.277Z
Statistical error in half of neuroscience papers 2011-09-09T23:07:33.743Z
An EPub of Eliezer's blog posts 2011-08-11T14:20:31.512Z
Unknown unknowns 2011-08-05T12:55:37.560Z
Martinenaite and Tavenier on cryonics 2011-08-04T07:39:02.702Z
Meetup : London mini-meetup 2011-08-03T18:17:15.313Z
Robert Ettinger, founder of cryonics, now CI's 106th patient 2011-07-25T12:11:52.631Z
Free holiday reading? 2011-06-28T08:59:01.845Z
The Ideological Turing Test 2011-06-25T22:17:25.746Z
Charles Stross: Three arguments against the singularity 2011-06-22T09:52:08.250Z
London meetup, Sunday 2011-05-15 14:00, near London Bridge 2011-05-13T20:54:32.138Z
GiveWell.org interviews SIAI 2011-05-05T16:29:09.944Z
Reminder: London meetup, Sunday 2pm, near Holborn 2011-04-28T09:26:04.851Z
London meetup, Sunday 1 May, 2pm, near Holborn 2011-04-03T09:47:23.852Z
London meetup, Sunday 2011-03-06 14:00, near Holborn (reminder) 2011-02-26T08:10:02.466Z
Open Thread, January 2011 2011-01-10T11:14:49.179Z
London meetup, Shakespeare's Head, Sunday 2011-03-06 14:00 2011-01-09T15:43:35.015Z
Weird characters in the Sequences 2010-11-18T08:27:20.737Z
Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) 2010-10-30T09:31:29.456Z
London UK, Saturday 2010-07-03: "How to think rationally about the future" 2010-05-31T15:23:20.972Z
LessWrong meetup, London UK, 2010-06-06 16:00 2010-05-23T13:46:44.536Z
A LessWrong poster for the Humanity+ conference next Saturday 2010-04-14T21:38:46.831Z
Meetup after Humanity+ , London, Saturday 2010-04-24? 2010-04-10T12:54:01.601Z
Less Wrong London meetup, tomorrow (Sunday 2010-04-04) 16:00 2010-04-03T09:36:05.289Z
A survey of anti-cryonics writing 2010-02-07T23:26:52.715Z

Comments

Comment by Paul Crowley (ciphergoth) on Using axis lines for good or evil · 2024-03-19T13:09:11.071Z · LW · GW

I'm sad that the post doesn't go on to say how to get matplotlib to do the right thing in each case!
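
(For what it's worth, a minimal sketch of two spine treatments in matplotlib, assuming the post's cases map roughly onto "axis lines through the origin" and "no axis lines at all"; the data is made up.)

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-3, 3, 100)
fig, (ax1, ax2) = plt.subplots(1, 2)

# Case 1: axis lines through the origin, textbook style.
ax1.plot(x, x**2 - 2)
ax1.spines["left"].set_position("zero")
ax1.spines["bottom"].set_position("zero")
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)

# Case 2: no axis lines at all; hide every spine, keep the ticks.
ax2.plot(x, x**2 - 2)
for side in ("top", "right", "left", "bottom"):
    ax2.spines[side].set_visible(False)

plt.show()
```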

Comment by Paul Crowley (ciphergoth) on Nathan Helm-Burger's Shortform · 2024-02-11T14:07:57.025Z · LW · GW

I thought you wanted to sign physical things with this? How will you hash them? Otherwise, how is this different from a standard digital signature?

Comment by Paul Crowley (ciphergoth) on Nathan Helm-Burger's Shortform · 2024-02-09T02:40:30.707Z · LW · GW

The difficult thing is tying the signature to the thing signed. Even if signatures are single-use, unless the relying party sees everything you ever sign immediately, a signature can be transferred from something you signed that the relying party didn't see to something you didn't sign.

Comment by Paul Crowley (ciphergoth) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T06:42:07.042Z · LW · GW

Of course this market is "Conditioning on Nonlinear bringing a lawsuit, how likely are they to win?" which is a different question.

Comment by Paul Crowley (ciphergoth) on Paul Crowley's Shortform · 2023-04-28T17:46:46.267Z · LW · GW

Extracted from a Facebook comment:

I don't think the experts are expert on this question at all. Eliezer's train of thought essentially started with "Supposing you had a really effective AI, what would follow from that?" His thinking wasn't at all predicated on any particular way you might build a really effective AI, and knowing a lot about how to build AI isn't expertise on what the results are when it's as effective as Eliezer posits. It's like thinking you shouldn't have an opinion on whether there will be a nuclear conflict over Kashmir unless you're a nuclear physicist.

Comment by Paul Crowley (ciphergoth) on Campaign for AI Safety: Please join me · 2023-04-17T00:51:47.123Z · LW · GW

Thanks, that's useful. Sad to see no Eliezer, no Nate or anyone from MIRI or having a similar perspective though :(

Comment by Paul Crowley (ciphergoth) on Campaign for AI Safety: Please join me · 2023-04-08T15:37:02.398Z · LW · GW

The lack of names on the website seems very odd.

Comment by Paul Crowley (ciphergoth) on Campaign for AI Safety: Please join me · 2023-04-08T15:36:24.741Z · LW · GW

Don't let your firm opinion get in the way of talking to people before you act. It was Elon's determination to act before talking to anyone that led to the creation of OpenAI, which seems to have sealed humanity's fate.

Comment by Paul Crowley (ciphergoth) on Fighting without hope · 2023-03-02T19:06:17.469Z · LW · GW

This is explicitly the discussion the OP asked to avoid.

Comment by Paul Crowley (ciphergoth) on Nonprofit Boards are Weird · 2022-07-29T20:21:16.554Z · LW · GW

This is true whether we adopt my original idea that each board member keeps what they learn from these conversations entirely to themselves, or Ben's better proposed modification that it's confidential but can be shared with the whole board.

Comment by Paul Crowley (ciphergoth) on Nonprofit Boards are Weird · 2022-06-24T02:29:38.130Z · LW · GW

Perhaps this is a bad idea, but it has occurred to me that if I were a board member, I would want to quite frequently have confidential conversations with randomly selected employees.

Comment by Paul Crowley (ciphergoth) on Is there a convenient way to make "sealed" predictions? · 2022-05-09T19:58:50.052Z · LW · GW

For cryptographic security, I would use HMAC with a random key. Then to reveal, you publish both the message and the key. This allows you, for example, to securely commit to a one-character message like "Y".
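
A minimal sketch of the commit/reveal flow using Python's standard hmac and secrets modules (the variable names and framing are mine):

```python
import hashlib
import hmac
import secrets

# Commit: publish only the tag. The 32-byte random key is what stops
# anyone brute-forcing a short message like "Y" out of the commitment.
message = b"Y"
key = secrets.token_bytes(32)
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("commitment:", tag)

# Reveal: publish message and key together; anyone can re-derive the tag.
recomputed = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(recomputed, tag)
```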

Comment by Paul Crowley (ciphergoth) on A one-question Turing test for GPT-3 · 2022-01-23T00:01:38.704Z · LW · GW

I sincerely doubt very many people would propose mayonnaise!

Comment by Paul Crowley (ciphergoth) on Internet Literacy Atrophy · 2021-12-28T01:00:50.517Z · LW · GW

The idea is that I can do all this from my browser, including writing the code.

Comment by Paul Crowley (ciphergoth) on Internet Literacy Atrophy · 2021-12-27T23:17:30.620Z · LW · GW

I'm not sure I see how this resembles what I described?

Comment by Paul Crowley (ciphergoth) on Internet Literacy Atrophy · 2021-12-26T20:16:50.508Z · LW · GW

I would love a web-based tool that allowed me to enter data in a spreadsheet-like way, present it in a spreadsheet-like way, but use code to bridge the two.
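
(A rough sketch of the shape of that idea, with pandas standing in for the code bridge; the imagined tool, with editable in-browser grids, is hypothetical.)

```python
import pandas as pd

# Input "sheet": in the imagined tool, entered cell by cell in the browser.
expenses = pd.DataFrame({
    "item": ["rent", "food", "transport"],
    "monthly": [2000, 600, 150],
})

# The bridge: ordinary code instead of per-cell formulas.
summary = expenses.assign(yearly=expenses["monthly"] * 12)
summary.loc["total"] = ["all items", summary["monthly"].sum(), summary["yearly"].sum()]

# Output "sheet": rendered as a grid again.
print(summary)
```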

Comment by Paul Crowley (ciphergoth) on Covid Christmas · 2021-12-06T23:14:14.622Z · LW · GW

(I'm considering putting a second cube on top to get five more filters per fan, which would also make it quieter.)


Four more filters per fan, right? (The two faces where the cubes meet can't hold filters.)

Comment by Paul Crowley (ciphergoth) on Whole Brain Emulation: Looking At Progress On C. elegans · 2021-10-18T04:32:20.569Z · LW · GW

Any thoughts on this today?

Comment by Paul Crowley (ciphergoth) on Santa Cruz, CA – ACX Meetups Everywhere 2021 · 2021-10-01T13:40:44.340Z · LW · GW

Any thoughts on parking? Thanks!

Comment by Paul Crowley (ciphergoth) on The Point of Trade · 2021-06-22T19:33:13.251Z · LW · GW

I think this is diminishing marginal returns of consumption, not production.

Comment by Paul Crowley (ciphergoth) on Affordances · 2021-04-03T13:29:04.763Z · LW · GW

I would guess a lot of us picked the term up from Donald Norman's The Design of Everyday Things.

Comment by Paul Crowley (ciphergoth) on Direct effects matter! · 2021-03-14T07:41:47.437Z · LW · GW

The image of this tweet isn't present here, only on Substack.

Comment by Paul Crowley (ciphergoth) on The rationalist community's location problem · 2020-10-12T12:24:46.118Z · LW · GW

True; in addition, places vary a lot in their freak-tolerance.

Comment by Paul Crowley (ciphergoth) on The rationalist community's location problem · 2020-10-10T22:18:24.697Z · LW · GW

If I lived in Wyoming and wanted to go to a fetish event, I guess I'm driving to maybe Denver, around 3h40 away? I know this isn't a consideration for everyone but it's important to me.

Comment by Paul Crowley (ciphergoth) on A simple device for indoor air management · 2020-10-02T16:20:29.318Z · LW · GW

Why the 6in fan rather than the 8in one? Would seem to move a lot more air for nearly the same price.

Comment by Paul Crowley (ciphergoth) on A diagram for a simple two-player game · 2020-09-28T01:05:23.050Z · LW · GW

Thank you!

Comment by Paul Crowley (ciphergoth) on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-15T03:36:00.867Z · LW · GW

Reminiscent of Freeman Dyson's 2005 answer to the question "What do you believe is true even though you cannot prove it?":

Since I am a mathematician, I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is.
Numbers that are exact powers of two are 2, 4, 8, 16, 32, 64, 128 and so on. Numbers that are exact powers of five are 5, 25, 125, 625 and so on. Given any number such as 131072 (which happens to be a power of two), the reverse of it is 270131, with the same digits taken in the opposite order. Now my statement is: it never happens that the reverse of a power of two is a power of five.
The digits in a big power of two seem to occur in a random way without any regular pattern. If it ever happened that the reverse of a power of two was a power of five, this would be an unlikely accident, and the chance of it happening grows rapidly smaller as the numbers grow bigger. If we assume that the digits occur at random, then the chance of the accident happening for any power of two greater than a billion is less than one in a billion. It is easy to check that it does not happen for powers of two smaller than a billion. So the chance that it ever happens at all is less than one in a billion. That is why I believe the statement is true.
But the assumption that digits in a big power of two occur at random also implies that the statement is unprovable. Any proof of the statement would have to be based on some non-random property of the digits. The assumption of randomness means that the statement is true just because the odds are in its favor. It cannot be proved because there is no deep mathematical reason why it has to be true. (Note for experts: this argument does not work if we use powers of three instead of powers of five. In that case the statement is easy to prove because the reverse of a number divisible by three is also divisible by three. Divisibility by three happens to be a non-random property of the digits).
It is easy to find other examples of statements that are likely to be true but unprovable. The essential trick is to find an infinite sequence of events, each of which might happen by accident, but with a small total probability for even one of them happening. Then the statement that none of the events ever happens is probably true but cannot be proved.
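
(Dyson's "easy to check" step is indeed easy to reproduce; a sketch in Python, with the 2**300 bound chosen arbitrarily by me:)

```python
# No power of two up to 2**300 has a digit-reversal that is a power of five.
# A reversal has the same digit count as the original (powers of two never
# end in 0), so collecting powers of five up to 10 * 2**300 suffices.
limit = 2 ** 300
powers_of_five = set()
p = 1
while p <= 10 * limit:
    powers_of_five.add(p)
    p *= 5

n = 1
while n <= limit:
    if int(str(n)[::-1]) in powers_of_five:
        print("counterexample:", n)
    n *= 2
print("no counterexample below 2**300")
```
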
Comment by Paul Crowley (ciphergoth) on Paul Crowley's Shortform · 2020-06-17T18:19:17.175Z · LW · GW

No sarcasm.

Comment by Paul Crowley (ciphergoth) on Covid-19: My Current Model · 2020-06-02T03:40:59.241Z · LW · GW

You're not able to directly edit it yourself?

Comment by Paul Crowley (ciphergoth) on Paul Crowley's Shortform · 2020-06-01T15:13:02.095Z · LW · GW

On Twitter I linked to this saying

Basic skills of decision making under uncertainty have been sorely lacking in this crisis. Oxford University's Future of Humanity Institute is building up its Epidemic Forecasting project, and needs a project manager.

Response:

I'm honestly struggling with a polite response to this. Here in the UK, Dominic Cummings has tried a Less Wrong approach to policy making, and our death rate is terrible. This idea that a solution will somehow spring from left-field maverick thinking is actually lethal.

Comment by Paul Crowley (ciphergoth) on Paul Crowley's Shortform · 2020-05-28T13:39:55.828Z · LW · GW

For the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.

Comment by Paul Crowley (ciphergoth) on New Year's Predictions Thread · 2020-05-05T05:48:52.181Z · LW · GW

I look back and say "I wish he had been right!"

Comment by Paul Crowley (ciphergoth) on New Year's Predictions Thread · 2020-05-05T05:33:40.212Z · LW · GW

Britain was in the EU, but it kept Pounds Sterling, it never adopted the Euro.

Comment by Paul Crowley (ciphergoth) on New Year's Predictions Thread · 2020-05-05T05:21:28.457Z · LW · GW

How many opportunities do you think we get to hear someone make clearly falsifiable ten-year predictions, have them turn out to be false, and then have that person have the honour necessary to say "I was very, very wrong"? Not a lot! So any reflections you have to add on this would, I think, be super valuable. Thanks!

Comment by Paul Crowley (ciphergoth) on New Year's Predictions Thread · 2020-05-05T05:19:01.391Z · LW · GW

Hey, looks like you're still active on the site, would be interested to hear your reflections on these predictions ten years on - thanks!

Comment by Paul Crowley (ciphergoth) on Hero Licensing · 2019-12-28T05:15:30.709Z · LW · GW

It is, of course, third-party visible that Eliezer-2010 *says* it's going well. Anyone can say that, but not everyone does.

Comment by Paul Crowley (ciphergoth) on A sealed prediction · 2019-12-25T18:12:22.040Z · LW · GW

I note that nearly eight years later, the preimage was never revealed.

Actually, I have seen many hashed predictions, and I have never seen a preimage revealed. At this stage, if someone reveals a preimage to demonstrate a successful prediction, I will be about as impressed as if someone won a lottery, given the number of losing lottery tickets lying about.

Comment by Paul Crowley (ciphergoth) on Why so much variance in human intelligence? · 2019-10-02T00:04:09.804Z · LW · GW

Half formed thoughts towards how I think about this:

Something like Turing completeness is at work, where our intelligence gains the ability to loop in on itself, and build on its former products (eg definitions) to reach new insights. We are at the threshold of the transition to this capability, half god and half beast, so even a small change in the distance we are across that threshold makes a big difference.

Comment by Paul Crowley (ciphergoth) on Why so much variance in human intelligence? · 2019-10-01T23:46:31.188Z · LW · GW

As such, if you observe yourself to be in a culture that is able to reach technological maturity, you're probably "the stupidest such culture that could get there, because if it could be done at a stupider level then it would've happened there first."

Who first observed this? I say this a lot, but I'm now not sure if I first thought of it or if I'm just quoting well-understood folklore.

Comment by Paul Crowley (ciphergoth) on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-30T08:32:10.789Z · LW · GW

May I recommend spoiler markup? Just start the line with >!

Another (minor) "Top Donor" opinion. On the MIRI issue: agree with your concerns, but continue donating, for now. I assume they're fully aware of the problem they're presenting to their donors and will address it in some fashion. If they do not, I might adjust next year. The hard thing is that MIRI still seems the most differentiated org, in approach and talent, that can use funds (vs OpenAI and DeepMind and well-funded academic institutions).

Comment by Paul Crowley (ciphergoth) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-12-19T19:32:11.895Z · LW · GW

I note that this is now done, as I have for so many things here. Great work, team!

Spoiler space test

Comment by Paul Crowley (ciphergoth) on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-19T19:21:40.503Z · LW · GW

The rot13'd content, hidden using spoiler markup:

Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.

Despite FHI producing very high quality research, GPI having a lot of promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the requirement to pre-fund researchers’ entire contract significantly increases the effective cost of funding research there. On the other hand, hiring people in the bay area isn’t cheap either.

This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year.

I think of CSER and GCRI as being relatively comparable organisations, as 1) they both work on a variety of existential risks and 2) both primarily produce strategy pieces. In this comparison I think GCRI looks significantly better; it is not clear their total output, all things considered, is less than CSER’s, but they have done so on a dramatically smaller budget. As such I will be donating some money to GCRI again this year.

ANU, Deepmind and OpenAI have all done good work but I don’t think it is viable for (relatively) small individual donors to meaningfully support their work.

Ought seems like a very valuable project, and I am torn on donating, but I think their need for additional funding is slightly less than some other groups.

AI Impacts is in many ways in a similar position to GCRI, with the exception that GCRI is attempting to scale by hiring its part-time workers to full-time, while AI Impacts is scaling by hiring new people. The former is significantly lower risk, and AI Impacts seems to have enough money to try out the upsizing for 2019 anyway. As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019.

The Foundational Research Institute have done some very interesting work, but seem to be adequately funded, and I am somewhat more concerned about the danger of risky unilateral action here than with other organisations.

I haven’t had time to evaluate the Foresight Institute, which is a shame because at their small size marginal funding could be very valuable if they are in fact doing useful work. Similarly, Median and Convergence seem too new to really evaluate, though I wish them well.

The Future of Life Institute grants for this year seem more valuable to me than the previous batch, on average. However, I prefer to directly evaluate where to donate, rather than outsourcing this decision.

I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. The current situation, with a binary employed/not-employed distinction, and upfront payment for uncertain output, seems suboptimal. I also hope to significantly reduce overhead (for everyone but me) by not having an application process or any requirements for grantees beyond having produced good work. This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.

Comment by Paul Crowley (ciphergoth) on Nyoom · 2018-12-15T19:35:02.798Z · LW · GW

I think the Big Rationalist Lesson is "what adjustment to my circumstances am I not making because I Should Be Able To Do Without?"

Comment by Paul Crowley (ciphergoth) on Topological Fixed Point Exercises · 2018-11-17T16:57:43.882Z · LW · GW

Just to get things started, here's a proof for #1:

Proof by induction that the number of bicolor edges is odd iff the ends don't match. Base case: a single node has matching ends and an even number (zero) of bicolor edges. Extending with a non-bicolor edge changes neither condition, and extending with a bicolor edge changes both; in both cases the induction hypothesis is preserved.
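
A brute-force sanity check of the same claim, separate from the proof (my sketch):

```python
from itertools import product

# The claim: on a 2-colored path, the number of bicolor edges is odd
# iff the endpoint colors differ. Check every coloring of short paths.
for n in range(1, 12):
    for colors in product((0, 1), repeat=n):
        bicolor = sum(a != b for a, b in zip(colors, colors[1:]))
        assert (bicolor % 2 == 1) == (colors[0] != colors[-1])
print("claim verified for all 2-colorings of paths up to 11 nodes")
```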

Comment by Paul Crowley (ciphergoth) on Last Chance to Fund the Berkeley REACH · 2018-07-01T00:08:27.703Z · LW · GW

From what I hear, any plan for improving MIRI/CFAR space that involves the collaboration of the landlord is dead in the water; they just always say no to things, even when it's "we will cover all costs to make this lasting improvement to your building".

Comment by Paul Crowley (ciphergoth) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T22:53:24.990Z · LW · GW

Of course I should have tested it before commenting! Thanks for doing so.

Comment by Paul Crowley (ciphergoth) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:36:22.576Z · LW · GW

Spoiler markup. This post has lots of comments which use ROT13 to disguise their content. There's a Markdown syntax for this.

Comment by Paul Crowley (ciphergoth) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:31:01.462Z · LW · GW

I note that this is now done.

Comment by Paul Crowley (ciphergoth) on LessWrong 2.0 Feature Roadmap & Feature Suggestions · 2018-06-17T17:30:54.379Z · LW · GW

I note that this is now done.

Comment by Paul Crowley (ciphergoth) on On the Chatham House Rule · 2018-06-14T14:27:17.868Z · LW · GW

"If you're running an event that has rules, be explicit about what those rules are, don't just refer to an often-misunderstood idea" seems unarguably a big improvement, no matter what you think of the other changes proposed here.