Memorizing a Deck of Cards 2020-09-16T01:31:05.201Z
Tofly's Shortform 2020-09-06T01:08:23.032Z
2010s Predictions Review 2019-12-30T21:47:14.699Z
Thoughts on the 5-10 Problem 2019-07-18T18:56:14.339Z


Comment by Tofly on Cornell Meetup · 2021-11-23T22:54:02.106Z · LW · GW

I am a first-year CS PhD student at Cornell, and interested (though not currently working on it). I will DM you.

Comment by Tofly on [Prediction] We are in an Algorithmic Overhang, Part 2 · 2021-10-17T17:35:13.716Z · LW · GW

The brain may also be excessively complicated to defend against parasites.

Comment by Tofly on 2021 Darwin Game - Tundra · 2021-10-05T01:45:06.200Z · LW · GW

Which random factors caused the frostwing snippers to die out? Them migrating out? Competitors or predators migrating in? Or is there some chance of not getting the seed, even if they're the only species left? I didn't get a good look at the source code, but I thought things were fairly deterministic once only one species was left.

Comment by Tofly on The Trolley Problem · 2021-09-28T21:38:12.862Z · LW · GW

In most formulations, the five people are on the track ahead, not in the trolley.

I took a look at the course you mentioned:

It looks like I got some of the answers wrong.

Where am I? 

In the trolley.  You, personally, are not in immediate danger.

Who am I?

A trolley driver.

Who's in the trolley?

You are.  No one in the trolley is in danger.

Who's on the tracks?

Five workers ahead, one to the right.

Do I work for the trolley company?


The problem was not as poorly specified as you implied it to be.

Comment by Tofly on The Trolley Problem · 2021-09-27T23:52:08.759Z · LW · GW

What year is it?

Current year.

Where am I? 

Near a trolley track.

Who am I?


Who's in the trolley?

You don't know.

Who's on the tracks?

You don't know.

Who designed the trolley?

You don't know.

Who is responsible for the brake failure?

You don't know.

Do I work for the trolley company?

Assume that you're the only person who can pull the lever in time, and it wouldn't be difficult or costly for you to do so. If your answer still depends on whether or not you work for the trolley company, you are different from most (WEIRD) people, and should explain both cases explicitly.

If so, what are its standard operating procedures for this situation?

Either there are none, or you're actually not in the situation above, but creating those procedures right now.

What would my family think?

I don't know, maybe you have an idea.

Would either decision affect my future job prospects?


Is there a way for me to fix the systemic problem of trolleys crashing in thought experiments?

Maybe, but not before the trolley crashes.

Can I film the crash and post the video online?


Comment by Tofly on Vaccination and House Rules · 2021-04-24T16:42:57.834Z · LW · GW

Note: it's probably not a good idea to post a photo of your vaccine card online.

Comment by Tofly on Matryoshka Faraday Box · 2020-11-28T07:50:32.526Z · LW · GW

If Scarlet pressed the PANIC button then she would receive psychiatric counseling, three months mandatory vacation, optional retirement at full salary and disqualification for life from the most elite investigative force in the system.

This sounds familiar, but some quick searching didn't bring anything up.  Is it a reference to something?

Comment by Tofly on [deleted post] 2020-09-17T18:25:06.834Z

From the old wiki discussion page:

I'm thinking we can leave most of the discussion of probability to Wikipedia. There might be more to say about Bayes as it applies to rationality but that might be best shoved in a separate article, like Bayesian or something. Also, I couldn't actually find any OB or LW articles directly about Bayes' theorem, as opposed to Bayesian rationality--if anyone can think of one, please add it. --A soulless automaton 19:31, 10 April 2009 (UTC)

  • I'd rather go for one article than break out a separate one for Bayesian - we can start splitting things out if the articles start to grow too long. --Paul Crowley (ciphergoth) 22:58, 10 April 2009 (UTC)
  • I added what I thought was the minimal technical information. I lean towards keeping separate concepts separate, even if the articles are sparse. If someone else feels it would be worthwhile to combine them though, go ahead. --BJR 23:07, 10 April 2009 (UTC)
  • I really would prefer to keep the maths and statistics separate from the more nebulous day-to-day rationality stuff, especially since Wikipedia already does an excellent job of covering the former, while the latter is much more OB/LW-specific. --A soulless automaton 21:59, 11 April 2009 (UTC)
Comment by Tofly on The Wiki is Dead, Long Live the Wiki! [help wanted] · 2020-09-16T19:11:53.240Z · LW · GW

For wiki pages which are now tags, should we remove linked LessWrong posts, since they are likely listed below?

What should the convention be for linking to people's names? For example, I have seen the following:

  • LessWrong profile
  • Personal website/blog
  • Wiki/tag page on person
  • Wikipedia article on person
  • No link
  • No name

Finally, should the "see also" section be a comma-separated list after the first paragraph, or a bulleted list at the end of the page?

Comment by Tofly on Tofly's Shortform · 2020-09-06T03:00:48.044Z · LW · GW

Thanks. I had skimmed that paper before, but my impression was that it only briefly acknowledged my main objection regarding computational complexity on page 4. Most of the paper involves analogies with evolution and civilization, which I don't think are very useful; my argument is that the difficulty of designing intelligence should grow exponentially at high levels, so the difficulty of relatively low-difficulty tasks like designing human intelligence doesn't seem that important.

On page 35, Eliezer writes:

I am not aware of anyone who has defended an “intelligence fizzle” seriously and at great length.

I will read it again more thoroughly, and see if there's anything I missed.

Comment by Tofly on Tofly's Shortform · 2020-09-06T01:08:23.461Z · LW · GW

I believe that fast takeoff is impossible, because of computational complexity.

This post presents a pretty clear summary of my thoughts. Essentially, if the difficulty of “designing an AI with intelligence level n” grows faster than linearly in n, this will counteract any benefit an AI receives from its increased intelligence, and so its intelligence will converge. I would like to see a more formal model of this.
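As a toy sketch of that formal model (all functional forms here are my own illustrative assumptions, not anything from the post): suppose an agent with intelligence n contributes n units of design effort per step, and raising intelligence by dn costs marginal_cost(n) · dn of effort. Whether takeoff explodes or converges then depends entirely on how that marginal cost scales:

```python
def trajectory(marginal_cost, steps=50, n0=1.0):
    """Toy self-improvement loop: an agent with intelligence n
    contributes n units of design effort per step, and raising
    intelligence by dn costs marginal_cost(n) * dn effort.
    The functional forms are illustrative assumptions only."""
    n = n0
    for _ in range(steps):
        n += n / marginal_cost(n)  # this step's effort n buys dn = n / marginal_cost(n)
    return n

linear_total = trajectory(lambda n: 1.0)    # constant marginal cost: n doubles each step (explosion)
quadratic_total = trajectory(lambda n: n)   # marginal cost ~ n: n grows by 1 per step (linear growth)
cubic_total = trajectory(lambda n: n * n)   # marginal cost ~ n^2: increments shrink (convergence)
```

With constant marginal cost, intelligence doubles every step; once marginal cost grows like n, each step buys only a fixed increment; at n² the increments themselves shrink over time, which is the converging behavior described above.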

I am aware that Gwern has responded to this argument, but I feel like he missed the main point. He gives many arguments showing that an AI unable to solve NP-complete problems in polynomial time could still do better than a human, or still gain a benefit from performing mildly better than a human.

But the concern here isn’t really about AIs performing better than humans at certain tasks. It’s about them rapidly, and recursively, ascending to godlike levels of intelligence. That’s what I’m arguing is impossible. And there’s a difference between “superhuman at all tasks” and “godlike intelligence enabling what seems like magic to lesser minds”.

One of Eliezer’s favorite examples of how a powerful AI might take over the world is by solving the protein folding problem, designing some nano-machines, and using an online service to have them constructed. The problem with this scenario is the part where the AI “solves the protein folding problem”. That the problem is NP-hard means that it will be difficult, no matter how intelligent the AI is.
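To make the “no matter how intelligent” point concrete (a toy sketch of my own, not the actual protein folding problem): even in the heavily simplified 2D lattice model of folding, a brute-force folder must consider every self-avoiding chain conformation, and the number of these grows exponentially with chain length, roughly like 2.64^n on the square lattice.

```python
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_conformations(length):
    """Count self-avoiding walks of `length` steps on the 2D square
    lattice -- the search space a brute-force folder for a toy
    lattice model would have to enumerate."""
    def extend(path):
        if len(path) == length + 1:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:  # chain may not revisit a site
                total += extend(path + [nxt])
        return total
    return extend([(0, 0)])

sizes = [count_conformations(n) for n in range(1, 8)]
# -> [4, 12, 36, 100, 284, 780, 2172], each term ~2.6x the last
```

Exponential growth in the search space doesn't by itself prove no clever algorithm exists, but it is the sense in which NP-hardness makes raw intelligence insufficient in the worst case.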

Something I’ve felt confused about, as I’ve been writing this up, is this problem of “what is the computational complexity of designing an AI with intelligence level N?” I have an intuition that there should be some “best architecture”, at least for any given environment, and that this architecture should be relatively “simple”. And once this architecture has been discovered, that’s pretty much it for self-improvement: you can still benefit from acquiring extra resources and improving your hardware, but these have diminishing returns. The alternative to this is that there’s an infinite hierarchy of increasingly good AI designs, which seems implausible to me. (Though there is an infinite hierarchy of increasingly large numbers, so maybe it isn’t.)

Now, it could be that even without fast takeoff, AGI is still a threat. But this is a different argument than "as soon as artificial intelligence surpasses human intelligence, recursive self-improvement will take place, creating an entity we can't hope to comprehend, let alone oppose."

Edit: Here is a discussion involving Yudkowsky regarding some of these issues.

Comment by Tofly on Thoughts on the 5-10 Problem · 2019-07-19T16:42:33.493Z · LW · GW

Doesn't that mean the agent never makes a decision?

Comment by Tofly on Thoughts on the 5-10 Problem · 2019-07-19T16:37:54.503Z · LW · GW

Yes, you could make the code more robust by allowing the agent to act once it's found a proof that any action is superior. Then, it might find a proof like

U(F) = 5

U(~F) = 10

10 > 5

U(~F) > U(F)

However, there's no guarantee that this will be the first proof it finds.

When I say "look for a proof", I mean something like "for each of the first 10^(10^100) Gödel numbers, see if it encodes a proof. If so, return that action."

In simple cases like the one above, it likely will find the correct proof first. However, as the universe gets more complicated (as our universe is), there is a greater chance that a spurious proof will be found first.
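The order-dependence can be shown with a stub (everything below is a placeholder of my own invention, not real proof checking; in the actual 5-10 problem the spurious proof is genuinely valid, via Löb's theorem, which is what makes the problem hard):

```python
def proof_search_agent(candidate_proofs, is_valid):
    """Act on the FIRST valid proof found, in enumeration order.
    A stand-in for "check the first 10^(10^100) Godel numbers";
    the "proofs" here are stubbed as dicts, not real derivations."""
    for proof in candidate_proofs:
        if is_valid(proof):
            return proof["action"]
    return None  # search exhausted without ever acting

# Both "proofs" are illustrative placeholders, not actual derivations.
honest = {"claim": "U(~F) = 10 > 5 = U(F)", "action": "~F"}
spurious = {"claim": "the agent takes ~F, so any claim about U(F) holds vacuously",
            "action": "F"}

# Same proof set, different enumeration orders, different actions:
first_found_honest = proof_search_agent([honest, spurious], lambda p: True)
first_found_spurious = proof_search_agent([spurious, honest], lambda p: True)
```

Since the agent commits to whichever valid proof it enumerates first, nothing in the search procedure itself privileges the honest proof over the spurious one.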