Posts

Beliefs at different timescales 2018-11-04T20:10:59.223Z · score: 34 (10 votes)
Counterfactuals and reflective oracles 2018-09-05T08:54:06.303Z · score: 8 (4 votes)
Counterfactuals, thick and thin 2018-07-31T15:43:59.187Z · score: 29 (12 votes)
An environment for studying counterfactuals 2018-07-11T00:14:49.756Z · score: 15 (5 votes)
Logical counterfactuals and differential privacy 2018-02-04T00:17:43.000Z · score: 1 (1 votes)
Oracle machines for automated philosophy 2015-02-17T15:10:04.000Z · score: 1 (1 votes)
Meetup : Berkeley: Beta-testing at CFAR 2014-03-19T05:32:26.521Z · score: 2 (3 votes)
Meetup : Berkeley: Implementation Intentions 2014-02-27T07:06:29.784Z · score: 1 (2 votes)
Meetup : Berkeley: Ask vs. Guess (vs. Tell) Culture 2014-02-19T20:16:30.017Z · score: 0 (1 votes)
Meetup : Berkeley: The Twelve Virtues 2014-02-12T19:56:53.045Z · score: 0 (1 votes)
Meetup : Berkeley: Talk on communication 2014-01-24T03:57:50.244Z · score: 1 (2 votes)
Meetup : Berkeley: Weekly goals 2014-01-22T18:16:38.107Z · score: 1 (2 votes)
Meetup : Berkeley meetup: 5-minute exercises 2014-01-15T21:02:26.223Z · score: 1 (2 votes)
Meetup : Meetup at CFAR, Wednesday: Nutritionally complete bread 2014-01-07T10:25:33.016Z · score: 1 (2 votes)
Meetup : Berkeley: Hypothetical Apostasy 2013-06-12T17:53:40.651Z · score: 3 (4 votes)
Meetup : Berkeley: Board games 2013-06-04T16:21:17.574Z · score: 2 (3 votes)
Meetup : Berkeley: The Motivation Hacker by Nick Winter 2013-05-28T06:02:07.554Z · score: 1 (2 votes)
Meetup : Berkeley: To-do lists and other systems 2013-05-22T01:09:51.917Z · score: 3 (4 votes)
Meetup : Berkeley: Munchkinism 2013-05-14T04:25:21.643Z · score: 2 (3 votes)
Meetup : Berkeley: Information theory and the art of conversation 2013-05-05T22:35:00.823Z · score: 1 (2 votes)
Meetup : Berkeley: Dungeons & Discourse 2013-03-03T06:13:05.399Z · score: 3 (4 votes)
Meetup : Berkeley: Board games 2013-01-29T03:09:23.841Z · score: 3 (4 votes)
Meetup : Berkeley: CFAR focus group 2013-01-23T02:06:35.830Z · score: 3 (4 votes)
A fungibility theorem 2013-01-12T09:27:25.637Z · score: 21 (26 votes)
Proof of fungibility theorem 2013-01-12T09:26:09.484Z · score: 3 (8 votes)
Meetup : Berkeley meetup: Board games! 2013-01-08T20:40:42.392Z · score: 1 (2 votes)
Meetup : Berkeley: How Robot Cars Are Near 2012-12-17T19:46:33.980Z · score: 1 (2 votes)
Meetup : Berkeley: Boardgames 2012-12-05T18:28:09.814Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Hermeneutics! 2012-11-26T05:40:29.186Z · score: 3 (4 votes)
Meetup : Berkeley meetup: Deliberate performance 2012-11-13T23:58:50.742Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Success stories 2012-10-23T22:10:43.964Z · score: 0 (1 votes)
Meetup : Different location for Berkeley meetup 2012-10-17T17:19:56.746Z · score: 1 (2 votes)
[Link] "Fewer than X% of Americans know Y" 2012-10-10T16:59:38.114Z · score: 36 (38 votes)
Meetup : Different location: Berkeley meetup 2012-10-03T08:26:09.910Z · score: 1 (2 votes)
Meetup : Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party 2012-09-24T14:46:05.475Z · score: 5 (6 votes)
Meetup : Vienna meetup 2012-09-22T13:14:23.668Z · score: 6 (7 votes)
Meetup report: How harmful is cannabis, and will you change your habits? 2012-09-09T04:50:10.943Z · score: 11 (12 votes)
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind 2012-08-29T03:50:23.867Z · score: 4 (5 votes)
Meetup : Berkeley meetup: Operant conditioning game 2012-08-21T15:07:36.431Z · score: 3 (4 votes)
Meetup : Berkeley meetup: Discussion about startups 2012-08-14T17:09:10.149Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Board game night 2012-08-01T06:40:27.322Z · score: 1 (2 votes)
Meetup : Berkeley meetup: Rationalist group therapy 2012-07-25T05:50:53.138Z · score: 4 (5 votes)
Meetup : Berkeley meetup: Argument mapping software 2012-07-18T19:50:27.973Z · score: 3 (4 votes)
Meetup : Berkeley meta-meetup 2012-07-06T08:02:11.372Z · score: 1 (2 votes)
Meetup : Berkeley meetup 2012-06-24T04:36:23.833Z · score: 0 (1 votes)
Meetup : Small Berkeley meetup at Zendo 2012-06-20T08:49:46.065Z · score: 0 (1 votes)
Meetup : Big Berkeley meetup 2012-06-13T01:36:01.863Z · score: 0 (1 votes)
Meetup : Small Berkeley meetup: Nisan's house 2012-06-05T12:27:21.310Z · score: 0 (1 votes)
Meetup : Big Berkeley meetup 2012-05-28T21:25:58.312Z · score: 2 (3 votes)
Meetup : Second Copenhagen meetup 2012-05-20T20:07:04.800Z · score: 0 (1 votes)

Comments

Comment by nisan on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-08T21:22:20.968Z · score: 3 (4 votes) · LW · GW

It sounds like you want a word for "Alice is wrong, and that's terrible". In that case, you can say "Alice is fucking wrong", or similar.

Comment by nisan on Why it took so long to do the Fermi calculation right? · 2019-01-10T17:25:06.596Z · score: 3 (2 votes) · LW · GW

Good point. In that case the Drake equation must be modified to include panspermia probabilities and the variance in time-to-civilization among our sister lineages. I'm curious what kind of Bayesian update we get on those...
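
For reference, the standard Drake equation that such a modification would extend (standard notation, not quoted from the original discussion):

$$N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$

where $R_*$ is the rate of star formation, $f_p$ the fraction of stars with planets, $n_e$ the number of habitable planets per such system, $f_l$, $f_i$, and $f_c$ the fractions that go on to develop life, intelligence, and detectable civilizations, and $L$ the lifetime of a detectable civilization. Panspermia would presumably correlate the $f_l$ factor across sister lineages rather than leaving it independent per system.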

Comment by nisan on An environment for studying counterfactuals · 2018-12-19T20:23:45.130Z · score: 2 (1 votes) · LW · GW

The observation can provide all sorts of information about the universe, including whether exploration occurs. The exact set of possible observations depends on the decision problem.

The observation and the action can have any relationship, but the most interesting case is when one can infer the action from the observation with certainty.

Comment by nisan on Beliefs at different timescales · 2018-11-10T21:51:53.184Z · score: 2 (1 votes) · LW · GW

Thanks, I made this change to the post.

Comment by nisan on Beliefs at different timescales · 2018-11-10T21:49:28.138Z · score: 4 (2 votes) · LW · GW

Yeah, I think the fact that Elo only models the macrostate makes this an imperfect analogy. I think a better analogy would involve a hybrid model, which assigns a probability to a chess game based on whether each move is plausible (using a policy network), and whether the higher-rated player won.

I don't think the distinction between near-exact and nonexact models is essential here. I bet we could introduce extra entropy into the short-term gas model, and the rollout would still be better than the Boltzmann distribution for predicting the microstate.

Comment by nisan on Beliefs at different timescales · 2018-11-05T16:33:37.268Z · score: 10 (3 votes) · LW · GW

Sure: If we can predict the next move in the chess game, then we can predict the move after that, then the next, and so on. By iterating, we can predict the whole game. If we have a probability for each next move, we multiply them together to get the probability of the whole game.

Conversely, if we have a probability for an entire game, then we can get a probability for just the next move by adding up all the probabilities of all games that can follow from that move.
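
In symbols (notation mine, not the original post's): if $m_1, \ldots, m_k$ are the moves of a game $g$, then

$$P(g) = \prod_{i=1}^{k} P(m_i \mid m_1, \ldots, m_{i-1}),$$

and, conversely, the probability of a particular next move is recovered by marginalizing over continuations:

$$P(m_1 = m) = \sum_{g \text{ starting with } m} P(g).$$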

Comment by nisan on Beliefs at different timescales · 2018-11-05T16:21:42.662Z · score: 2 (1 votes) · LW · GW

Thanks, I didn't know that about the partition function.

In the post I was thinking about a situation where we know the microstate to some precision, so the simulation is accurate. I realize this isn't realistic.

Comment by nisan on Beliefs at different timescales · 2018-11-05T00:48:59.639Z · score: 2 (1 votes) · LW · GW

The sum isn't over states, though; it's over all possible tuples of length $n$. Any ideas for how to make that more clear?

Comment by nisan on EDT solves 5 and 10 with conditional oracles · 2018-10-01T17:07:49.116Z · score: 4 (2 votes) · LW · GW

I'm having trouble following this step of the proof of Theorem 4: "Obviously, the first conditional probability is 1". Since the COD isn't necessarily reflective, couldn't the conditional be anything?

Comment by nisan on History of the Development of Logical Induction · 2018-08-29T06:40:14.962Z · score: 6 (4 votes) · LW · GW

The linchpin discovery is probably February 2016.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-02T22:29:32.624Z · score: 4 (2 votes) · LW · GW

Ok. I think that's the way I should have written it, then.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T06:03:21.138Z · score: 5 (3 votes) · LW · GW

The definition involving the permutation is a generalization of the example earlier in the post: one permutation is the identity and the other swaps heads and tails. In general, if you observe one event together with another, then the counterfactual statement is that if you had observed the permuted image of the first event, then you would have also observed the permuted image of the second.

I just learned about probability kernels thanks to user Diffractor. I might be using them wrong.

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T05:46:46.037Z · score: 3 (2 votes) · LW · GW

Oh, interesting. Would your interpretation be different if the guess occurred well after the coinflip (but before we get to see the coinflip)?

Comment by nisan on Counterfactuals, thick and thin · 2018-08-01T01:57:40.074Z · score: 2 (1 votes) · LW · GW

That sounds about right to me. I think people have taken stabs at looking for causality-like structure in logic, but they haven't found anything useful.

Comment by nisan on On the Role of Counterfactuals in Learning · 2018-07-11T04:22:52.524Z · score: 4 (2 votes) · LW · GW

What predictions can we get out of this model? If humans use counterfactual reasoning to initialize MCMC, does that imply that humans' implicit world models don't match their explicit counterfactual reasoning?

Comment by nisan on An environment for studying counterfactuals · 2018-07-11T02:30:23.306Z · score: 2 (1 votes) · LW · GW

I agree exploration is a hack. I think exploration vs. other sources of non-dogmatism is orthogonal to the question of counterfactuals, so I'm happy to rely on exploration for now.

Comment by nisan on Mechanistic Transparency for Machine Learning · 2018-07-11T02:03:59.815Z · score: 7 (4 votes) · LW · GW

"Programmatically Interpretable Reinforcement Learning" (Verma et al.) seems related. It would be great to see modular, understandable glosses of neural networks.

Comment by nisan on Why it took so long to do the Fermi calculation right? · 2018-07-06T21:53:29.096Z · score: 16 (7 votes) · LW · GW

I'd like to rescue/clarify Mitchell's summary. The paper's resolution of the Fermi paradox boils down to "(1) Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so (2) one or more of those factors must be small after all".

(1) is enough to weaken the argument for aliens, to the point where there's no paradox anymore. (2) is basically Section 5 from the paper ("Updating the factors").

The point you raised (the distinction between "the expected number of aliens is high" and "there is a substantial probability of no aliens") is an explanation of why people were confused.

I'm making this comment because if I'm right it means that we only need to look for people (like me?) who were saying all along "there is no Fermi paradox because abiogenesis is cosmically rare", and figure out why no one listened to them.
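
A minimal Monte Carlo sketch of point (1), in Python, with made-up numbers (log-uniform uncertainty on a single abiogenesis factor, every other Drake factor folded into a star count; the paper's actual distributions differ): the mean number of civilizations comes out large even though the probability of zero civilizations is substantial.

```python
import math
import random

N_STARS = 1e11     # rough star count for the galaxy (illustrative)
SAMPLES = 100_000

total = 0.0
empty = 0
for _ in range(SAMPLES):
    # Abiogenesis rate uncertain over ~30 orders of magnitude.
    f_life = 10 ** random.uniform(-30, -1)
    expected_civs = N_STARS * f_life
    total += expected_civs
    # Probability that a Poisson(expected_civs) galaxy has no one in it.
    if random.random() < math.exp(-expected_civs):
        empty += 1

print("mean civilizations:", total / SAMPLES)   # large (~1e8)
print("P(empty galaxy):", empty / SAMPLES)      # substantial (~0.65)
```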

Comment by nisan on The Math Learning Experiment · 2018-03-24T22:48:29.109Z · score: 17 (4 votes) · LW · GW

I heard a similar story about when Paul Sally visited a grade school classroom. He asked the students what they were learning, and they said "Adding fractions. It's really hard, you have to find the greatest common denominator...." Sally said "Forget about that, just multiply the numerator of each fraction by the denominator of the other and add them, and that's your numerator." The students loved this, and called it the Sally method.
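
In symbols, the Sally method (the story gives the numerator; the denominator is just the product of the two denominators):

$$\frac{a}{b} + \frac{c}{d} = \frac{ad + cb}{bd}$$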

Comment by nisan on The Math Learning Experiment · 2018-03-24T17:25:34.492Z · score: 7 (2 votes) · LW · GW

Cool, do you remember what the 5-minute explanation was?

Comment by nisan on Hufflepuff Cynicism · 2018-02-13T16:33:39.343Z · score: 15 (4 votes) · LW · GW

I'd love to hear your thoughts on A Fable of Science and Politics. Would you say that Barron's attitude is better than Ferris's, at least sometimes?

Comment by nisan on Hero Licensing · 2018-02-02T02:40:15.454Z · score: 2 (1 votes) · LW · GW

I like the resemblance to this scene from The Fall Of Doc Future.

Comment by nisan on Logical counterfactuals and differential privacy · 2018-01-24T05:38:55.000Z · score: 2 (1 votes) · LW · GW

This doesn't quite work. The theorem and examples only work if you maximize the unconditional mutual information, not the conditional one. And the choice of the variable you condition on is doing a lot of work; it's not enough to make it "sufficiently rich".
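
For reference, the two quantities being contrasted, in standard information-theoretic notation:

$$I(X;Y) = H(X) - H(X \mid Y), \qquad I(X;Y \mid Z) = H(X \mid Z) - H(X \mid Y, Z)$$

Maximizing the first is not the same as maximizing the second, since conditioning on $Z$ can either create or destroy dependence between $X$ and $Y$.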

Comment by nisan on Robustness as a Path to AI Alignment · 2017-10-13T15:56:49.694Z · score: 5 (1 votes) · LW · GW

Why is the scenario you describe the "real" argument for transitivity, rather than the sequential scenario? Or are you pointing to a class of scenarios that includes the sequential one?

Comment by Nisan on [deleted post] 2017-09-24T01:27:59.915Z

This seems to belong to a family of personal narratives invented by people at CFAR (some combination of Andrew Critch and Valentine Smith?).

  • Here's a post about a narrative that says that you're immune to narratives.

  • There's also the "Fully General Meta-Narrative" that says that you can get the benefits of any particular narrative without actually employing the narrative. (For example, if you're tired, there's probably a narrative that will give you a second wind. The Fully General Meta-Narrative says you can just get a second wind when you want to. I hope I'm remembering this correctly.)

  • In this language, you're proposing a narrative that says that you can employ any object-level narrative you wish.

Comment by nisan on LW 2.0 Strategic Overview · 2017-09-19T04:28:16.597Z · score: 1 (1 votes) · LW · GW

Regarding a couple of your concrete suggestions: I like the idea of using existing academic jargon where it exists. That way, reading LW would teach me search terms I could use elsewhere or to communicate with non-LW users. (Sometimes, though, it's better to come up with a new term; I like "trigger-action plans" way better than "implementation intentions".)

It would be nice if users did literature reviews occasionally, but I don't think they'll have time to do that often at all.

Comment by Nisan on [deleted post] 2017-05-31T07:13:56.423Z

Sent.

Comment by Nisan on [deleted post] 2017-05-31T05:36:59.218Z

Me too!

Comment by Nisan on [deleted post] 2017-05-27T23:39:08.105Z

Sorry, I was imagining a scenario where a person has an unpleasant experience and then leaves the community because for the last several months all their close contacts in the community were in the context of an unpleasant living situation. That's bad for the person, and unfortunate for the community as well.

Comment by Nisan on [deleted post] 2017-05-27T20:58:36.580Z

Cool. Yes, a mailing list sounds even better than the low-tech solution I had in mind, which was "every buddy learns 80% of the names of the other buddies through the grapevine, and they happen to be one or two hops away on the social network".

Comment by Nisan on [deleted post] 2017-05-27T20:49:43.336Z

I'm not proposing a house policy here. I'm suggesting that a Dragon would do well to have regular followups with someone outside the house, and I'm proposing that some members of the wider community offer to be those someones.

In the past I've had regular video calls with a couple people who were doing long-term experiments with their lifestyle; I think it was helpful. I believe such an arrangement was part of the Leverage polyphasic sleep experiment.

Jacob is right: There's a difference between a friend one can reach out to if one needs to, and a friend one is scheduled to talk to once a week. Personally, I struggle to keep up with friends without scheduled meetings, and it sounds like the Dragon Army will be very busy.

Also, there is a difference between reaching out to a friend when things have gone very wrong and one needs to get out; and bringing up a less drastic problem during a weekly check-in. In the first case, you need a couch to crash on and maybe a lawyer. In the second case, you need someone who will listen to you and bring an outside perspective, and maybe refer you to other resources.

Partially, I'm afraid that if this doesn't go well, our community will lose a cohort of promising people. It would be a shame if that happened because we failed to pay attention to how they were doing.

But also, if the experiment goes very well, this arrangement would be a means by which the wider community can learn from what went right.

Comment by Nisan on [deleted post] 2017-05-27T04:53:44.634Z

Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.

Comment by nisan on Superintelligence: The Idea That Eats Smart People · 2016-12-25T04:48:11.995Z · score: 4 (4 votes) · LW · GW

It's a curious refutation. The author says that the people who are concerned about superintelligence are very smart, the top of the industry. They give many counterarguments, most of which can be easily refuted. It's as if they wanted to make people more concerned about superintelligence, while claiming to argue the opposite. And then they link directly to MIRI's donation page.

Comment by nisan on CFAR's new mission statement (on our website) · 2016-12-11T21:43:51.876Z · score: 2 (2 votes) · LW · GW

Maybe you'll cover this in a future post, but I'm curious about the outcomes of CFAR's past AI-specific workshops, especially "CFAR for ML Researchers" and the "Workshop on AI Safety Strategy".

Comment by nisan on The correct response to uncertainty is *not* half-speed · 2016-01-17T08:10:25.175Z · score: 4 (4 votes) · LW · GW

I can get O(k).

Comment by nisan on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-07T05:04:53.581Z · score: 2 (2 votes) · LW · GW

How much karma does one need to make a top-level post or meetup announcement? A new user wants to announce a new meetup in Kiev. If you want to help them out, you can upvote their comment.

Comment by nisan on Is Scott Alexander bad at math? · 2015-05-06T02:39:01.773Z · score: 0 (2 votes) · LW · GW

You simply measure out a length such that, if you drew a square that many meters on a side, and also drew a square 3.1415 meters on a side, they would enclose no area between the two of them. Then evenly divide this length into meters, and for each meter write down 2.7183. Now multiply those numbers together, and you'll find they make -1. Easy!
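
Decoded: the side length must be $i\pi$ so that the two squares' areas cancel ($x^2 + \pi^2 = 0$), and multiplying $i\pi$ copies of $e \approx 2.7183$ together is Euler's identity:

$$e^{i\pi} = -1$$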

Comment by nisan on Is Scott Alexander bad at math? · 2015-05-06T02:37:18.119Z · score: 22 (26 votes) · LW · GW

Scott: I am bad at math.

Jonah: You are good at math.

Scott: No, I really am bad at math.

Jonah: No, you really are good at math.

Nisan: Esteemed colleagues, it is no use! If you continue this exchange, Scott will continue to believe they are bad at math, and Jonah will continue to disagree — forever!

Scott: Thank you for the information, but I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: And I still believe I am bad at math.

Nisan: Esteemed colleagues, give it up! Even if you persist in this exchange, neither of you will change your stated beliefs. In fact, I could truthfully repeat my previous sentence a hundred times (including the first time), and Scott would still believe they are bad at math, and Jonah would still disagree.

Scott: That's good to know, but for better or for worse, I still believe I am bad at math.

Jonah: And I still believe Scott is good at math.

Scott: Ah, but now I realize I am good at math after all!

Jonah: I agree, and what's more, I now know exactly how good at math Scott is!

Scott: And now I know that as well.

Comment by nisan on Applause Lights · 2015-04-14T05:28:22.065Z · score: 2 (2 votes) · LW · GW

"I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence..."

Seven years later, this open letter was signed by leaders of the field. It's amusing how similar it is to the above speech, especially considering how it actually marked a major milestone in the advancement of the field of AI safety.

Comment by nisan on Open thread, Feb. 23 - Mar. 1, 2015 · 2015-02-24T05:22:11.051Z · score: 1 (1 votes) · LW · GW

Kaj Sotala wrote a pdf called "How to run a Less Wrong meetup" or something like that.

Comment by nisan on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-15T01:27:17.615Z · score: 3 (5 votes) · LW · GW

Ah. Well, then there doesn't seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.

Comment by nisan on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-15T01:22:14.022Z · score: 3 (3 votes) · LW · GW

I believe donating to the best charity is essentially correct, for the reason you state. You won't find much disagreement on Less Wrong or from GiveWell. Whether that's obvious or not is a matter of opinion, I suppose. Note that in GiveWell's latest top charity recommendations, they suggest splitting one's donation among the very best charities for contingent reasons not having to do with risk aversion.

If you had some kind of donor advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: Suppose you can say a number, and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime; and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky has some somewhat more practical considerations.

Comment by nisan on Is the potential astronomical waste in our universe too small to care about? · 2014-10-22T17:59:05.317Z · score: 0 (2 votes) · LW · GW

Claim: There are some deals you should make but can't.

Comment by nisan on Open thread, 14-20 July 2014 · 2014-07-17T01:14:36.726Z · score: 0 (0 votes) · LW · GW

It might be worth looking into which life insurance companies are friendly to cryonics.

Comment by nisan on [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality · 2014-06-20T17:22:26.422Z · score: 6 (6 votes) · LW · GW

There's an interesting parallel with Modal Combat. Both approaches want to express the idea that "moral agents are those that cooperate with moral agents". Modal Combat resolves the circularity with diagonalization, and Eigenmorality resolves it by finding a stable distribution.
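
A minimal sketch of the stable-distribution side of that parallel (toy cooperation matrix of my own invention, not Aaronson's code): power iteration finds the fixed point where each agent's morality score is the cooperation it extends, weighted by the scores of those it extends it to.

```python
# Toy eigenmorality: scores are the principal eigenvector of a
# cooperation matrix, found by power iteration.
# C[i][j] = how often agent i cooperates with agent j (made-up data).
C = [
    [0.9, 0.9, 0.1],  # agent 0 cooperates with cooperators, punishes agent 2
    [0.9, 0.9, 0.1],  # agent 1 has the same policy
    [0.1, 0.1, 0.1],  # agent 2 defects against everyone
]

scores = [1.0] * len(C)
for _ in range(100):
    # New score: cooperation extended, weighted by recipients' scores.
    new = [sum(C[i][j] * scores[j] for j in range(len(C)))
           for i in range(len(C))]
    total = sum(new)
    scores = [s / total for s in new]

print(scores)  # the cooperators score high, the defector scores low
```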

Comment by nisan on Against utility functions · 2014-06-20T16:57:26.468Z · score: 3 (5 votes) · LW · GW

That comment is about utilitarianism and doesn't mention "utility functions" at all.

Comment by nisan on Open Thread, May 19 - 25, 2014 · 2014-05-20T01:54:55.076Z · score: 4 (4 votes) · LW · GW

No, AIXI isn't computable and so does not include itself as a hypothesis.

Comment by nisan on Open Thread, May 19 - 25, 2014 · 2014-05-20T01:53:57.486Z · score: 3 (3 votes) · LW · GW

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.

Comment by nisan on Open Thread, May 19 - 25, 2014 · 2014-05-19T15:53:19.254Z · score: 2 (2 votes) · LW · GW

In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.

Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
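
A minimal Python sketch of the scoring-rule point (using the Brier score as the proper rule to keep the numbers finite; names are mine): against a fair coin, the expected score is maximized by reporting 50%, and restricting outputs to {0, 1, ?} amounts to allowing only the reports 0%, 100%, and 50%.

```python
def brier(report: float, outcome: int) -> float:
    """Negative Brier score (higher is better); a proper scoring rule."""
    return -(report - outcome) ** 2

def expected_score(report: float, p_true: float) -> float:
    """Expected score when the next bit is 1 with probability p_true."""
    return p_true * brier(report, 1) + (1 - p_true) * brier(report, 0)

p_true = 0.5  # a fair-coin, maximally unexpected sequence

# Unrestricted reports: the optimum is the true probability itself.
best = max((expected_score(r / 100, p_true), r / 100) for r in range(101))
print(best)  # (-0.25, 0.5)

# Reports restricted to {0%, 50%, 100%}, i.e. outputs {0, ?, 1}:
for r in (0.0, 0.5, 1.0):
    print(r, expected_score(r, p_true))  # 0.5 is best here too
```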

Comment by nisan on Group Rationality Diary, May 1-15 · 2014-05-13T14:07:00.851Z · score: 1 (1 votes) · LW · GW

Critch's Really Getting Bayes game.