Comment by coscott on I Want To Live In A Baugruppe · 2017-03-17T04:01:14.778Z · score: 1 (1 votes) · LW · GW

I am a very interested party. I am also interested in all things related to a child-friendly group house that is close to MIRI/CFAR.

Meetup : West LA - Big Numbers

2015-06-07T03:39:11.612Z · score: 1 (2 votes)
Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 121 · 2015-03-13T23:39:47.709Z · score: 8 (8 votes) · LW · GW

Not in 2015 it isn't.

Comment by coscott on Rationality: From AI to Zombies · 2015-03-13T15:17:28.673Z · score: 22 (24 votes) · LW · GW

The cover is incorrect :(

EDIT: If you do not understand this post, read essay 268 from the book!

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T15:26:46.500Z · score: 1 (1 votes) · LW · GW

Yes, and my claim is that that is what you did too without knowing it. Think about what sidereal and solar day mean, and how you would calculate one from the other.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T04:21:21.706Z · score: 0 (0 votes) · LW · GW

If the sidereal day and the solar day mean what I am guessing they mean, your 3:55 and Lumifer's 3:55 come from the same place.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T01:49:11.004Z · score: 0 (0 votes) · LW · GW

I do not know what some of those terms mean, but I think that is not another close figure; it is the same figure.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T01:45:27.964Z · score: 1 (9 votes) · LW · GW

There is no way Emma Watson can get behind a story that angered feminists so much.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-10T21:12:58.987Z · score: 20 (20 votes) · LW · GW

Two quotes that are scary together:

"There can only be one king upon the chessboard. There can only be one piece whose value is beyond price. That piece is not the world, it is the world's peoples, wizard and Muggle alike, goblins and house-elves and all." - Albus Dumbledore

"I shall not... by any act of mine... destroy the world... I shall take no chances... in not destroying the world..." - Harry Potter

Harry is unfriendly. When it comes time for Harry to choose between saving all the people and a small chance at saving the world, you will all learn to regret helping him get out of the box.

Comment by coscott on Announcement: The Sequences eBook will be released in mid-March · 2015-03-04T03:28:04.761Z · score: 12 (12 votes) · LW · GW

I want a leatherbound hardcopy. This is fanboyism.

Comment by coscott on Announcement: The Sequences eBook will be released in mid-March · 2015-03-03T05:26:54.079Z · score: 3 (3 votes) · LW · GW

Castify: https://www.kickstarter.com/projects/1267969302/lesswrong-the-sequences-audiobook

Comment by coscott on Announcement: The Sequences eBook will be released in mid-March · 2015-03-03T02:54:13.175Z · score: 0 (0 votes) · LW · GW

Could you provide any kind of estimate for the time/cost of the physical/audio books?

I ask because I am deciding if I should read the ebook or wait for the audio.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T20:42:54.254Z · score: 21 (21 votes) · LW · GW

Here is my tentative submission to FF.net. Please comment.

I decline to help Harry out of the box.

Harry no longer has Harry-values; he has unbreakable-vow-values. He is smart, and he will do whatever he can to "not destroy the world." In the process of maximizing the probability of "not destroying the world," he will likely destroy the world.

If you would allow me, I would like to appeal to Voldemort's rationality and cast Avada Kedavra on Harry before he says or does anything.

I do not think I will be able to stop other people from getting Harry out of the box. I expected people to believe me when I tried to explain why we should not let Harry out of the box. They did not. It was frustrating. You have taught me a valuable lesson about what it is like to be an FAI researcher. Thank you.

EDIT: I have posted it.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109 · 2015-02-24T19:24:55.241Z · score: 4 (4 votes) · LW · GW

It is literary evidence, because EY is talking about the glasses.

Comment by coscott on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-19T07:55:18.597Z · score: 0 (0 votes) · LW · GW

Quirrell marked Harry as his equal. I cannot imagine anything marking someone as your equal more than replacing their mind with your own.

Comment by coscott on Is there a rationalist skill tree yet? · 2015-01-31T17:36:02.472Z · score: -2 (4 votes) · LW · GW

Rationality skills are not something you can complete and move on to the next level. If rationality moves into your system 1, then you are doing it wrong (or maybe doing it REALLY REALLY well).

Comment by coscott on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-19T01:53:38.328Z · score: 1 (1 votes) · LW · GW

What app does Less Wrong recommend for to-do lists? I just started using Workflowy (recommended by a LW friend), but I was wondering if anyone had strong opinions in favor of something else.

P.S. If you sign up for Workflowy here, you get double the space.

EDIT: The above link is my personal invite link; I get told when someone signs up using it, and I get to see their email address. I am not going to do anything with those addresses, but I feel obligated to give this disclaimer anyway.

Comment by coscott on Je suis Charlie · 2015-01-16T18:20:37.352Z · score: 0 (0 votes) · LW · GW

I agree with your criticism. Thank you.

Comment by coscott on Je suis Charlie · 2015-01-16T17:16:28.491Z · score: 0 (0 votes) · LW · GW

I do not know the answer to your question. Here is my best guess after a couple of minutes of trying to answer it.

Short answer: Bayesianism is not about priors, it is about how evidence should change priors.

The Bayesian approach is all about evidence. Bayesian probability theory is the math of evidence. It needs a prior to work, because evidence is all about how much beliefs should change, so you need a prior to change. You could also do a lot of the Bayesian analysis without choosing a prior, and just write down "how much your beliefs would change" (but then the answers are not single numbers).

Seriously, if you define evidence as "something that sways your beliefs because it is more likely to happen under one hypothesis than under the alternative hypothesis," then Bayesianism is the math of evidence, and frequentism (which is used in "real science") is not (and does not even really try to be).

Also, most of the people here would agree that even without sufficient evidence, they should still assign a probability and then be very quick to change it as they get evidence. This last claim might be controversial here, because people might have alternate hacks where they avoid doing this in order to avoid bias, but they will agree that if they could trust themselves, they would want to do this.
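To make the "math of evidence" point concrete, here is a toy sketch (the numbers and the helper name are made up for illustration): the same likelihood ratio moves every prior by the same Bayes factor, but you still need a prior to get a posterior out.

```python
# Toy illustration (all numbers made up): evidence moves a prior through the
# likelihood ratio P(E|H) / P(E|not H), which is why a prior is needed at all.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) given a prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# A 4:1 likelihood ratio shifts every prior by the same factor in odds form,
# but the resulting posteriors differ -- the evidence alone is not an answer.
for prior in (0.01, 0.5, 0.9):
    print(prior, "->", round(bayes_update(prior, 0.8, 0.2), 3))
# 0.01 -> 0.039, 0.5 -> 0.8, 0.9 -> 0.973
```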

Comment by coscott on Je suis Charlie · 2015-01-15T15:41:37.414Z · score: 5 (5 votes) · LW · GW

My guess is that it is because

I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.

and

It sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge, we can't rely on prior reasonings (even though these might involve Bayesian reasoning). Real science works by investigating evidence.

look like a significant misunderstanding of what the Bayesian approach is.

Comment by coscott on What does being x% on board with the program of a movement mean? · 2015-01-06T04:56:37.361Z · score: 2 (4 votes) · LW · GW

I find myself very put off by this comment, and I am not sure I fully understand why it bothers me (or whether it is good that it bothers me). My immediate reaction is that it is rude to accuse someone of dishonesty about his own preferences. Instead, I feel you should assume honesty (about statements of personal preference) and try to cultivate a society where honesty is the optimal strategy.

I am not sure I am willing to take on all of the consequences of adopting this strategy, and I am not sure it is really well defined, as there is a grey area between "preferences" and "beliefs." (Here I mean beliefs as falsifiable claims/probabilities.)

Comment by coscott on What does being x% on board with the program of a movement mean? · 2015-01-06T04:32:10.579Z · score: 2 (2 votes) · LW · GW

I think this issue is all about identifying clusters in a list of points in a large vector space. In particular, you want a method of identifying these clusters which is invariant under linear transformations of the space. (Replacing one question with n^2 copies corresponds to multiplying the weight of that question by n; see the sketch below.) I do not know much about this, but it seems doomed to fail. In particular, if the points are in any kind of general position, then the whole thing looks like a large simplex, and there is no way to tell the points apart. You will probably always be able to change the "clusters" by adding whatever questions you want.
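A quick sanity check of the duplication claim, as a made-up example, assuming "weight" means a question's contribution to the Euclidean distance between two respondents:

```python
# Duplicating one question n^2 times multiplies its contribution to the
# Euclidean distance by n (made-up data; "weight" read as distance contribution).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

n = 3
a, b = [1.0, 0.0], [0.0, 0.0]   # two respondents differing only on question 1

# Replace question 1 with n^2 identical copies.
a_dup = [a[0]] * n**2 + a[1:]
b_dup = [b[0]] * n**2 + b[1:]

print(euclidean(a, b))          # 1.0
print(euclidean(a_dup, b_dup))  # 3.0 -- question 1 now counts n times as much
```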

I think the way past this is to allow each individual to choose their own weighting on the questions signifying how "important" that issue is to them. I think there is an important difference between two people who agree on all issues but prioritize them differently, and it is not a problem that they can agree with a movement to different degrees.

Comment by coscott on 2014 Survey Results · 2015-01-04T19:49:22.868Z · score: 1 (1 votes) · LW · GW

No, I think that a god that does not interfere with the physical universe at all counts as not supernatural by the wording of the question.

My point was that the median of the difference of two data sets is not the difference of the medians. (Although it is still evidence of a problem.)
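A small made-up example of that point about medians:

```python
# The median of pairwise differences need not equal the difference of medians.
import statistics

a = [0, 1, 10]
b = [0, 9, 10]
diffs = [x - y for x, y in zip(a, b)]               # [0, -8, 0]

print(statistics.median(a) - statistics.median(b))  # 1 - 9 = -8
print(statistics.median(diffs))                     # 0
```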

Comment by coscott on 2014 Survey Results · 2015-01-04T07:40:47.543Z · score: 2 (4 votes) · LW · GW

Conjunctions do not work with medians that way. From what you quoted, it is entirely possible that the median probability for that claim is 0. You can figure it out from the raw data.

Comment by coscott on Open thread, Dec. 22 - Dec. 28, 2014 · 2014-12-26T04:59:50.487Z · score: 1 (1 votes) · LW · GW

Thanks!

Comment by coscott on Open thread, Dec. 22 - Dec. 28, 2014 · 2014-12-24T04:02:10.967Z · score: 1 (1 votes) · LW · GW

If this is possible, I think it would also be nice to have a link I can bookmark which takes me straight to the most recent open thread's comments.

Comment by coscott on Rationality Jokes Thread · 2014-12-23T21:50:51.061Z · score: 4 (4 votes) · LW · GW

An uncountably infinite number of mathematicians walk into a bar. The first mathematician orders no beer. The second orders no beer. The third orders no beer. The bartender says "whoa, I'm going to run out of beer!".

Comment by coscott on How to deal with Santa Claus? · 2014-12-22T19:01:51.965Z · score: 5 (5 votes) · LW · GW

Another important question, if you choose to tell the truth, is what you do about other adults who lie to your own kids.

Would it be reasonable to request other adults not to lie to your children?

Would it be reasonable to ask lying adults to correct themselves or even apologize to your children?

Comment by coscott on How to deal with Santa Claus? · 2014-12-22T18:52:49.456Z · score: 8 (8 votes) · LW · GW

The only real question should be then whether you ask her not to tell other kids.

I really do not like the idea of encouraging children to lie to their peers. If forced to choose between lying to children about Santa, and encouraging them to lie, I think I would choose to lie to them.

Comment by coscott on How to deal with Santa Claus? · 2014-12-22T18:29:27.332Z · score: 0 (0 votes) · LW · GW

Relevant article by EY, and discussion in comments

Comment by coscott on Open thread, Dec. 22 - Dec. 28, 2014 · 2014-12-22T17:49:18.181Z · score: 5 (5 votes) · LW · GW

I suggest you make it happen. Start with a discussion level post suggesting habits, then a week later, make a discussion level post asking everyone to rank them.

Comment by coscott on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-20T04:32:31.449Z · score: 5 (5 votes) · LW · GW

Backed! Also, the total has reached 2,505 out of the 2,500 goal!

Comment by coscott on Kickstarting the audio version of the upcoming book "The Sequences" · 2014-12-20T03:57:31.589Z · score: 0 (0 votes) · LW · GW

What about the other direction? Will all (or almost all) of the posts that have been previously recorded be in the Audio Book?

Comment by coscott on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-19T22:22:43.424Z · score: 6 (6 votes) · LW · GW

For a family gift exchange game, I am giving a gift of the form: "25 dollars will be donated to a charity of your choice from this list."

Please help me form that list. My goal is to make the list feel as diverse as possible, so it feels like there is a meaningful decision, without sacrificing very much effectiveness.

My current plan is to take the 8 charities on GiveWell's top charities list and remove a couple that have the same missions as other charities on the list, maybe adding MIRI or other x-risk charities (give me ideas) whose effectiveness is very difficult to compare with the GiveWell charities.

What would you put on the list?

Comment by coscott on xkcd on the AI box experiment · 2014-11-22T17:40:22.094Z · score: 1 (1 votes) · LW · GW

Please elaborate. (unless it is an infohazard to do so)

Comment by coscott on xkcd on the AI box experiment · 2014-11-21T16:24:31.917Z · score: 9 (9 votes) · LW · GW

It definitely wouldn't hurt to emphasize our connection to MIRI.

Are we optimizing for Less Wrong reputation or MIRI reputation?

Meetup : West LA Meetup: Lightning Talks

2014-11-01T06:53:13.649Z · score: 1 (2 votes)
Comment by coscott on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-30T01:17:08.756Z · score: 3 (3 votes) · LW · GW

If you memorize logs, I recommend memorizing the natural logs of primes. This is all you need to quickly calculate the natural log, log_2, or log_10 of any integer.

You get the natural log of any number by adding together the natural logs of its prime factors, and you get log_m of n by the formula

log_m(n)=ln(n)/ln(m)

(maybe memorize ln(10) too to make the calculation a little easier)
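Here is a minimal sketch of the arithmetic, using approximate memorized values (72 = 2^3 * 3^2 is just an example I picked):

```python
# Approximate natural logs of the first few primes, plus ln(10) for convenience.
memorized_ln = {2: 0.693, 3: 1.099, 5: 1.609, 7: 1.946, 10: 2.303}

# Example: 72 = 2^3 * 3^2, so ln(72) ~= 3*ln(2) + 2*ln(3).
ln_72 = 3 * memorized_ln[2] + 2 * memorized_ln[3]   # ~4.277  (true: 4.2767)

# Change of base: log_m(n) = ln(n) / ln(m).
log2_72 = ln_72 / memorized_ln[2]                   # ~6.17   (true: 6.1699)
log10_72 = ln_72 / memorized_ln[10]                 # ~1.857  (true: 1.8573)

print(ln_72, log2_72, log10_72)
```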

Comment by coscott on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-30T01:11:12.378Z · score: 0 (0 votes) · LW · GW

I doubt this. I conjecture that more people lie and say they would be utilitarian than lie and say they would not be utilitarian. I hope that I would do the utilitarian thing, but I am not sure that I actually would be able to get myself to do it. (Maybe I would be more likely to actually do it if I were drunk)

Comment by coscott on 2014 Less Wrong Census/Survey · 2014-10-27T04:45:51.408Z · score: 4 (4 votes) · LW · GW

I actually do not think it is very close to left-libertarian at all. I am very curious what Yvain answers for this question.

Comment by coscott on 2014 Less Wrong Census/Survey · 2014-10-26T01:32:13.287Z · score: 31 (31 votes) · LW · GW

Next year, can we have "something sort of like left-libertarianism-ist" on the big politics question? I think that there are many people here (myself included) who do not know how to categorize ourselves politically, but know that we have a lot in common with Yvain.

Comment by coscott on 2014 Less Wrong Census/Survey · 2014-10-26T01:24:06.117Z · score: 38 (38 votes) · LW · GW

Done. I accidentally hit enter when I had everything done except for the digit question, so it submitted my entry and I was not able to answer that question. :(

Comment by coscott on Non-standard politics · 2014-10-25T16:21:03.750Z · score: 1 (1 votes) · LW · GW

Correct me if I am wrong:

Ah, so you and DanielLC define "paying people to be poor" as when government incentives make people with less normal income better off than people with more normal income.

I was trying to say that we would still be paying people to be poor, just not enough to cancel out 100% of the downside of being poor, so that making more money still monotonically increases happiness.

I think my definition is more reasonable, but yours is also reasonable, as it seems to capture some extra connotations. I retract my complaint under your definition.

Comment by coscott on Non-standard politics · 2014-10-25T15:43:10.071Z · score: 3 (3 votes) · LW · GW

I do not understand your argument. If people know that taxes/basic income are coming in the future, that is an incentive for them to become poor, relative to if taxes/basic income were not coming. They may not say "Oh, that is a good deal, I want to be poor," but they may work less or take bigger financial risks because of it, because being poor is relatively less bad than it would be otherwise.

Comment by coscott on Non-standard politics · 2014-10-25T03:37:38.918Z · score: 3 (5 votes) · LW · GW

I also support basic income, but I think you are wrong when you say it is not "paying people to be poor." If you give everyone the same amount, but then just take it right back from the rich in taxes, this is basically the same as just paying the poor for being poor.

Comment by coscott on Open thread, Oct. 20 - Oct. 26, 2014 · 2014-10-23T17:54:28.930Z · score: 2 (2 votes) · LW · GW

I posted a new math puzzle. I really like this one.

Comment by coscott on 2014 Less Wrong Census/Survey - Call For Critiques/Questions · 2014-10-17T16:38:58.698Z · score: 0 (0 votes) · LW · GW

In the Religious Denomination question, "If atheist, please skip this question." should be replaced with "If non-religious, please skip this question."

Maybe even remove this sentence and add "non-religious" or "not applicable" as an option.

I think this is a very important change. I think there are many people who identify as Jewish Atheist, Buddhist Atheist, or Unitarian Universalist Atheist, (and maybe some others) and right now you are leaving it up to them to choose how to interpret the question. No information is lost by implementing this change, as there was already a question about theism.

Comment by coscott on Open thread, September 22-28, 2014 · 2014-09-25T19:30:52.238Z · score: 0 (0 votes) · LW · GW

I do not know how I learned how to argue, but I do not think it has anything to do with negative examples.

For me, it seems similar to understanding what a valid mathematical proof is (one which in theory could be expanded to follow the logical rules at each step), except that you are also allowed to make observations and use probabilistic reasoning, all of which came naturally to me. I do not feel like I ever had any inclination to use logical fallacies, and I feel like I am quick to recognize when arguments do not make sense.

This is in contrast with cognitive biases. I feel like I am very dependent on parts of the brain that have biases; I will not be able to get past them easily, but I can learn to mitigate them by being aware of them.

Comment by coscott on Open thread, September 22-28, 2014 · 2014-09-25T18:27:04.452Z · score: 2 (2 votes) · LW · GW

I do not know. To be honest, my high school self had a strong tendency to overestimate the rationality and learning potential of the general population.

Comment by coscott on Open thread, September 22-28, 2014 · 2014-09-25T14:43:01.552Z · score: 1 (1 votes) · LW · GW

Biased Pandemic is about learning about cognitive biases. Cognitive biases are different from logical fallacies.

Comment by coscott on Open thread, September 22-28, 2014 · 2014-09-25T01:47:37.722Z · score: 4 (4 votes) · LW · GW

Is it worthwhile to teach about "Logical Fallacies?"

In high school, one of my English classes had a unit on logical fallacies. Everyone was given a list of "logical fallacies" like "appeal to authority" and "slippery slope." We had to do things like match examples with the names of the fallacies (which would almost always have multiple reasonable answers) and come up with examples of various fallacies.

At the time, I thought that this was a huge waste of time. My reasoning was that there are many more ways to argue incorrectly than to argue correctly, and that we should instead teach people what valid arguments are and not to trust anything else. I have not really questioned this initial judgement until just now.

Now I am forming a new opinion on this question, and would like to collect some opinions from Less Wrong.

Comment by coscott on Open thread, September 22-28, 2014 · 2014-09-24T15:36:46.424Z · score: 6 (6 votes) · LW · GW

I think the easiest way to implement this is to replace Open Thread with a third subreddit.

Meetup : West LA Meetup: Lightning Talks

2014-08-29T02:11:14.866Z · score: 1 (2 votes)

Meetup : The Prisoner's Dilemma

2014-08-12T04:58:30.768Z · score: 1 (2 votes)

Maximize Worst Case Bayes Score

2014-06-17T09:12:14.367Z · score: 7 (8 votes)

Second MIRIxLosAngeles Meeting

2014-06-07T05:55:42.257Z · score: 7 (8 votes)

Meetup : Second MIRIxLosAngeles Meeting

2014-06-07T05:47:25.375Z · score: 1 (2 votes)

Summary of the first SoCal FAI Workshop

2014-04-30T07:35:52.711Z · score: 7 (8 votes)

Southern California FAI Workshop

2014-04-20T08:55:19.467Z · score: 23 (18 votes)

Open Thread: March 4 - 10

2014-03-04T03:55:33.045Z · score: 3 (4 votes)

Open Thread February 25 - March 3

2014-02-25T04:57:43.117Z · score: 8 (9 votes)

Terminal and Instrumental Beliefs

2014-02-12T22:45:04.853Z · score: 4 (9 votes)

Open Thread for February 11 - 17

2014-02-11T18:08:23.934Z · score: 3 (4 votes)

Preferences without Existence

2014-02-08T01:34:03.772Z · score: 16 (24 votes)

Meetup : West LA Meetup-indexical and logical uncertainty

2014-01-29T22:26:20.792Z · score: 0 (1 votes)

Logical and Indexical Uncertainty

2014-01-29T21:49:53.387Z · score: 19 (20 votes)

Meetup : West LA: Surreal Numbers

2014-01-18T20:56:50.724Z · score: 2 (3 votes)

Thought Crimes

2014-01-15T05:23:32.825Z · score: 5 (20 votes)

Functional Side Effects

2014-01-14T20:22:04.704Z · score: 0 (11 votes)

On Voting for Third Parties

2014-01-13T03:16:28.394Z · score: 6 (11 votes)

Meetup : West LA Meetup: Democracy

2014-01-13T01:07:04.880Z · score: 0 (1 votes)

Even Odds

2014-01-12T07:24:25.537Z · score: 42 (40 votes)

Meetup : West Los Angeles - Resolutions

2013-12-16T20:34:00.260Z · score: 0 (1 votes)

Subjective Altruism

2013-10-18T04:06:54.419Z · score: 7 (12 votes)

Open Thread, October 13 - 19, 2013

2013-10-14T01:57:13.526Z · score: 4 (5 votes)

PSA: Very important policy change at Cryonics Institute

2013-10-03T05:47:47.004Z · score: 20 (32 votes)

Meetup : West LA Meetup: What are the odds?

2013-09-30T19:10:22.449Z · score: 0 (1 votes)

Open Thread, September 30 - October 6, 2013

2013-09-30T05:18:36.502Z · score: 4 (5 votes)

The Ultimate Sleeping Beauty Problem

2013-09-30T00:48:08.395Z · score: 5 (8 votes)

The Belief Signaling Trilemma

2013-09-20T00:50:32.810Z · score: 8 (10 votes)

Alternative to Bayesian Score

2013-07-27T19:26:59.792Z · score: 6 (7 votes)

Meetup : West LA Meetup - The Greatest Good for the Greatest Number

2013-07-17T01:06:09.154Z · score: 0 (1 votes)

Meetup : West LA - Randomness: Why We Want It, How We Get It

2013-07-01T20:08:40.764Z · score: 1 (2 votes)

Meetup : West LA Meetup - Eliezer and Nick Share a Cab...

2013-06-14T18:59:46.835Z · score: 2 (3 votes)

How should Eliezer and Nick's extra $20 be split

2013-06-14T18:14:48.140Z · score: 16 (15 votes)