Posts

Open & Welcome Thread—May 2020 2020-05-01T12:01:33.876Z · score: 17 (10 votes)
roland's Shortform 2020-04-19T10:52:26.252Z · score: 5 (1 votes)
Welcome and Open Thread June 2019 2019-06-01T13:44:38.655Z · score: 12 (7 votes)
The sad state of Rationality Zürich - Effective Altruism Zürich included 2018-02-27T14:51:05.881Z · score: -13 (25 votes)
Intuitive explanation of why entropy maximizes in a uniform distribution? 2017-09-23T09:43:44.265Z · score: 0 (0 votes)
Good forum for investing? 2015-03-19T17:16:00.579Z · score: -2 (8 votes)
Cryonics in Europe? 2014-10-10T14:58:20.761Z · score: 18 (20 votes)
Life insurance for Cryonics, how many years? 2014-05-23T17:15:00.242Z · score: 4 (9 votes)
Meetup Zürich last minute 2014-05-20T13:11:42.491Z · score: 1 (2 votes)
Meetup : Zurich/Zürich meetup (come out of the woodwork) 2014-05-20T13:08:17.842Z · score: 1 (2 votes)
I'm About as Good as Dead: the End of Xah Lee 2014-05-16T21:43:48.151Z · score: -11 (19 votes)
Good movies for rationalists? 2013-11-09T08:00:42.977Z · score: 2 (10 votes)
Gauging interest for a Zurich, Switzerland meetup group 2013-07-09T18:00:12.460Z · score: 2 (3 votes)
Meetup : Rio de Janeiro Meetup 2012-11-09T00:24:27.758Z · score: 0 (1 votes)
Gauging interest for a Rio de Janeiro meetup group. 2012-11-06T11:33:32.960Z · score: 1 (4 votes)
9/11 Survey 2011-11-02T12:49:26.074Z · score: -27 (37 votes)

Comments

Comment by roland on Open & Welcome Thread - June 2020 · 2020-06-18T11:28:51.065Z · score: 1 (3 votes) · LW · GW

If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky

The Waves Arisen https://wertifloke.wordpress.com/

Comment by roland on Open & Welcome Thread—May 2020 · 2020-06-18T09:35:30.931Z · score: -1 (2 votes) · LW · GW

If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky

The Waves Arisen https://wertifloke.wordpress.com/

Comment by roland on Open & Welcome Thread—May 2020 · 2020-06-01T19:17:09.411Z · score: 1 (1 votes) · LW · GW

Is it ok to omit facts from your lawyer? I mean, is the lawyer entitled to know everything about the client?

Comment by roland on Eliezer Yudkowsky Facts · 2020-05-12T10:57:38.830Z · score: 2 (2 votes) · LW · GW

Eliezer Yudkowsky painted "The Scream" with paperclips:

The Scream by Eliezer Yudkowsky

Comment by roland on CFAR Participant Handbook now available to all · 2020-05-01T12:22:29.018Z · score: 1 (1 votes) · LW · GW

Do I deserve some credit?

https://www.lesswrong.com/posts/trvFowBfiKiYi7spb/open-thread-july-2019?commentId=MjCcvKXpvuWK4zd9g

Comment by roland on Open & Welcome Thread—May 2020 · 2020-05-01T12:11:38.948Z · score: 3 (2 votes) · LW · GW

Does a predictable punchline have high or low entropy?

From False Laughter

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?

Comment by roland on roland's Shortform · 2020-04-19T10:52:26.563Z · score: 3 (2 votes) · LW · GW

Regarding laughter:

https://www.lesswrong.com/posts/NbbK6YKTpQR7u7D6u/false-laughter?commentId=PszRxYtanh5comMYS

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?

Comment by roland on False Laughter · 2020-04-18T08:41:50.983Z · score: 1 (1 votes) · LW · GW

You might say that a predictable punchline is too high-entropy

I'm confused. Entropy is the average level of surprise inherent in the possible outcomes; a predictable punchline is an event of low surprise. Where does the high entropy come from?
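As a quick numeric sanity check of this point (the distributions below are made up purely for illustration, and `shannon_entropy` is just a helper name), a near-certain punchline has entropy close to zero, while maximal uncertainty, a uniform distribution, gives maximal entropy:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A "predictable punchline": one outcome carries almost all the probability mass.
predictable = [0.97, 0.01, 0.01, 0.01]

# Maximum uncertainty: all four outcomes equally likely.
uniform = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(predictable))  # low, well under 1 bit
print(shannon_entropy(uniform))      # exactly 2.0 bits
```

So "predictable" does correspond to low entropy, which is what makes the quoted "too high-entropy" phrasing confusing.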

Comment by roland on Open & Welcome Thread - January 2020 · 2020-01-12T14:32:08.536Z · score: 2 (2 votes) · LW · GW

For the most part, admitting to having done Y is strong evidence that the person did do Y, so I’m not sure if it can generally be considered a bias.

Not generally but I notice that the argument I cited is usually invoked when there is a dispute, e.g.:

Alice: "I have strong doubts about whether X really did Y because of..."

Bob: "But X already admitted to Y, what more could you want?"

Comment by roland on Open & Welcome Thread - January 2020 · 2020-01-09T19:22:18.031Z · score: 2 (2 votes) · LW · GW

What is the name of the following bias:

X admits to having done Y, therefore it must have been him.

Comment by roland on A Critique of Functional Decision Theory · 2019-09-15T10:15:07.417Z · score: 1 (1 votes) · LW · GW

if I am seeing a bomb in Left it must mean I’m in the 1 in a trillion trillion situation where the predictor made a mistake, therefore I should (intuitively) take Right. UDT also says I should take Right so there’s no problem here.

It is more probable that you are misinformed about the predictor. But your conclusion is correct: take the Right box.

Comment by roland on Open Thread July 2019 · 2019-07-15T10:04:23.267Z · score: 6 (4 votes) · LW · GW

It’s pretty uncharitable of you to just accuse CfAR of lying like that!

I wasn't; rather, I suspect them of being biased.

Comment by roland on Open Thread July 2019 · 2019-07-14T10:29:01.839Z · score: 7 (3 votes) · LW · GW

At the same time I accept the idea of intellectual property being protected, even if that’s not the case they are claiming.

I suspect that this is the real reason. Although given that the much vaster Sequences by Yudkowsky are freely available, I don't see it as a good justification for not making the CFAR handbook available.

Comment by roland on Open Thread July 2019 · 2019-07-07T13:14:13.527Z · score: 12 (8 votes) · LW · GW

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can't attend the workshops.

Comment by roland on [deleted post] 2019-07-05T19:05:05.757Z

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can't attend the workshops.

Comment by roland on Welcome and Open Thread June 2019 · 2019-06-30T11:40:43.268Z · score: 3 (2 votes) · LW · GW

What is the conclusion of the polyphasic sleep study?

https://www.lesswrong.com/posts/QvZ6w64JugewNiccS/polyphasic-sleep-seed-study-reprise

Comment by roland on Arbital scrape · 2019-06-20T12:19:17.262Z · score: 3 (2 votes) · LW · GW

Just a reminder, the Solomonoff induction dialogue is still missing:

https://www.lesswrong.com/posts/muKEBrHhETwN6vp8J/arbital-scrape#tKgeneD2ZFZZxskEv

Comment by roland on Arbital scrape · 2019-06-07T19:25:08.013Z · score: 7 (4 votes) · LW · GW

Seconded, that part is missing. Thanks for pointing out that very interesting dialogue.

Comment by roland on Welcome and Open Thread June 2019 · 2019-06-01T13:47:57.663Z · score: 4 (3 votes) · LW · GW

Can asking for advice be bad? From Eliezer's post Final Words:

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing? For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, then ask someone for advice, and that someone will change your mind.
Is there more to it? Please reply at the original post: Final Words.

Comment by roland on Final Words · 2019-05-24T11:49:48.894Z · score: 1 (1 votes) · LW · GW

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing?

For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, then ask someone for advice, and that someone will change your mind.

Let's say you already have lots of evidence for one hypothesis, so asking someone is unlikely to change your mind. Yet if you are underconfident you might still be tempted to ask, and if someone gives you contradictory advice you will, as a human, still feel the uncertainty and doubt inside you. This will just be a wasted motion.

Is there more to it?

Comment by roland on Bayesians vs. Barbarians · 2019-04-25T08:22:57.437Z · score: 1 (1 votes) · LW · GW

There were Indians fighting alongside the Germans:

https://en.wikipedia.org/wiki/Indian_Legion

Comment by roland on Open Thread Feb 22 - Feb 28, 2016 · 2018-10-24T08:49:07.446Z · score: 1 (1 votes) · LW · GW

From: https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification

In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals.

Comment by roland on Let's Discuss Functional Decision Theory · 2018-10-12T14:27:56.082Z · score: 12 (4 votes) · LW · GW

Quoting from: https://intelligence.org/files/DeathInDamascus.pdf

Functional decision theory has been developed in many parts through (largely unpublished) dialogue between a number of collaborators. FDT is a generalization of Dai's (2009) "updateless decision theory" and a successor to the "timeless decision theory" of Yudkowsky (2010). Related ideas have also been proposed in the past by Spohn (2012), Meacham (2010), Gauthier (1994), and others.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-08T19:27:56.315Z · score: 1 (1 votes) · LW · GW

I've sent you a PM, please check your inbox. Thanks!

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-06T11:57:16.220Z · score: 1 (1 votes) · LW · GW

ChristianKL, please see my reply here.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-06T11:56:40.063Z · score: -13 (4 votes) · LW · GW

Marko,

first, I don't think it is fair for you to mention viewpoints that I voiced either to you privately or in the group. I was doing so under the expectation of privacy; I wouldn't want them made public. How much can people trust you while doing circling if they have to fear it appearing on the internet?

> Roland has some pet topics such as 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.

We touched on those topics several times, but mostly in private talks between the two of us, so claiming that they derail the discussion is going too far.

I reiterate: since last December I have tried to talk to you, asking what the problem was and wanting specific feedback. You finally agreed to a meeting in February, and even then you didn't bring up the points above. Again, it is very unfair of you not to try to address the issues in private before going public.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-03T13:27:58.480Z · score: 1 (3 votes) · LW · GW

There is a disagreement about who said what. But why do you automatically assume that I'm the one not being truthful?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-03T13:25:20.448Z · score: -1 (3 votes) · LW · GW

No. What I'm saying is that a pseudonymous poster without any history, who pops up out of nowhere, gets credibility. Specifically, do people take the following affirmation at face value?

As one of the multiple people creeped out by Roland in person

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T20:26:59.258Z · score: 5 (5 votes) · LW · GW

Giego, I agree with your post in general.

> IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.

This is just a strawman that has cropped up here. From the beginning I said I don't mind dropping any topic that is not wanted. This never was the issue.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T18:09:44.157Z · score: 1 (1 votes) · LW · GW

> Ultimately, the Zurich EA group is not an official organisation representing EA. They are just a bunch of people who decide to meet up once in a while. They can choose who they do and do not allow into their group, regardless of how good/bad their reasons, criteria or disciplinary procedures are.

Fair enough. I decided to post this just for the benefit of all. Lots of people in the group don't know what is going on.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T18:00:36.377Z · score: -18 (8 votes) · LW · GW

Marko,

finally you bring some concrete specific points. Why didn't you or the others talk to me about it when I requested it? It seems a bit unfair that you bring it up now in public when I asked you in private before.

> „Effective Altruists are the new hippies“

It reflects what I see in some people, though not all of them, and yes, I see it as a problem in parts of EA and Rationality. It is also mentioned in the third post I linked in the introduction (there the talk is about bohemians rather than hippies, but I think it goes in the same direction). Yet I still go to EA meetings and think that I can learn from them.

> „Christianity is a death cult“, etc.

As a former Christian I think that is actually true by definition, unless you believe that Jesus is alive; I got this from Hitchens, btw. Marko, you should be fair and mention that you go to a Christian church, so you are not unbiased in that respect :)

> 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.

I'm indeed a 9/11 skeptic, but I don't remember that topic ever taking over the discussion. Neither was I the one who started the discussion on LW (I think it was Eliezer).

As for Thai prostitutes: we once had a long discussion at one EA meeting about prostitution in general, and that did go overboard; for fairness' sake you should also mention that I was one of the people who suggested changing the topic.

Again, as I have said several times: if those topics are the problem, I have no problem not talking about them anymore. I told Daniel and Michal that several times.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T13:51:59.193Z · score: -16 (9 votes) · LW · GW

Another thing: a new account (with 3 comments) from a pseudonymous poster who doesn't identify himself posts a subjective claim, plus other claims that can't be verified, and gets 42 upvotes. Something is wrong here.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:31:30.062Z · score: -8 (4 votes) · LW · GW

Roland has given me new essential information about a conversation between him and another organiser mentioned in the post, I first wanted to check this with said organiser (I did now and it seems that not everything Roland told me is actually true).

I gave new information, but it is not essential. It was related to Rationality Zürich, not to EA Zürich.

About what I'm saying not being true: it seems that what Marko told you is not the same as what he told me. But again, this is only related to Rationality Zürich, not EA Zürich, so why would that make a difference to you at EA?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:23:18.858Z · score: -7 (5 votes) · LW · GW

If that’s what he means by having been “excluded” he is indeed right.

Read my post; I explicitly mentioned that I was still allowed at EA meetings, just not welcome.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:21:50.867Z · score: 2 (4 votes) · LW · GW

roughly, more social intelligence or empathy

Hello Michael, I'm taking your criticism at face value here, although it doesn't square with what Marko told me. He claimed he was the one who convinced you to ban me. Anyway, if social intelligence or empathy is something I lack, that might be hard to fix: first because, to a certain extent, those are innate, and second because no one in Rationality Zürich or EA provided any actionable advice or feedback.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:16:02.197Z · score: -12 (6 votes) · LW · GW

Dear J-

I'm responding for the benefit of the others. Your account has exactly 3 comments and I have no idea who you are. But I suspect from the initial that you might be one of Michal's dates?

I don't think it is fair to make some general accusations without providing any specific point of what exactly are the externalities being imposed onto others. You can contact me in private if you want, I'm more than willing to hear.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-02-28T18:16:05.275Z · score: -8 (10 votes) · LW · GW

DW

are you serious? I've been talking with you about this since early Dec 2017, and the reason I posted this was exactly the lack of clarification and clear stances.

How come you are still "in the process"? Also, if there is or was any serious process, I would expect you to go through it before excluding someone, no?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-02-28T12:45:06.772Z · score: 5 (4 votes) · LW · GW

Elo, I wouldn't mind. But how would you mediate? Over the internet?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-02-28T12:44:33.631Z · score: 6 (3 votes) · LW · GW

Dear Florian,

thanks for taking the time to meet me. :)

Comment by roland on How Much Thought · 2017-12-05T13:19:33.719Z · score: 0 (0 votes) · LW · GW

thinking has higher expected utility when you're likely to change your mind and thinking has higher expected utility when the subject is important.

Conditioning on you changing your mind from incorrect to correct.

Comment by roland on The "Outside the Box" Box · 2017-11-20T12:40:21.909Z · score: 0 (0 votes) · LW · GW

Lucifer's version

Comment by roland on Against lone wolf self-improvement · 2017-07-20T18:38:13.742Z · score: 0 (0 votes) · LW · GW

My experience attending classes in universities was extremely negative. They didn't work for me.

Comment by roland on On the importance of Less Wrong, or another single conversational locus · 2016-12-02T15:58:26.013Z · score: 7 (5 votes) · LW · GW

What explosions from EY are you referring to? Could you please clarify? Just curious.

Comment by roland on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T12:20:15.514Z · score: 3 (3 votes) · LW · GW

Is the following a rationality failure? When I make a stupid mistake that caused some harm I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment by roland on Absence of Evidence Is Evidence of Absence · 2016-09-20T14:29:38.388Z · score: 1 (1 votes) · LW · GW

Let E stand for the observation of sabotage

Didn't you mean "the observation of no sabotage"?
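To see why the wording matters, here is a minimal Bayes update sketch. The numbers are entirely made up for illustration (`posterior` is just a helper name): suppose a fifth column might deliberately hold off on sabotage, so P(no sabotage | fifth column) = 0.8, while P(no sabotage | no fifth column) = 1.0. Then observing *no* sabotage lowers the probability of a fifth column:

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) via Bayes' theorem; likelihoods are P(E | H) and P(E | not-H)."""
    joint_h = likelihood_h * prior
    joint_not_h = likelihood_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# E = "no sabotage observed", H = "a fifth column exists"; illustrative numbers.
p = posterior(prior=0.5, likelihood_h=0.8, likelihood_not_h=1.0)
print(p)  # below 0.5: absence of sabotage is (weak) evidence against a fifth column
```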

Comment by roland on Solve Psy-Kosh's non-anthropic problem · 2016-05-03T16:38:02.994Z · score: 0 (0 votes) · LW · GW

The error in the reasoning is that it is not you who makes the decision, but the COD (collective of the deciders), which might be composed of different individuals in each round and might be one or nine depending on the coin toss.

In every round the COD is told that they are the deciders, but they get no new information, because this was already known beforehand.

P(Tails| you are told that you are a decider) = 0.9

P(Tails| COD is told that COD is the decider) = P(Tails) = 0.5

To make it easier to understand why the "yes" strategy is wrong: if you say yes every time, you will only be wrong on average once every 9 turns, the one time where the coin comes up heads and you are the sole decider. This sounds like a good strategy until you realize that every time the coin comes up heads someone else (on average) will be the sole decider and make the wrong choice by saying yes. So the COD will end up with 0.5*1000 + 0.5*100 = 550 expected donation.
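A quick simulation illustrates the point, under the payoffs used in this thread (yea: $1000 on tails, $100 on heads; nay: $700 regardless; `simulate` is just a helper name). Note that who the deciders are never enters the computation: when everyone follows the same policy, the donation depends only on the coin, which is exactly the argument above.

```python
import random

def simulate(policy_yea, rounds=100_000, seed=0):
    """Average donation per round in Psy-Kosh's non-anthropic problem.

    Each round a fair coin is flipped (tails -> 9 of 10 people are deciders,
    heads -> 1 decider). If the deciders say yea: $1000 on tails, $100 on
    heads. If they say nay: $700 regardless. Since all deciders follow the
    same policy, the decider selection does not affect the donation.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        tails = rng.random() < 0.5
        if policy_yea:
            total += 1000 if tails else 100
        else:
            total += 700
    return total / rounds

print(simulate(policy_yea=True))   # close to 550, not 910
print(simulate(policy_yea=False))  # exactly 700.0
```

So the collective does better by always saying nay, matching the 550-vs-700 comparison above.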

Comment by roland on Solve Psy-Kosh's non-anthropic problem · 2016-04-20T14:32:40.050Z · score: 0 (0 votes) · LW · GW

I'm retracting this one in favor of my other answer:

http://lesswrong.com/lw/3dy/solve_psykoshs_nonanthropic_problem/d9r4

So saying "yea" gives 0.9*1000 + 0.1*100 = 910 expected donation.

This is simply wrong.

If you are a decider then the coin is 90% likely to have come up tails. Correct.

But it simply doesn't follow from this that the expected donation if you say yes is 0.9*1000 + 0.1*100 = 910.

To the contrary, the original formula is still true: 0.5*1000 + 0.5*100 = 550

So you should still say "nay" and of course hope that everyone else is as smart as you.

Comment by roland on Eliezer Yudkowsky Facts · 2016-03-15T21:52:54.303Z · score: 0 (0 votes) · LW · GW

Eliezer Yudkowsky is AlphaGo.

Comment by roland on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-25T10:42:35.679Z · score: 0 (0 votes) · LW · GW

From http://lesswrong.com/lw/js/the_bottom_line/

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.

I remember a similar quotation regarding actions as opposed to thoughts. Does anyone remember how it went?

Comment by roland on Rationality Quotes Thread January 2016 · 2016-01-10T14:56:24.516Z · score: 3 (5 votes) · LW · GW

From a mere act of the imagination we cannot learn anything about the real world. To suppose that the resulting probability assignments have any real physical meaning is just another form of the mind projection fallacy. In practice, this diverts our attention to irrelevancies and away from the things that really matter (such as information about the real world that is not expressible in terms of any sampling distribution, or does not fit into the urn picture, but which is nevertheless highly cogent for the inferences we want to make). Usually, the price paid for this folly is missed opportunities; had we recognized that information, more accurate and/or more reliable inferences could have been made.

-- E. T. Jaynes, Probability Theory: The Logic of Science