Posts

Open & Welcome Thread—May 2020 2020-05-01T12:01:33.876Z
roland's Shortform 2020-04-19T10:52:26.252Z
Welcome and Open Thread June 2019 2019-06-01T13:44:38.655Z
The sad state of Rationality Zürich - Effective Altruism Zürich included 2018-02-27T14:51:05.881Z
Intuitive explanation of why entropy maximizes in a uniform distribution? 2017-09-23T09:43:44.265Z
Good forum for investing? 2015-03-19T17:16:00.579Z
Cryonics in Europe? 2014-10-10T14:58:20.761Z
Life insurance for Cryonics, how many years? 2014-05-23T17:15:00.242Z
Meetup Zürich last minute 2014-05-20T13:11:42.491Z
Meetup : Zurich/Zürich meetup (come out of the woodwork) 2014-05-20T13:08:17.842Z
I'm About as Good as Dead: the End of Xah Lee 2014-05-16T21:43:48.151Z
Good movies for rationalists? 2013-11-09T08:00:42.977Z
Gauging interest for a Zurich, Switzerland meetup group 2013-07-09T18:00:12.460Z
Meetup : Rio de Janeiro Meetup 2012-11-09T00:24:27.758Z
Gauging interest for a Rio de Janeiro meetup group. 2012-11-06T11:33:32.960Z
9/11 Survey 2011-11-02T12:49:26.074Z

Comments

Comment by roland on Open Thread Fall 2024 · 2024-11-03T12:24:38.486Z · LW · GW

Bayes for arguments: how do you quantify P(E|H) when E is an argument? E.g., I present you with a strong argument supporting hypothesis H; how can you put a number on that?
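One way to make the question concrete (a toy sketch with made-up likelihoods; the hard part the comment asks about is precisely how to justify those numbers):

```python
# Toy Bayes update for an argument E in favor of hypothesis H.
# The likelihood values below are illustrative assumptions only.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Suppose a "strong" argument is one you'd expect to encounter
# 4x as often in worlds where H is true as in worlds where it is false.
print(posterior(0.5, 0.8, 0.2))  # → 0.8
```

The machinery is trivial; the open question in the comment is how to ground `p_e_given_h` and `p_e_given_not_h` when E is an argument rather than an observation.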

Comment by roland on My Number 1 Epistemology Book Recommendation: Inventing Temperature · 2024-10-12T09:32:45.124Z · LW · GW

His other books are also great.

Comment by roland on Critical review of Christiano's disagreements with Yudkowsky · 2024-01-06T22:38:32.184Z · LW · GW

that it’s reasonably for Eliezer to not think that marginally writing more will drastically change things from his perspective.

Scientific breakthroughs live on the margins, so if he has guesses on how to achieve alignment, sharing them could make a huge difference.

Comment by roland on Critical review of Christiano's disagreements with Yudkowsky · 2024-01-02T16:12:09.859Z · LW · GW

I have guesses

Even a small probability of solving alignment should have big expected utility, modulo exfohazards. So why not share your guesses?

Comment by roland on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-08-06T15:08:27.203Z · LW · GW

Weighted step ups instead of squats

Lunges vs weighted step ups?

Comment by roland on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-29T02:58:37.855Z · LW · GW

Source, please.

Comment by roland on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-29T00:55:33.501Z · LW · GW

Why would a weighted step-up be better and safer than a squat?

Comment by roland on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-28T16:59:18.433Z · LW · GW

Weighted step ups instead of squats can be loaded quite heavy.

What are the advantages of weighted step-ups vs. squats without bending your knees too much? Squats would have the advantage of greater stability and of only having to do half the reps.

Comment by roland on Elements of Rationalist Discourse · 2023-02-13T11:50:23.029Z · LW · GW

Valence-Owning

Could you please give a definition of the word valence? The definition I found doesn't make sense to me: https://en.wiktionary.org/wiki/valence

Comment by roland on Distinguishing test from training · 2022-11-30T12:53:01.194Z · LW · GW

1.1. It’s the first place large enough to contain a plausible explanation for how the AGI itself actually came to be.

According to this criterion, we would be in a simulation, because there is no plausible explanation of how the Universe was created.

Comment by roland on Don't use 'infohazard' for collectively destructive info · 2022-07-24T18:55:44.687Z · LW · GW
  1. exfohazard
  2. expohazard (based on "exposition")

Based on the Latin prefix ex-.

IMHO better than outfohazard.

Comment by roland on It’s Probably Not Lithium · 2022-07-08T15:25:38.310Z · LW · GW

The key here would be an exact quantification: how many carbs do these cultures consume relative to their amount of physical activity?

Comment by roland on It’s Probably Not Lithium · 2022-07-08T06:56:37.795Z · LW · GW

Has the hypothesis

excess sugar/carbs -> metabolic syndrome -> constant hunger and overeating -> weight gain

been disproved?

Comment by roland on Why all the fuss about recursive self-improvement? · 2022-06-15T11:44:54.028Z · LW · GW

Rather, my read of the history is that MIRI was operating in an argumentative argument where:

argumentative environment?

Comment by roland on Philosophy of Therapy · 2020-10-30T14:10:20.782Z · LW · GW

A good critical book on this topic is House of Cards by Robyn Dawes.

Comment by roland on How can an AI demonstrate purely through chat that it is an AI, and not a human? · 2020-10-09T12:50:43.143Z · LW · GW

If we have to use voice, we can still try to ask hard questions and get fast answers, but because of the lower rate it's hard to push far past human limits.

You could go with IQ-test-style, progressively harder number sequences. Use big numbers that are hard to calculate in your head.

E.g., start with a random 3-digit number; each following number is the previous one squared, minus 17. If they figure it out in one second, they must be an AI.
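The proposed challenge can be sketched as follows (the "squared minus 17" rule is the comment's own example, not a standard test):

```python
import random

def make_sequence(seed, length=5):
    """Each term is the previous term squared, minus 17."""
    terms = [seed]
    for _ in range(length - 1):
        terms.append(terms[-1] ** 2 - 17)
    return terms

start = random.randint(100, 999)  # random 3-digit seed
print(make_sequence(start))
# e.g. seed 100 gives [100, 9983, 99660272, ...]
```

Note the terms grow roughly as a double exponential, so even the third term is already far beyond mental arithmetic for most humans.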

Comment by roland on Open & Welcome Thread - June 2020 · 2020-06-18T11:28:51.065Z · LW · GW
Comment by roland on Open & Welcome Thread—May 2020 · 2020-06-18T09:35:30.931Z · LW · GW

If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky

The Waves Arisen https://wertifloke.wordpress.com/

Comment by roland on Open & Welcome Thread—May 2020 · 2020-06-01T19:17:09.411Z · LW · GW

Is it OK to omit facts to your lawyer? I mean, is the lawyer entitled to know everything about the client?

Comment by roland on Eliezer Yudkowsky Facts · 2020-05-12T10:57:38.830Z · LW · GW

Eliezer Yudkowsky painted "The Scream" with paperclips:

The Scream by Eliezer Yudkowsky

Comment by roland on CFAR Participant Handbook now available to all · 2020-05-01T12:22:29.018Z · LW · GW

Do I deserve some credit?

https://www.lesswrong.com/posts/trvFowBfiKiYi7spb/open-thread-july-2019?commentId=MjCcvKXpvuWK4zd9g

Comment by roland on Open & Welcome Thread—May 2020 · 2020-05-01T12:11:38.948Z · LW · GW

Does a predictable punchline have high or low entropy?

From False Laughter

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low-entropy, no?

Comment by roland on roland's Shortform · 2020-04-19T10:52:26.563Z · LW · GW

Regarding laughter:

https://www.lesswrong.com/posts/NbbK6YKTpQR7u7D6u/false-laughter?commentId=PszRxYtanh5comMYS

You might say that a predictable punchline is too high-entropy to be funny

Since entropy is a measure of uncertainty, a predictable punchline should be low-entropy, no?

Comment by roland on False Laughter · 2020-04-18T08:41:50.983Z · LW · GW

You might say that a predictable punchline is too high-entropy

I'm confused. Entropy is the average level of surprise inherent in the possible outcomes, and a predictable punchline is an event of low surprise. Where does the high entropy come from?
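The point can be checked directly against Shannon's formula (a quick sketch; the punchline probabilities are made-up numbers for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A predictable punchline: one outcome dominates -> low entropy.
print(entropy([0.99, 0.01]))  # ~0.081 bits
# An unpredictable punchline: outcomes near-uniform -> high entropy.
print(entropy([0.5, 0.5]))    # 1.0 bit
```

So on the standard definition, a predictable punchline is indeed low-entropy, which supports the comment's reading.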

Comment by roland on Open & Welcome Thread - January 2020 · 2020-01-12T14:32:08.536Z · LW · GW

For the most point, admitting to having done Y is strong evidence that the person did do Y so I’m not sure if it can generally be considered a bias.

Not generally but I notice that the argument I cited is usually invoked when there is a dispute, e.g.:

Alice: "I have strong doubts about whether X really did Y because of..."

Bob: "But X already admitted to Y, what more could you want?"

Comment by roland on Open & Welcome Thread - January 2020 · 2020-01-09T19:22:18.031Z · LW · GW

What is the name of the following bias:

X admits to having done Y, therefore it must have been him.

Comment by roland on A Critique of Functional Decision Theory · 2019-09-15T10:15:07.417Z · LW · GW

if I am seeing a bomb in Left it must mean I’m in the 1 in a trillion trillion situation where the predictor made a mistake, therefore I should (intuitively) take Right. UDT also says I should take Right so there’s no problem here.

It is more probable that you are misinformed about the predictor. But your conclusion is correct: take Right.

Comment by roland on Open Thread July 2019 · 2019-07-15T10:04:23.267Z · LW · GW

It’s pretty uncharitable of you to just accuse CfAR of lying like that!

I wasn't; rather, I suspect them of being biased.

Comment by roland on Open Thread July 2019 · 2019-07-14T10:29:01.839Z · LW · GW

As the same time I accept the idea of intellectual property being protected even if that’s not the case they are claiming.

I suspect that this is the real reason. But given that the much vaster Sequences by Yudkowsky are freely available, I don't see it as a good justification for not making the CFAR handbook available.

Comment by roland on Open Thread July 2019 · 2019-07-07T13:14:13.527Z · LW · GW

Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can't attend the workshops.

Comment by roland on [deleted post] 2019-07-05T19:05:05.757Z

Is the CFAR handbook publicly available? If yes, link please. If not why not? It would be a great resource for those who can't attend the workshops.

Comment by roland on Welcome and Open Thread June 2019 · 2019-06-30T11:40:43.268Z · LW · GW

What is the conclusion of the polyphasic sleep study?

https://www.lesswrong.com/posts/QvZ6w64JugewNiccS/polyphasic-sleep-seed-study-reprise

Comment by roland on Arbital scrape · 2019-06-20T12:19:17.262Z · LW · GW

Just a reminder, the Solomonoff induction dialogue is still missing:

https://www.lesswrong.com/posts/muKEBrHhETwN6vp8J/arbital-scrape#tKgeneD2ZFZZxskEv

Comment by roland on Arbital scrape · 2019-06-07T19:25:08.013Z · LW · GW

Seconded, that part is missing. Thanks for pointing out that very interesting dialogue.

Comment by roland on Welcome and Open Thread June 2019 · 2019-06-01T13:47:57.663Z · LW · GW

Can asking for advice be bad? From Eliezer's post Final Words:

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing? "For a true Bayesian, information would never have negative expected utility. But humans aren't perfect Bayes-wielders; if we're not careful, we can cut ourselves." How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, and then ask someone for advice, and that someone will change your mind.
Is there more to it? Please reply at the original post: Final Words.

Comment by roland on Final Words · 2019-05-24T11:49:48.894Z · LW · GW

You may take advice you should not take.

I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing?

For a true Bayesian, information would never have negative expected utility. But humans aren't perfect Bayes-wielders; if we're not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, and then ask someone for advice, and that someone will change your mind.

Let's say you already have lots of evidence for one hypothesis, so asking someone is unlikely to change your mind. Yet if you are underconfident you might still be tempted to ask, and if someone gives you contradictory advice you, as a human, will still feel the uncertainty and doubt inside you. This would just be a wasted motion.

Is there more to it?

Comment by roland on Bayesians vs. Barbarians · 2019-04-25T08:22:57.437Z · LW · GW

There were Indians fighting alongside the Germans:

https://en.wikipedia.org/wiki/Indian_Legion

Comment by roland on Open Thread Feb 22 - Feb 28, 2016 · 2018-10-24T08:49:07.446Z · LW · GW

From: https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification

In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals.

Comment by roland on Let's Discuss Functional Decision Theory · 2018-10-12T14:27:56.082Z · LW · GW

Quoting from: https://intelligence.org/files/DeathInDamascus.pdf

Functional decision theory has been developed in many parts through (largely unpublished) dialogue between a number of collaborators. FDT is a generalization of Dai's (2009) "updateless decision theory" and a successor to the "timeless decision theory" of Yudkowsky (2010). Related ideas have also been proposed in the past by Spohn (2012), Meacham (2010), Gauthier (1994), and others.
Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-08T19:27:56.315Z · LW · GW

I've sent you a PM, please check your inbox. Thanks.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-06T11:57:16.220Z · LW · GW

ChristianKL please see my reply here

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-06T11:56:40.063Z · LW · GW

Marko,

first, I don't think it is fair of you to mention viewpoints that I voiced either to you privately or in the group. I was doing so under the expectation of privacy; I wouldn't want them made public. How much can people trust you while doing circling if they have to fear it appearing on the internet?

> Roland has some pet topics such as 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.

We touched on those topics several times, but mostly in private talks between the two of us, so claiming that they derail the discussion is going too far.

I reiterate: since last December I tried to talk to you, asking what the problem was and wanting to get some specific feedback. You finally agreed to a meeting in February, and even then you didn't bring up the points above. Again, it is very unfair of you not to try to address the issues in private before going public.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-03T13:27:58.480Z · LW · GW

There is a difference in claims about who said what. But why do you automatically assume that I'm the one not being truthful?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-03T13:25:20.448Z · LW · GW

No. What I'm saying is that a pseudonymous poster without any history, who pops up out of nowhere, gets credibility. Specifically, do people take the following affirmation at face value?

As one of the multiple people creeped out by Roland in person
Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T20:26:59.258Z · LW · GW

Giego, I agree with your post in general.

> IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.

This is just a strawman that has cropped up here. From the beginning I said I don't mind dropping any topic that is not wanted. This was never the issue.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T18:09:44.157Z · LW · GW

> Ultimately, the Zurich EA group is not an official organisation representing EA. They are just a bunch of people who decide to meet up once in a while. They can choose who they do and do not allow into their group, regardless of how good/bad their reasons, criteria or disciplinary procedures are.

Fair enough. I decided to post this just for the benefit of all. Lots of people in the group don't know what is going on.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T18:00:36.377Z · LW · GW

Marko,

finally, you bring some concrete, specific points. Why didn't you or the others talk to me about them when I requested it? It seems a bit unfair that you bring them up now in public when I asked you in private before.

> „Effective Altruists are the new hippies“

It reflects what I see in some people, though not all of them, and yes, I see it as a problem in parts of EA and Rationality. It is also mentioned in the third post I linked in the introduction (there the talk is about bohemians rather than hippies, but I think it goes in the same direction). Yet I still go to EA meetings and think that I can learn from them.

> „Christianity is a death cult“, etc.

As a former Christian I think that is actually true by definition, unless you believe that Jesus is alive; I got this from Hitchens, btw. Marko, you should be fair and mention that you go to a Christian church, so you are not unbiased in that respect :)

> 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.

I'm indeed a 9/11 skeptic, but I don't remember that topic ever taking over the discussion. Neither was I the one who started the discussion on LW (I think it was Eliezer).

As for Thai prostitutes: at one EA meeting we once had a long discussion about prostitution in general, and that did go overboard; but for fairness' sake you should also mention that I was one of the people who suggested changing the topic.

Again, I have said it several times: if those topics are the problem, I have no problem not talking about them anymore. I told Daniel and Michal that several times.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T13:51:59.193Z · LW · GW

Another thing: a new account (with 3 comments) from a pseudonymous poster who doesn't identify himself posts a subjective claim, along with other claims that can't be verified, and gets 42 upvotes. Something is wrong here.

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:31:30.062Z · LW · GW
Roland has given me new essential information about a conversation between him and another organiser mentioned in the post, I first wanted to check this with said organiser (I did now and it seems that not everything Roland told me is actually true).

I gave new information, but it is not essential. It was related to Rationality Zurich and not to EA Zurich.

About what I'm saying not being true: it seems that what Marko told you is not the same as what he told me. But again, this is only related to Rationality Zürich, not EA Zürich, so why would that make a difference for you from the EA side?

Comment by roland on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T12:23:18.858Z · LW · GW
If that’s what he means by having been “excluded” he is indeed right.

Read my post; I explicitly mentioned that I was still allowed at EA meetings, just not welcome.