Posts
Comments
Bayes for arguments: how do you quantify P(E|H) when E is an argument? E.g., I present you with a strong argument supporting hypothesis H; how can you put a number on that?
His other books are also great.
that it’s reasonable for Eliezer to not think that marginally writing more will drastically change things from his perspective.
Scientific breakthroughs live on the margins, so if he has guesses on how to achieve alignment, sharing them could make a huge difference.
I have guesses
Even a small probability of solving alignment should have big expected utility modulo exfohazard. So why not share your guesses?
Weighted step ups instead of squats
Lunges vs weighted step ups?
Source please
Why would a weighted step up be better and safer than a squat?
Weighted step ups instead of squats can be loaded quite heavy.
What are the advantages of weighted step ups vs squats without bending your knees too much? Squats would have the advantage of greater stability and only having to do half the reps.
Valence-Owning
Could you please give a definition of the word valence? The definition I found doesn't make sense to me: https://en.wiktionary.org/wiki/valence
1.1. It’s the first place large enough to contain a plausible explanation for how the AGI itself actually came to be.
According to this criterion, we would be in a simulation, because there is no plausible explanation of how the Universe was created.
- exfohazard
- expohazard (based on exposition)
Based on the Latin prefix ex-
IMHO better than outfohazard.
The key here would be an exact quantification: how many carbs do these cultures consume in relation to their amount of physical activity?
Has the hypothesis
excess sugar/carbs -> metabolic syndrome -> constant hunger and overeating -> weight gain
been disproved?
Rather, my read of the history is that MIRI was operating in an argumentative argument where:
argumentative environment?
A good critical book on this topic is House of Cards by Robyn Dawes
If we have to use voice, we can still try to ask hard questions and get fast answers, but because of the lower rate it’s hard to push far past human limits.
You could go with IQ-test-type progressively harder number sequences. Use big numbers that are hard to calculate in your head.
E.g. start with a random 3-digit number; each following number is the previous one squared minus 17. If he/she figures it out in 1 second, he must be an AI.
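A minimal sketch of what such a challenge generator could look like (the function name and parameters are just illustrative):

```python
import random

def squared_minus_17_sequence(length=4):
    """Start from a random 3-digit number; each term is the previous one squared minus 17."""
    x = random.randint(100, 999)
    seq = [x]
    for _ in range(length - 1):
        x = x * x - 17
        seq.append(x)
    return seq

print(squared_minus_17_sequence())
# The terms blow up fast, so continuing the sequence within one second
# is far outside human mental arithmetic.
```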
If you like Yudkowskian fiction, Wertifloke = Eliezer Yudkowsky
The Waves Arisen https://wertifloke.wordpress.com/
Is it OK to omit facts to your lawyer? I mean, is the lawyer entitled to know everything about the client?
Eliezer Yudkowsky painted "The Scream" with paperclips:
Do I deserve some credit?
https://www.lesswrong.com/posts/trvFowBfiKiYi7spb/open-thread-july-2019?commentId=MjCcvKXpvuWK4zd9g
Does a predictable punchline have high or low entropy?
From False Laughter
You might say that a predictable punchline is too high-entropy to be funny
Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?
Regarding laughter:
https://www.lesswrong.com/posts/NbbK6YKTpQR7u7D6u/false-laughter?commentId=PszRxYtanh5comMYS
You might say that a predictable punchline is too high-entropy to be funny
Since entropy is a measure of uncertainty, a predictable punchline should be low entropy, no?
You might say that a predictable punchline is too high-entropy
I'm confused. Entropy is the average level of surprise inherent in the possible outcomes, and a predictable punchline is an event of low surprise. Where does the high entropy come from?
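A quick illustration of why a predictable punchline comes out as low entropy (the probabilities are made up just to show the calculation):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A punchline the audience sees coming with ~99% certainty: almost no surprise on average.
print(shannon_entropy([0.99, 0.01]))   # ~0.08 bits

# One of 16 equally plausible punchlines: much more surprise on average.
print(shannon_entropy([1 / 16] * 16))  # 4.0 bits
```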
For the most part, admitting to having done Y is strong evidence that the person did do Y, so I’m not sure if it can generally be considered a bias.
Not generally, but I notice that the argument I cited is usually invoked when there is a dispute, e.g.:
Alice: "I have strong doubts about whether X really did Y because of..."
Bob: "But X already admitted to Y, what more could you want?"
What is the name of the following bias:
X admits to having done Y, therefore it must have been him.
if I am seeing a bomb in Left it must mean I’m in the 1 in a trillion trillion situation where the predictor made a mistake, therefore I should (intuitively) take Right. UDT also says I should take Right so there’s no problem here.
It is more probable that you are misinformed about the predictor. But your conclusion is correct: take the Right box.
It’s pretty uncharitable of you to just accuse CfAR of lying like that!
I wasn't; rather, I suspect them of being biased.
At the same time I accept the idea of intellectual property being protected, even if that’s not the reason they are claiming.
I suspect that this is the real reason, although given that the much vaster Sequences by Yudkowsky are freely available, I don't see it as a good justification for not making the CFAR handbook available.
Is the CFAR handbook publicly available? If yes, link please. If not, why not? It would be a great resource for those who can’t attend the workshops.
What is the conclusion of the polyphasic sleep study?
https://www.lesswrong.com/posts/QvZ6w64JugewNiccS/polyphasic-sleep-seed-study-reprise
Just a reminder, the Solomonoff induction dialogue is still missing:
https://www.lesswrong.com/posts/muKEBrHhETwN6vp8J/arbital-scrape#tKgeneD2ZFZZxskEv
Seconded, that part is missing. Thanks for pointing out that very interesting dialogue.
Can asking for advice be bad? From Eliezer's post Final Words:
You may take advice you should not take.
I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing?
For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.
How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, then ask someone for advice, and that someone changes your mind.
Is there more to it?
Please reply at the original post:
Final Words.
You may take advice you should not take.
I understand that this means to just ask for advice, not necessarily follow it. Why can this be a bad thing?
For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves. How can we cut ourselves in this case? I suppose you could have made up your mind to follow a course of action that happens to be correct, then ask someone for advice, and that someone changes your mind.
Let's say you already have lots of evidence for one hypothesis, so asking someone is unlikely to change your mind. Yet if you are underconfident you might still be tempted to ask, and if someone gives you contradictory advice you, as a human, will still feel the uncertainty and doubt inside you. This will just be a wasted motion.
Is there more to it?
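Here is a toy calculation of the "information never has negative expected utility for a true Bayesian" claim; the prior, utilities, and the advisor's 0.8 accuracy are made up purely for illustration:

```python
# For an ideal Bayesian, consulting an advisor before acting can never lower expected utility.

def best_eu(p_h, utils):
    """Expected utility of the best action given P(H) = p_h.
    utils maps action -> (utility if H, utility if not H)."""
    return max(p_h * u_h + (1 - p_h) * u_not_h for u_h, u_not_h in utils.values())

prior_h = 0.7
utils = {"act": (1.0, -1.0), "abstain": (0.0, 0.0)}

# The advisor says "do it" or "don't", and is right with probability 0.8.
p_signal_given_h = {"do it": 0.8, "don't": 0.2}
p_signal_given_not_h = {"do it": 0.2, "don't": 0.8}

eu_without_advice = best_eu(prior_h, utils)

eu_with_advice = 0.0
for s in ("do it", "don't"):
    p_s = prior_h * p_signal_given_h[s] + (1 - prior_h) * p_signal_given_not_h[s]
    posterior_h = prior_h * p_signal_given_h[s] / p_s   # Bayes update on the advice
    eu_with_advice += p_s * best_eu(posterior_h, utils)

print(eu_without_advice)  # 0.4
print(eu_with_advice)     # 0.5 -- never below the no-advice value for an ideal Bayesian
```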
There were Indians fighting along with Germans:
From: https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification
In The Bottom Line, I observed that only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals.
Quoting from: https://intelligence.org/files/DeathInDamascus.pdf
Functional decision theory has been developed in many parts through (largely unpublished) dialogue between a number of collaborators. FDT is a generalization of Dai's (2009) "updateless decision theory" and a successor to the "timeless decision theory" of Yudkowsky (2010). Related ideas have also been proposed in the past by Spohn (2012), Meacham (2010), Gauthier (1994), and others.
I've sent you a PM, please check your inbox. Thanks.
ChristianKL, please see my reply here.
Marko,
first, I don't think it is fair of you to mention viewpoints that I voiced either to you privately or in the group. I was doing so under the expectation of privacy; I wouldn't want it to be made public. How much can people trust you while doing circling if they have to fear it appearing on the internet?
> Roland has some pet topics such as 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.
We touched on those topics several times, but most of that was in private talks between the two of us, so claiming that they derail the discussion is going too far.
I reiterate: since last December I have tried to talk to you, asking what the problem is and wanting to get some specific feedback. You finally agreed to a meeting in February, and even then you didn't bring up the points above. Again, it is very unfair of you not to try to address the issues in private before going public.
There are differing claims about who said what. But why do you automatically assume that I'm the one not being truthful?
No. What I'm saying is that a pseudonymous poster without any history, who pops up out of nowhere, gets credibility. Specifically, do people take the following affirmation at face value?
As one of the multiple people creeped out by Roland in person
Giego, I agree with your post in general.
> IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.
This is just a strawman that has cropped up here. From the beginning I said I don't mind dropping any topic that is not wanted. This never was the issue.
> Ultimately, the Zurich EA group is not an official organisation representing EA. They are just a bunch of people who decide to meet up once in a while. They can choose who they do and do not allow into their group, regardless of how good/bad their reasons, criteria or disciplinary procedures are.
Fair enough. I decided to post this just for the benefit of all. Lots of people in the group don't know what is going on.
Marko,
finally you bring some concrete specific points. Why didn't you or the others talk to me about it when I requested it? It seems a bit unfair that you bring it up now in public when I asked you in private before.
> „Effective Altruists are the new hippies“
It reflects what I see in some people, though not all of them, and yes, I see it as a problem in parts of EA and Rationality. It is also mentioned in the third post I linked in the introduction (there the talk is about bohemians rather than hippies, but I think it goes in the same direction). Yet I still go to EA meetings and think that I can learn from it.
> „Christianity is a death cult“, etc.
As a former Christian I think that is actually true by definition, unless you believe that Jesus is alive; I got this from Hitchens, btw. Marko, you should be fair and mention that you go to a Christian church, so you are not unbiased in that respect :)
> 9/11 truth and Thai prostitutes that he brings up frequently and that derail and degrade the quality of discussion.
I'm indeed a 9/11 skeptic, but I don't remember this topic ever taking over the discussion. Neither was I the one who started the discussion on LW (I think it was Eliezer).
As for Thai prostitutes: we once had a long discussion at an EA meeting about prostitution in general, and that did go overboard; for fairness' sake you should also mention that I was one of the people who suggested changing the topic.
Again, as I said several times: if those topics are the problem, I have no problem not talking about them anymore. I told Daniel and Michal that several times.
Another thing. A new account (with 3 comments) from a pseudonymous poster who doesn't identify himself posts a subjective claim and other claims that can't be verified, and gets 42 upvotes. Something is wrong here.
Roland has given me new essential information about a conversation between him and another organiser mentioned in the post, I first wanted to check this with said organiser (I did now and it seems that not everything Roland told me is actually true).
I gave new information, but it is not essential. It was related to Rationality Zurich and not to EA Zurich.
About what I'm saying not being true: it seems that what Marko told you is not the same as what he told me. But again, this is only related to Rationality Zürich, not EA Zürich, so why would that make a difference for you from the EA side?
If that’s what he means by having been “excluded” he is indeed right.
Read my post: I explicitly mentioned that I was still allowed at EA meetings, just not welcome.