Formatting note: something seems to have deleted a couple of your time units:
I spent ~1500 working on genuinely original scientific research.
and
which subsumes the 100 old Poincare conjecture,
Why do we need the full tower? Why couldn't it be the case that just one (or some other finite number) of the Turing Oracle levels are physically possible?
We don't know that it's physically impossible (although it does look that way), but even if we did, that wouldn't mean it's contradictory, certainly not to the extent that using it you'd "mostly derive paradox theorems and contradictions".
Why do you think that a hypercomputer is inherently contradictory?
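For what it's worth, the reason the tower doesn't stop at any finite level is the relativized halting problem: the same diagonalization that rules out an ordinary halting decider also rules out a level-n oracle machine deciding halting for level-n machines. A minimal sketch of that argument (my illustration, not from the comments above; `halts` is a hypothetical oracle, not a real function):

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical halting oracle for machines at the current level."""
    raise NotImplementedError("no machine at this level can implement this")

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever the oracle predicts about (p, p)."""
    if halts(program_source, program_source):
        while True:  # the oracle said we halt, so loop forever
            pass
    # the oracle said we loop forever, so halt immediately

# Feeding diagonal its own source contradicts either answer halts() could
# give, so no machine at this level can implement halts(). Deciding it
# takes an oracle one level up, and the argument relativizes, which is
# what generates the whole tower.
```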
The LW Tumblr contingent has a Skype group.
The relevant test is 'Do I want to see more things like this on LW', and the answer is no, because I value clarity more than seeing things I would agree with if I understood them.
Relevant previous LW posts on the A_p distribution and model stability:
http://lesswrong.com/lw/igv/probability_knowledge_and_metaprobability/
http://lesswrong.com/lw/h78/estimate_stability/
http://lesswrong.com/lw/hnf/model_stability_in_intervention_assessment/
Interestingly, both concepts seem worthwhile to me... and I mostly advocate a combination of hedonistic and preference utilitarianism.
Eliezer said this would just have been Harry antimatter-suiciding and Hermione waking up in a flaming crater.
Is the novel content written by you, Eliezer, or others?
Cool, that clears it up, thanks!
(I got that you were being sarcastic, but I wasn't clear which possible sucky thing you were disapproving of)
I got the four, but not the rectangle - I just noticed that two elements only appeared three times.
Huh. I got the same answer, but a different way.
Rnpu vgrz vf znqr hc bs gur cerfrapr be nofrapr bs bar bs fvk onfvp ryrzragf. Rnpu ryrzrag nccrnef sbhe gvzrf, rkprcg gubfr gjb.
This comment confuses me.
sure?
how sure are you, and how much do you have to bet?
There's a Welcome Thread that you might want to check out!
it is if you can get evidence about your UF.
I think you overestimate the likelihood that EY even read your comment. I doubt he reads all the comments in HPMOR discussion anymore.
Why was this trolling? This was in fact true, although Wei Dai's UDT ended up giving rise to a better framework for future and more general DT work.
The link to Non-Omniscience, Probabilistic Inference, and Metamathematics isn't right. Also, 'published earlier this year' is now wrong, it should be 'midway through last year' :D
The author also needs to work on his own rationality. The car example is just bad, start to finish. You need a lot more information to even estimate net deaths from the car in question.
Which has nothing to do with the point being made.
IIRC, we were doing it as an initial pass-through, but that plan might have changed.
Perhaps not, but there is good evidence for drugs+therapy doing better than either alone.
I'm trying to learn Linear Algebra and some automata/computability stuff for courses, and I have basic set theory and logic on the backburner.
An actual device?!?
Thanks! I didn't find it with my minute of googling; good to know it's legit.
I don't suppose you have a source for the quote? (at this point, my default is to disbelieve any attribution of a quote unknown to me to Einstein)
'noble phantasm' is probably a reference to Fate/Stay Night, wherein a noble phantasm is a weapon or object of unusual renown that a certain class of beings possess, granting them signature powers.
I would put such things in the bragging thread - why the separation?
I don't think it's a good idea to write things expressing opinions like this as if you're presenting the majority view, even when you think it is. I for one completely disagree with the first paragraph, and would only like transparency wrt deletions if it was unobtrusive.
So, after reading the comments, I figure I should speak up because selection effects
I appreciated the deletion of the original post. I thought it was silly and pointless, and not what should be on LW. I didn't realize it was being upvoted (or I would have downvoted it), and I still don't know why it was.
I endorse the unobtrusive (i.e., silent and unannounced) deletion of things like this (particularly given that the author was explicitly not taking the post seriously: written while drunk, etc.), and I suspect others do as well.
There's a thing that happens wherein any disagreement with moderation ends up being much more noticeable than agreement. I wouldn't be surprised if there were many who, like me, agreed with decisions like this and weren't speaking up. If so, I urge you to briefly comment (even just "I agree/d with the decision to delete").
This comment's parenthetical was at least 10x more valuable to me than the OP.
I was scrolling through, saw this comment, and reread ialdabaoth's comment and upvoted it, which I wouldn't have done without yours. Upvoted.
You mean on average? The studies I'm thinking of had small or no differences, but I'm pretty sure there are other results out there.
I don't have the citation to hand, but IIRC there's research suggesting higher variance among parents is the most significant effect.
You offering?
Oops, I actually didn't mean to post that! Usually when I'm making an obvious criticism, after I write it I go back and double-check that I haven't missed or misinterpreted something; this time I noticed that I had, and meant to delete the unposted comment. I guess I must have hit enter at some point.
Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off.
grumble grumble only if the people the money went from were drawn from the same or similar distribution as the person it goes to.
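A quick numeric sketch of that grumble (my illustration, assuming a log utility of wealth, which the quote's "each additional dollar is less valuable" suggests): transfer $100 from each of ten donors to one recipient. If the donors sit at the same wealth as the recipient, total utility falls; if they are much richer, it rises.

```python
import math

def total_log_utility(wealths):
    """Total utility under log utility of wealth (diminishing marginal value)."""
    return sum(math.log(w) for w in wealths)

recipient = 1000.0
transfer_per_donor = 100.0
n_donors = 10

for label, donor_wealth in [("similar to recipient", 1000.0),
                            ("much richer", 100000.0)]:
    donors = [donor_wealth] * n_donors
    before = total_log_utility(donors + [recipient])
    after = total_log_utility([d - transfer_per_donor for d in donors]
                              + [recipient + n_donors * transfer_per_donor])
    print(f"donors {label}: total utility change = {after - before:+.3f}")
```

Under this (assumed) concave utility, the first case comes out around -0.36 and the second around +0.68: the direction of the effect depends entirely on where the donors sit relative to the recipient, which is the point of the objection.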
I don't see a public spectacle - the names were redacted, etc. And Kaj's post seems to be asking "what should our policy on this be" to me.
I suggest you ask at the Effective Altruist Facebook page - there's usually fairly good coverage there, and if such a thing exists, someone there will know of it.
I don't see why Yudkowsky makes superintelligence a requirement for this.
Because often, when we talk about 'worst-case' inputs, it would take something of this order to deliberately construct the worst case, in theoretical CS at least. I don't think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved. In fact, one focus of fields like cryptography (or systems security?), where an adversary is assumed, is to structure things so the adversary has to solve as hard a problem as you can make it. Assuming worst-case input is like assuming the hacker has to do no work to solve any of these problems and automatically knows the inputs that will screw with your solution most.
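To make that concrete, here is a minimal sketch (my illustration, not from the original comment): a quicksort that always picks the first element as its pivot. An adversary who knows that rule can hand it already-sorted input, forcing every partition to be maximally unbalanced and degrading the average-case O(n log n) to the worst-case O(n^2), while random inputs almost never hit that behavior.

```python
import random
import sys
import time

sys.setrecursionlimit(5000)  # sorted input drives recursion depth to ~n

def naive_quicksort(xs):
    """Quicksort with a fixed, publicly known pivot rule: the first element."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return naive_quicksort(left) + [pivot] + naive_quicksort(right)

n = 2000
random_input = random.sample(range(n), n)  # typical case
adversarial_input = list(range(n))         # worst case for this pivot rule

for label, data in [("random", random_input), ("adversarial", adversarial_input)]:
    start = time.perf_counter()
    naive_quicksort(data)
    print(f"{label} input: {time.perf_counter() - start:.3f}s")
```

The adversarial input is exactly the "no work" scenario above: without an adversary you would have to be unlucky to see it; with one, you should expect it.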
Not sure how much relevant overlap there is. CM seems focused primarily on education, and on spreading relatively available but difficult-to-compile info to people who can derive value from it.
CFAR is largely focused on much less available content, and on the development of new content, both aimed at more general needs.
Not so sure about 80K, although it is much less education-focused. It also seems to have an additional purpose: providing support for people to do good, as well as to be more competent.
Neither appears to have a funding gap.
I've seen discussion of Diarist (and maybe MRI) on LW before. None of the others seem to me plausible enough to even bother considering, and even those seem useful primarily as a supplement for the people reconstructing you from cryonics.
Did this get written?
that's worse than letting billions of children be tortured to death every year. that's worse than dying from a supernova.
No? The story explicitly rejects this. It is only because the Superhappies can deal with the Babyeaters on their own, and because solutions to the human problem do not prevent this, that the story is resolved in other ways.
that's worse than dying from mass suicide.
I don't see the story as advocating this; Akon does not commit suicide, for example. The problem is not that the value difference between human life before and after the change is larger than the negative value of death. It is that this difference, multiplied over the entire human race and its future potential members, is so large as to outweigh a comparatively tiny number of deaths. I'm not sure that is true, but it is the position of those in the story.
you really think existence without pain is that bad? you really think they are not "true humans"?
I don't think he thinks that. I think he (Eliezer_2009) thinks they have lost something important, some aspect of their humanity - but that doesn't mean they are completely inhuman.
Do you have planned articles for discussing? How late do you plan on going?
This has been circulating among the tumblrers for a little bit, and I wrote a quick infodump on where it comes from.
TL;DR: The article comes from (is co-authored by) a brand-new x-risk organization founded by Max Tegmark and four others, with all of the authors from its scientific advisory board.
The guys in the Australian community are truly awesome! I'd definitely recommend it if it's a viable option for you (and I'm happy to talk about the people I met at the meetup next Sunday if anyone wants)
I think the main effect wrt the former is as an introduction to rationality and the Sequences.