So basically the whole universe is a Boltzmann brain.
Please, no. The world already has a sickening amount of steampunk.
Does it now? Care to recommend some?
It may be hard to rob you, but easy to shoot you down.
It would be fun to have corporations build space stations, ostensibly for technological benefits, but not disclosing details, so that your question would remain unanswered inside the story.
I would also mention Deborah Anapol's "Polyamory: the new love for the 21st century". I think about it as a survey of polyamorous practices, struggles, communities. It was crucial for me to get the sense of normality. Haven't read Taormino.
This may be a case of regression to the mean, with the thing whose parameters regress being conscious and not caring about these particular parameters.
And... done. I would like to point out that the X-Risk question may be confusing when skimming. P(X-Risk) looks as if it were asking for the probability of the catastrophe coming to pass, but the explanation spells out that the probability of humanity successfully avoiding catastrophe should be entered.
It would have been nice insurance against possible future PR shitstorms. Was that your primary reason for suggesting it?
This is weird. I hadn't noticed that until you pointed it out, but I believe my masculinity score was only a little lower than all the benchmarks, rather than extremely low, only because I considered how my partner would gauge the BSRI questions. They seem to push me towards expressing masculine traits. Isn't it interesting that a sex-role inventory doesn't make allowances for situations priming different sex roles in people?
Ooh, missed the announcement. I won't make it in time now. Anyhow, I'll keep it in mind to get in touch with you next time I'm there. Have a blast!
WHEN: 26 October 2014 03:00:00PM (+0400)
We start at 14:00 and stay until at least 19-20.
So which one is it?
It looks as if it might be worth manually disabling pictures (e.g. with HTTP Switchboard) and browsing profiles seeing only the text.
Immersion is not an option for me currently.
Whatever you do, immerse yourself as much as possible in your circumstances. This most likely means having the radio blaring in Hebrew whenever it's not actively obstructing whatever you're trying to do; plastering your living space with labels; adding Hebrew blogs to your blogroll; and seeking social activities outside your comfort zone, such as volunteering at a retirement home with lonely seniors or attending insipid school plays at your local center for Hebrew language and culture.
Yes, I have had similar issues, and I can't say that I managed to overcome them successfully, but I'm committed to continuing, and I'm in the middle of reorganizing my life so that I can direct more resources there.
I'm not offering any specific advice for now, besides the obvious: http://www.sparringmind.com/changing-habits/, http://www.sparringmind.com/productivity-science/, but I'm responding here to start the dialogue and to nudge us both in the right direction.
So yeah, I wish for you to untangle your motivations and follow through.
In what way were their hormone levels affected? I can't even begin to guess.
Aren't psychostimulants, such as amphetamine and its derivatives or modafinil¹, legitimate means for augmenting mood, cognition, and productivity? Or are they seriously dangerous? Can you point me to some relevant research?
¹ Can modafinil be lumped together with other psychostimulants?
It's as if you're participating in a prediction market such as PredictionBook or The Good Judgement Project.
It seems that you could capture the benefits of both having the material online and searchable and retaining interested readers with regular updates: publish everything at once, and then regularly post your analytical readings of Eliezer's material. If you have the audience already, those could naturally grow into discussion posts.
May I suggest applying your CSS skills to styling Anki cards? A personal anecdote: I have a lot of decks for various languages, and having the cards styled differently helps with switching context. It's also nice to have them look pleasant.
Do you have a citation for 15-30 minutes being a reasonable time for blood glucose levels to change in response to consuming a banana? I remember reading that it takes significantly longer than that, up to 150 minutes, but I can't find a proper source at the moment. The closest I can find is The 4-Hour Body, and I don't know how trustworthy it is. It also says that fructose may lower blood glucose levels.
I've tried using HabitRPG before, but didn't stick with it. I've started using Lift, working out every day following http://7-min.com. Somehow the expectation of checking off today's habits keeps me going through the motions, and the automated timer reduces the friction of switching into the mental state appropriate for exercising.
There's even a special page on the Amazon website for the express purpose of cancelling ebook purchases within the last 7 days: http://www.amazon.com/gp/help/customer/display.html?nodeId=200144510
Could you name some actual writers' IRC channels? I've never seen any.
Sounds like a case of extreme discounting or a very close planning horizon.
On Will Newsome's IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there were some way to verify true beliefs at the beginning). Make it happen, somebody.
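A league like that would only need the standard Elo update rule, which is simple to automate. A minimal sketch (the K-factor of 32 and the 400-point scale are conventional defaults, not anything specified in the comment):

```python
import math

def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Return updated (rating_a, rating_b) after one game.

    score_a is 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
    The expected score follows the standard logistic curve on a
    400-point scale; the total rating change sums to zero.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Two equally rated players: the winner gains exactly k/2 points.
print(elo_update(1200, 1200, 1.0))  # (1216.0, 1184.0)
```

Ratings are zero-sum per game, so the league-wide average stays fixed; upsets (a low-rated player beating a high-rated one) move more points than expected results.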
Ooh, this would be so great!
What if Bludgers, being modelled after naive physics, have an inherent knocking-people-out property? Wouldn't that be in line with how canon is being dealt with in HPMOR?
If we're taking seriously the possibility of basilisks actually being possible and harmful, isn't your invitation really dangerous? After all, what if Axel has thought of an entirely new cognitive hazard, different from anything you may already be familiar with? What if you succumb to it? I'm not saying that it's probable, only that it should warrant the same precautions as the original basilisk debacle, which led to enacting censorship.
Aye. If you need another nudge, I'd like to say that it's a great idea, and yes, I would help you test resulting decks.
I'm not so sure that an AI suggesting murder is clear evidence of it being unfriendly. After all, it may have a good reason to believe that if it doesn't stop a certain researcher ASAP and at all costs, then humanity is doomed. One way around that is to assign infinite positive value to human life, but can you really expect CEV to be handicapped in such a manner?
It may be benevolent and cooperative in its present state even if it believes FAI to be provably impossible.
That's what I figured, but I hoped I was wrong, and there's still a super-secret beer-lovers' club which opens if you say "iftahh ya simsim" thrice or something. Assuming you would let me in on a secret, of course.
Wait, I thought that library.nu was shut down back in the spring. What am I missing?
This is an interesting way to look at things. I would assert a higher probability, so I'm voting up. Even a slight tweaking (x+ε, m-ε) is enough. I'm imagining a continuous family of mappings starting with identity. These would preserve the structures we already perceive while accentuating certain features.
Fairly certain (85%–98%).
I was confused about getting several upvotes quickly, but without prompting debate. I began wondering if my proposition pattern-matched something not as interesting to discuss.
What a fun game! I notice that I'm somewhat confused, too. I see a couple of different approaches; maybe some of the upvoters would step in and explain themselves.
Irrationality game
Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions you would expect it to be possible for them to have. Thus, conditioned on Adam's humanity, you would need very little additional information to get a good idea of Adam's morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and it's not possible to cheat by formalizing morality separately.
Dark arts are very toxic, in the sense that you naturally and necessarily use any and all of your relevant beliefs to construct self-serving arguments on most occasions. Moreover, once you happen to successfully use some rationality technique in a self-serving manner, you become more prone to using it that way on future occasions. Thus, once you catch other people using dark arts and understand what's going on, you become more likely to use the same tricks yourself. >80% sure (I don't have an intuitive feeling for amounts of evidence, but here I would need at least 6 dB of evidence to become uncertain).
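For calibration, "decibels of evidence" here is the standard convention of 10 * log10 of a likelihood ratio, so 6 dB corresponds to a likelihood ratio of roughly 4:1. A quick sketch (the function names are my own):

```python
import math

def decibels_of_evidence(likelihood_ratio):
    """Convert a likelihood ratio into decibels of evidence,
    using the convention dB = 10 * log10(ratio)."""
    return 10.0 * math.log10(likelihood_ratio)

def ratio_for_decibels(db):
    """Likelihood ratio corresponding to a given number of decibels."""
    return 10.0 ** (db / 10.0)

print(decibels_of_evidence(10))   # 10.0 dB for 10:1 evidence
print(ratio_for_decibels(6))      # ~3.98, i.e. roughly 4:1
```

The two functions are inverses, which makes it easy to move between "how strong is this evidence?" in odds and in decibels.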
Registered.
This is clear, entertaining and to the point. Thank you.
A nitpick:
So a strong slippery slope argument is one where both the utility of the outcome, and the outcome's probability is high
You may have meant "disutility".
They don't need many Aurors; it's just that Aurors come in trios.
Draco doesn't even need to know about this.
A good first step in optimizing the world according to your wishes is noticing and acknowledging that you've got a problem.
With that in mind, why should the rationalist community frame its core activities — development of epistemic rationality and acquisition of instrumental rationality, plus public advocacy of sanity — simply as another fun game to engage in, with the added benefit of warm fuzzies and making oneself feel smart?
Wouldn't it be better to provoke a question, or, better yet, an acknowledgement — yes, I am (neurotypical) human, I am fallible (irrational), I wish I could become less wrong.
That's why I'm proposing to have /Ir/rationality in the name. As for the name itself, I am at a loss. I don't see the benefit of adhering to some common naming convention as substantial. How about "We, the Irrational Humans"? Humans, because I don't envision any AGIs, swarms, extraterrestrials, chimps, squids, or dolphins joining us any time soon.
Email sent.
This is a very nice way to close the feedback loop between the practice of research and the sort of theory preached here.
This is all well and good, but imagine that, instead of living in a world where people generally don't communicate optimally and tend to irrationally cling to their memes, we lived in a world of rational discourse, where truths are allowed to bubble up naturally to the surface and manifest as similar conclusions from disparate experiences.
In this hypothetical world you would benefit from arguing with a crackpot — you would supply xem with the evidence xe overlooked (because from within xyr model it felt irrelevant, so xe didn't pursue it — that's how I imagine one could end up with crackpot beliefs in a rational world), and xyr non-obvious truth would come up as the reason for xyr weird world-view. In that situation the marginal benefit of engagement is high, because behind most crackpot theories there would be an extremely rare, and thus valuable, experience (= a piece of evidence about the nature of your common world), and the marginal cost of engagement is diminished, because your effort is expended on adjusting both your and xyr maps, not on defeating their cognitive defenses.
With me so far? It gets better. There's no hard and fast boundary between our world and the one painted above. And there are different kinds of crackpots. I'm pretty sure that there are many people with beliefs that you have good enough reasons to dismiss, yet which make total sense to somebody with their experiences. And many of them can be argued with. They may be genuinely interested in finding the truth, or winning at life, or hearing out contrarian opinions. They may not be shunned by society enough to develop thick defenses. They may be smart and rational (as far as humans go, which is not very far).
So finding the right kind of crackpots becomes a lucrative problem — a source of valuable insights and debating practice.
Weakly related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/ and http://en.wikipedia.org/wiki/God_of_the_gaps
An innocent person is a lot more receptive than someone who has heard the retarded version of an idea. To paraphrase Schopenhauer, it is not weakness of the cognitive faculties that leads people astray; it is preconception, prejudice.
How do we know that the situation with various crackpot ideas is any different? We don't actually go and spend weeks seeking out and dissecting the most sane version of every conspiracy we've caught wind of. How can we be so certain that if we did that we wouldn't find some non-obvious truths?
What was the subject of their argument?
I'm familiar with LaTeX and willing to help. I'd love to discuss some of the papers in more detail, too.
Edit: email sent.
That study sounds interesting, could you post a link if you happen to find it?