Comments
You might be interested in reading about aspiration adaptation theory: https://www.sciencedirect.com/science/article/abs/pii/S0022249697912050
To me, the most appealing part of it is that goals are incomparable and that multiple goals can be pursued at the same time, without the need for a function that aggregates them and assigns a single value to each combination of goals.
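To make that point concrete, here is a minimal toy sketch in Python of the "no aggregation" idea; it is my own illustration, not the paper's formalism. Each goal keeps its own aspiration level, an option is acceptable only if it meets every aspiration at once, and when nothing is acceptable the agent retreats on a single goal (loosely echoing the theory's urgency ordering) rather than computing a numeric trade-off. The goal names, numbers, retreat order, and step size are all invented for the example.

```python
# Toy illustration only (not Selten's actual model): an agent pursues
# several incomparable goals via separate aspiration levels instead of
# collapsing them into one aggregate utility score.

# What currently counts as "good enough" on each goal (made-up values).
aspirations = {"profit": 50, "safety": 70, "free_time": 20}

# Candidate actions, each described separately on every goal dimension.
options = {
    "option_a": {"profit": 60, "safety": 75, "free_time": 10},
    "option_b": {"profit": 40, "safety": 90, "free_time": 30},
    "option_c": {"profit": 80, "safety": 40, "free_time": 25},
}

# When stuck, relax the least urgent goal first (an invented ordering).
retreat_order = ["free_time", "profit", "safety"]


def acceptable(outcome, aspirations):
    """An option is acceptable only if it meets the aspiration on every
    goal simultaneously; no combined utility number is ever computed."""
    return all(outcome[g] >= aspirations[g] for g in aspirations)


def choose(options, aspirations, retreat_order, step=5):
    """Return any acceptable option; if none exists, retreat (lower the
    aspiration) on one goal instead of trading goals off numerically."""
    good = [name for name, o in options.items() if acceptable(o, aspirations)]
    if good:
        return good[0], aspirations
    least_urgent = retreat_order[0]
    relaxed = {**aspirations, least_urgent: aspirations[least_urgent] - step}
    return None, relaxed


# Keep retreating until some option becomes acceptable; here the loop ends
# once the free_time aspiration drops to 10 and option_a satisfies all goals.
choice = None
while choice is None:
    choice, aspirations = choose(options, aspirations, retreat_order)
print(choice, aspirations)
```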
I'm quite late (the post was made four years ago), and I'm also new to LessWrong, so it's entirely possible that other, more experienced members will find flaws in my argument.
That being said, I have a very simple, short and straightforward explanation of why rationalists aren't winning.
Domain-specific knowledge is king.
That's it.
If you are a programmer and your code keeps throwing errors at you, then no matter how many logical fallacies and cognitive biases you can identify and name, posting your code on Stack Overflow is going to provide orders of magnitude more benefit.
If you are an entrepreneur and you're trying to start your new business, then no matter how many hours you spend assessing your priors and calibrating your beliefs, it's not going to help you nearly as much as being able to tell a good manager apart from a bad manager.
I'm not saying that learning rationality won't help at all; rather, I'm saying that its impact on your chances of success will be many times smaller than the impact of learning domain-specific knowledge.
Ok, thank you for the clarification!
I'm very new to LessWrong in general, and to Eliezer's writing in particular, so I have a newbie question about these two quotes:
any more than you've ever argued that "we have to take AGI risk seriously even if there's only a tiny chance of it" or similar crazy things that other people hallucinate you arguing.
just like how people who helpfully try to defend MIRI by saying "Well, but even if there's a tiny chance..." are not thereby making their epistemic sins into mine.
I've read AGI Ruin: A List of Lethalities, and I legitimately have no idea what is wrong with "we have to take AGI risk seriously even if there's only a tiny chance of it". What is wrong with it? If anything, it seems like exactly what I would say if I had to convey the gist of AGI Ruin: A List of Lethalities to someone else very briefly, in very few words.
The fact that I have absolutely no clue what is wrong with it probably means that I'm still very far from understanding anything about AGI and Eliezer's position.