"Beliefs shoulder the burden of having to reflect the territory, while emotions don't."
This is how I have come to think of beliefs. It's like refactoring code: you should do it when you spot regularities you can squeeze efficiency out of, but only if it doesn't make the code unwieldy or unnatural, and only if it doesn't make the code fragile. Beliefs should work the same way. When my rules of thumb seem to respect some regularity in reality, I'm perfectly happy to call that "truth", so long as doing so doesn't break my tools.
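A toy illustration of the kind of refactor I mean (an invented example, nothing deep):

```python
# Before: the same regularity ("base fee plus a per-kg rate") is
# spelled out separately in each function.
def shipping_cost(weight_kg):
    return 5.0 + 0.1 * weight_kg

def handling_cost(weight_kg):
    return 2.0 + 0.1 * weight_kg

# After: the shared regularity is factored out. Worth doing only if
# the abstraction stays natural and doesn't make callers fragile.
def linear_cost(base, rate=0.1):
    def cost(weight_kg):
        return base + rate * weight_kg
    return cost

shipping_cost = linear_cost(5.0)
handling_cost = linear_cost(2.0)

assert shipping_cost(10) == 6.0
```

The "belief" here is the factored-out rule: it earns its keep exactly when the regularity is real and the abstraction doesn't break anything downstream.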
"Beliefs shoulder the burden of having to reflect the territory, while emotions don't." Superb point that. And thanks for the links.
Good point about beliefs possibly only "feeling" useful. But that applies to accuracy as well: privileging accuracy can also lead you to overstate its usefulness. In fact, I find it's often better not to have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non-map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
I don't think we have got the right explanation for our epiphany addiction here. We are "addicted" to epiphanies because that is what our community rewards its members for. Even if the sport is ostensibly about optimizing one's life, the actual sport is coming up with clever insights into how to optimize one's life. The incentive structure is all wrong. The problem ultimately comes down to us being rewarded more status for coming up with and understanding epiphanies than for such epiphanies having a positive impact on our lives.
No problem. I'm here for the entire summer. Feel free to contact me closer to the time and we'll organize a meetup then.
Hi. I will attend! I also wish to apologize for not having had the strength and courage to keep organizing this meetup after it went fallow :P
Hi. First of all thanks for the immensely helpful summary of the literature!
Since you have gone through so much of the literature, I was wondering if you have come across any theories about the functional role of happiness?
I'm currently only aware of Kaj Sotala's post some time ago about how happiness regulates risk-taking. I personally think happiness does this because risk-taking is socially advantageous for high-status folks. The theory is that happiness is basically a behavioural strategy employed by those who have high status. As in, happiness is performed, not pursued. Depression and anxiety would be the opposite of happiness. I remember some studies showing how, in primates, low-status individuals exhibit depression-like and anxious behavior.
It may simply be my ignorance of the literature, but it seems strange that all these (otherwise wonderful) empirical investigations into happiness are motivated only by a common folk theory of its function.
Hi, thanks for linking to your post here. It seems relevant to what I tweeted. But please help me understand what you are saying here. I'm having trouble with "Subgroups form that may value intentional suppression of their former values". Why would they value suppression of former values?
I'm guessing you're trying to say that subgroups will find their aesthetic more interesting because they experience their aesthetic as providing greater improvement in compressibility given preexisting inculcation in that aesthetic?
Surely the costs and benefits to everybody, including third parties, count. Surely the real issue is the ultimate economic efficiency of these prizes as a way to allocate our collective resources toward achieving the most collective benefit from solved problems.
Perhaps not what most religious folks would call its 'essence' (part of the problem is that they won't admit this, really), but certain religion-based social norms which are still relevant in today's world.
The question is not about philosophy but institutionalized philosophy.
a) Would those immature sciences not have been born if not for institutionalized philosophy? b) Do you expect new sciences to be born within the philosophy departments we have today?
Or do you expect rather that a new science is more likely to arise as a result of Big Questions being asked in the mundane disciplines of our empirical sciences?
I really like this. It emphasizes the fundamentally instrumental nature of rationality.
I was aware of that yes. But I was also assuming what you considered to be high prestige within this community was well calibrated.
What I had in mind was his devotion to the cause: even though it ultimately harmed that cause, we think it more than compensates for his lack of strategic foresight and his late graduation.
With that book, we think less of him for not contributing to it in a more direct way, even as we abstractly understand what a vital job it was.
Though of course that may just be me.
How many such communities can you be part of (because surely you don't have only one goal) and still not have their effect on you diluted? How many such communities don't fall prey to lost purposes? How many can monitor your life with enough fidelity that they can tell if you go astray?
I'm not so sure we accord Kaj less status overall for having taken more years to graduate, and more status for helping Eliezer write that book. Are we so sure we do? We might think so, and then reveal otherwise by our behavior.
This is a difficult problem. I have come to realize there is no single solution. The general strategy, I think, is to have consistency checks on what you are doing. Your subconscious can only trick you into seeking status and away from optimizing your goals by hiding the contradictions from you. But just as 'willpower' is not the answer, eternal vigilance isn't either. Rather, you pick up, via a mass of observation, the myriad ways in which you are led astray, and you fix these individually. Pay attention to something different you regularly do every day and check whether it comports with your goals. If you are lucky, your subconscious cannot trick you the same way twice. Though it is quite ingenious.
In other words you try to legislate your actions. But your subconscious will find loopholes and enforcement will slip.
I don't doubt that might be the ultimate cause, as different methods are amenable to different subject matters. But that does not affect the inference I want to draw here, that in doing abstract reasoning, one has to hold oneself to a ridiculously high standard of precision and rigor.
Supernaturalism is a distraction. Theologians defend supernaturalism as an indirect way of defending whatever God they want to believe in. See http://www.uncrediblehallq.net/2011/06/24/atheism-is-just-thinking-there-arent-any-gods/.
The Sequences are not specifically tailored to convince people of atheism. They are rather a more general set of tools for going about and reasoning about the world. So don't read too much atheism-relevance into the many philosophical ideas you see in there.
So are you suggesting their differences in success have to do with subject matter?
Hmm, why is this the case? I think I'm missing background knowledge here.
We talk. Discuss stuff usually discussed on LW. In a social setting.
Ahem, that embedded map on this page is not right! Why does it show New Delhi?
Bravo! That's insightful. Thank you.
(I placed the Nesov quote there in the hope of priming people not to immediately accept whatever senses of regret seem to 'make sense'. For example, merely looking for 'consistency'.)
Well I don't think it makes sense to regret one's entire past and be satisfied with merely that. You want to draw specific lessons from your past. An ideal agent might not need regret of course, being able to learn from past mistakes without a feeling of regret toward a specific event which gave rise to the general lesson. But I think humans might find it useful to have an event serve as a reminder of a lesson learned.
We can interpret Caplan's "no regret" (perhaps too charitably) as "my past does not contain any lessons wrt. me behaving in a certain way in order to have my children be a certain way". But this leaves room for other lesson-specific regret wrt. other genuine lessons.
As for the massive counterfactual of "Caplan having behaved even slightly differently in his past", I think it's quite useless and hence meaningless, at least with respect to Caplan and his children. It doesn't help him better raise his children, for example. It's like how not every English sentence corresponds to a meaningful statement.
Hmm, it depends on what you're trying to accomplish with the counterfactual, I think. Is there a particular reason why you think it would make more sense to empathize with the Caplan-counterfactual, independent of it being more 'consistent', I guess?
This seems likely. Still I think there's a lesson to be learned here :)
I don't see how it's even approximately the same mistake.
Caplan is correct in thinking that his children would be different persons than who they are now, if there were any alterations to his past. He is correct in thinking Caplan-now would not love these children. He also realizes that Caplan-counterfactual would love these children.
It's as with the pebble sorters. You could acknowledge that you would find prime number heaps morally correct if you were to become a pebble sorter yet deny that prime number heaps are morally correct.
I just don't think it's a very productive regret Caplan is having there. He should regret those parts of his past life where such regret would be instrumental in making his future better.
Whoops. Thanks for the correction.
I follow people who have a taste for insight. Mostly via twitter, but there are other ways as well.
In short, mold your life so that the path of least resistance is the path of maximum productivity.
Yup, that's how reality does it as well with the principle of least action.
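(For reference, and just the standard textbook statement rather than anything specific to this thread: a physical trajectory is one that makes the action stationary,

$$\delta S = \delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt = 0,$$

where $L$ is the Lagrangian. The "path of least resistance" framing above is of course only an analogy to this.)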
If the learning agent does not find any new knowledge, why does it make Martha report having learned something new? Why not make her feel as if nothing changed?
Why can't 2+2=4 also be an observed fact? It's just not a fact that is localizable in time or space.
I think instead of universal vs. contingent, it's better to think non-localizable vs. localizable. Or if you like, location-independent vs. location-dependent.
I like Voevodsky's pragmatism. The universe/mathematics doesn't explode when you find an inconsistency, only your current tools for determining mathematical truth. And that one might possibly locally patch up our tools for verifying proofs even in a globally inconsistent system.
I shall attend.
Our ancestors didn't have the benefit of modern medicine, so some causes of chronic pain may have just killed them outright. On the other hand, not all of the things causing chronic pain today were an issue back then.
I was actually using pain as an analogy for suffering. I know that chronic pain simply wasn't as much of an issue back then. Which was why I compared chronic pain to chronic suffering. If chronic suffering was as rare back then as chronic pain was (they both sure seem more common now), then there is no issue.
Are the current attention-allocational conflicts we modern people experience somehow more intractable? Do our built-in heuristics, which usually spring into action upon noticing the suffering signal, fail in such vexing attention-allocational conflicts?
Why do we need to have read your post and then employ this quite conscious and difficult process of trying to figure out the attention-allocational conflict? Why didn't the suffering just do its job, without us needing to apply theory to figure out its purpose and only then manage to resolve the conflict?
Fixing the problem requires removing chronic pain without blocking acute pain when it's useful.
I guess you can look at it as a type I - type II error tradeoff. But you could also simply improve your cognitive algorithms which respond to a suffering signal.
No, I didn't mean that the badness was bad and hence evolution would want it to go away. Acute suffering should be enough to make us focus on conflicts between our mental subsystems. It's as with pain: acute pain makes you flinch away from danger, but chronic pain is quite useless and possibly maladaptive, since it leads to needless brooding and wailing and distraction, which does not at all address the underlying unsolvable problem and might well exacerbate it.
Suffering happens all too readily IMHO (or am I misjudging this?) for evolution to not have taken chronic attention-allocational conflict into account and come up with a fix.
To take an example for comparison: is the ratio of chronic to acute pain roughly equal to the ratio of chronic to acute attention-allocational conflict? My intuitions fail me here, but I seem to personally experience more chronic suffering than chronic pain. But then again, I was diagnosed with mild depression before, and hence am not typical.
It seems to me that utility functions are not only equivalent up to affine transformations. Both utility functions and subjective probability distributions seem to take some relevant real-world factor into account, and it seems you can move these representations between your utility function and your probability distribution while still yielding exactly the same choices over all possible decisions.
In the case of discounting, for example, you could represent the uncertainty in a time-discounted utility function, or you could do it with your probability distribution. You could even throw away your probability distribution and have your utility function take into account all subjective uncertainty.
At least I think that's possible. Have there been any formal analyses of this idea?
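To make the discounting example concrete, here is my own toy formalization (not taken from any analysis I know of): an exponentially discounting agent ranks outcome streams by

$$V_1 = \sum_{t} \gamma^{t}\, u(x_t),$$

while an undiscounted agent who instead believes it survives each step only with probability $\gamma$ ranks them by

$$V_2 = \sum_{t} \Pr(\text{alive at } t)\, u(x_t) = \sum_{t} \gamma^{t}\, u(x_t).$$

The two agents make identical choices in every decision problem, yet one puts the $\gamma^{t}$ factor in its utility function and the other in its probability distribution.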
May I ask how the doubling time of the economy can suggest how we discount future utility?
One predictable way I have seen many rationalists (including myself) deceive themselves is by flooding their working memory and confusing themselves. They do this via nitpicking, pursuing arguments and counter-arguments in a rabbit hole depth-first fashion and neglecting other shallower ones, using long and grammatically complex sentences, etc. There are many ways. All you have to do is to ensure that you max out your working memory, which then makes you less able to self-monitor for biases.
How do you counter this? Do note that arguments are not systematically distributed wrt. their complexity. So it's best just to stick to simple arguments which you can fully comprehend, with some working memory capacity to spare.
It is easier to say new things than to reconcile those which have already been said.
Vauvenargues, Reflections and Maxims, 1746
You won't be great just by sitting there of course, but I suspect great people wouldn't be as great if they weren't driven by an urge to achieve greatness to some extent for its own sake.
Great people also like to countersignal how their greatness was never something they had in mind, and that they are just truly dedicated to their art.
OK. So as we have agreed, we will discuss our mini-presentations for next week's (yes it's weekly now) meetup here.
Mine is simple: it will be a summary of Schelling's The Strategy of Conflict :)
What's yours?
Thanks! That makes sense.
Fake-FAQs can be a method of misrepresenting arguments against your viewpoint. Like: "Check out all these silly arguments anti-consequentialists frequently use". Just an example, I'm not saying Yvain is doing this.
I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.
I unfortunately don't get the main point :(
Could you elaborate on or at least provide a reference for how a consideration of Schelling points would suggest that we shouldn't push the fat man?