Posts

[video] Kelly McGonigal on willpower 2012-06-17T10:39:22.817Z · score: 6 (7 votes)

Comments

Comment by bobertron on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-13T08:00:44.336Z · score: 0 (0 votes) · LW · GW

I understand your post to be about difficult truths related to politics, but you don't actually give examples (except "what Trump has said is 'emotionally true'") and the same idea applies to simplifications of complex material in science etc. I just happened upon an example from a site teaching drawing in perspective (source):

Now you may have heard of terms such as one point, two point or three point perspective. These are all simplifications. Since you can have an infinite number of different sets of parallel lines, there are technically an infinite number of potential vanishing points. The reason we can simplify this whole idea to three, two, or a single vanishing point is because of boxes.

[...] Because of this, people like to teach those who are new to perspective that the world can be summarized with a maximum of 3 vanishing points.

Honestly, this confused me for years

The author was lied to about the possible number of vanishing points in a drawing. But instead of recognizing the falsehood, he was left confused.
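For what it's worth, the underlying math is simple enough to sketch. Under a pinhole projection with focal length f (a made-up parameter here), every family of parallel lines with direction (dx, dy, dz), dz ≠ 0, converges to its own vanishing point (f·dx/dz, f·dy/dz). A box contributes only three edge directions, hence the "three-point" simplification:

```python
def vanishing_point(direction, f=1.0):
    """Image-plane vanishing point of all lines parallel to `direction`
    under a pinhole camera with focal length f."""
    dx, dy, dz = direction
    if dz == 0:
        return None  # lines parallel to the image plane never converge
    return (f * dx / dz, f * dy / dz)

# A box has exactly three edge directions, so at most three vanishing points:
box_edges = [(1, 0, 1), (0, 1, 0), (-1, 0, 1)]  # a box rotated 45° about the y-axis
print([vanishing_point(d) for d in box_edges])  # [(1.0, 0.0), None, (-1.0, 0.0)]

# ...but any other line direction adds a new one, hence infinitely many in general:
print(vanishing_point((2, 3, 1)))  # (2.0, 3.0)
```

(The `None` case is why a box sometimes shows only one or two vanishing points: edges parallel to the image plane stay parallel in the drawing.)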

Comment by bobertron on Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth · 2017-08-12T09:37:40.715Z · score: 0 (0 votes) · LW · GW

Suppose X is the case. When you say "X", your interlocutor will believe Y, which is wrong. So even though "X" is the truth, you should not say it.

Your new idea as I understand it: Suppose saying "Z" will lead your interlocutor to believe X. So even though saying "Z" is, technically, lying, you should say "Z", because the listener will come to have a true belief.

(I'm sorry if I misunderstood you or you think I'm being uncharitable. But even if I misunderstood, I think others might misunderstand in a similar way, so I feel justified in responding to the above concept.)

First, I dislike that approach because it makes things harder for people who could understand, if only others would stop lying to them, or who would prefer to be told the truth along the lines of "study macroeconomics for two years and you will understand".

Second, that seems to me to be a form of the-end-justifies-the-means reasoning that, even though I think of myself as a consequentialist, I'm not 100% comfortable with. I'm open to the idea that it's sometimes okay, and even proper, to say something that's technically untrue, if it results in your audience coming to have a truer world-view. But if this "sometimes" isn't explained or restricted in any way, that amounts to throwing out the idea that you shouldn't lie.

Some ideas on that:

  • Make sure you don't harm your audience by underestimating them. If you simplify or modify what you say, to the point that it can't be considered true any more, because you think your audience is limited in its capacity to understand the correct argument, make sure you don't make the truth harder to reach for those who can understand it. That includes the people you underestimated, people you didn't intend to address but who heard you all the same, and people who really won't understand now but will later. (Children grow up; people who don't care enough to follow complex arguments might come to care.)
  • It's not enough that your audience comes to believe something true. It needs to be justified true belief. Or alternatively, your audience should not only believe X but know it. (For a discussion of what is meant by "know", see most of the field of epistemology, I guess.) For example, if you tell people that voting for candidate X will give them cancer, and they believe you, they might come to the correct belief that voting for candidate X is bad for them. But saying that is still unethical.
  • I guess if you could give people justified true belief, it wouldn't be lying at all, and the whole idea is that you need to lie because some people are incapable of justified true belief on matter X. But then the belief should at least be "justified in some sense". In particular, your argument shouldn't work just as well if "X" were false.

Comment by bobertron on Game Theory & The Golden Rule (From Reddit) · 2017-07-29T16:25:21.146Z · score: 0 (0 votes) · LW · GW

When playing around in the sandbox, simpleton always beat copycat (using default values but a population of only simpleton and copycat). I don't understand why.
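I can't see the sandbox's actual code, so here is only a sketch under assumptions: Simpleton as win-stay-lose-shift, Copycat as tit-for-tat, the payoffs the game states (+2/+2 for mutual cooperation, +3/−1 for cheating a cooperator, 0/0 for mutual cheating), and its default chance of a mistaken move (I believe around 5%). The noise seems to be the interesting part: two copycats fall into retaliation spirals after a mistake, while simpleton recovers.

```python
import random

# Payoffs (row player's, column player's): assumed from the game's description.
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (-1, 3),
          ('D', 'C'): (3, -1), ('D', 'D'): (0, 0)}

def copycat(my_hist, opp_hist):
    """Tit-for-tat: cooperate first, then repeat the opponent's last move."""
    return opp_hist[-1] if opp_hist else 'C'

def simpleton(my_hist, opp_hist):
    """Win-stay-lose-shift: repeat your own last move if the opponent
    cooperated, switch it if the opponent cheated."""
    if not opp_hist:
        return 'C'
    if opp_hist[-1] == 'C':
        return my_hist[-1]
    return 'D' if my_hist[-1] == 'C' else 'C'

def play(strat_a, strat_b, rounds=1000, mistake=0.05, seed=0):
    """Iterated prisoner's dilemma with a per-move chance of miscommunication."""
    rng = random.Random(seed)
    ha, hb = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(ha, hb), strat_b(hb, ha)
        # miscommunication: each intended move flips with probability `mistake`
        if rng.random() < mistake:
            a = 'D' if a == 'C' else 'C'
        if rng.random() < mistake:
            b = 'D' if b == 'C' else 'C'
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        ha.append(a)
        hb.append(b)
    return score_a, score_b

print(play(simpleton, copycat))  # head-to-head with noise
print(play(copycat, copycat))    # copycats punish each other after mistakes
```

The sandbox adds reproduction on top of these pairwise scores, so this only hints at why simpleton can come out ahead; it is not a reproduction of the game's population dynamics.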

Comment by bobertron on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-23T18:01:29.677Z · score: 1 (1 votes) · LW · GW

"Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: if the OP describes a skill, then the first problem (the kid who wants to be a writer) is so very easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that might be a little dangerous. I feel my should center is already broken and/or doing me more harm than good.

"Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing."

That's definitely not good enough for me. I've never smoked in my life. I don't think smoking is worth it. And if I were a smoker, I don't think I would stop just because I think it's a net harm. And I do think that, because I wouldn't want to think about the harm of smoking or the difficulty of quitting, I'd avoid learning about either of the two.

ADDED: The first meaning of "I should-1 do X" is "a rational agent would do X". The second meaning (idiosyncratic to me) of "I should-2 do X" is that "do X" is the advice I need to hear. should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret "I should-1 do X" to mean that I should feel guilty if I don't do X, which is definitely not helpful.

Comment by bobertron on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-20T23:05:47.027Z · score: 2 (2 votes) · LW · GW

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself, in some situations, to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop sending it queries.

Comment by bobertron on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-18T10:56:26.424Z · score: 2 (2 votes) · LW · GW

Here are some things that I, as an infrequent reader, find annoying about the LW interface.

  • The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.
  • My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.
  • I find the most recent open thread pretty difficult to find in the sidebar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of r/discussion.

Comment by bobertron on Sample means, how do they work? · 2016-12-18T10:41:17.015Z · score: 0 (0 votes) · LW · GW

I don't get this (and I don't get Benquo's OP either; I don't really know any statistics, only some basic probability theory).

"The process has a 95% chance of generating a confidence interval that contains the true mean." I understand this to mean that if I run the process 100 times, then 95 of the resulting CIs contain the true mean. Therefore, if I look at a random CI among those 100, there is a 95% chance that it contains the true mean.
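The procedure-level reading can be checked numerically: the 95% is a property of the interval-generating process. A quick simulation (a sketch assuming normally distributed data with known standard deviation, so each interval is mean ± 1.96·σ/√n) shows roughly 95 out of every 100 runs covering the true mean:

```python
import math
import random

def confidence_interval(sample, sigma):
    """95% CI for the mean of normal data with known standard deviation."""
    n = len(sample)
    m = sum(sample) / n
    half = 1.96 * sigma / math.sqrt(n)  # 1.96 = two-sided 95% normal quantile
    return (m - half, m + half)

rng = random.Random(42)
true_mean, sigma, n = 10.0, 2.0, 30

trials = 1000
covered = 0
for _ in range(trials):
    sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
    lo, hi = confidence_interval(sample, sigma)
    covered += lo <= true_mean <= hi

print(covered / trials)  # close to 0.95
```

(With unknown σ one would use a t-interval instead; the true mean, σ and sample size here are arbitrary illustration values.)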

Comment by bobertron on Seeking better name for "Effective Egoism" · 2016-11-26T20:59:27.679Z · score: 7 (7 votes) · LW · GW

"Effective self-care" or "effective well-being".

Okay. The "effective" part in "Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (as in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "effective self-care" would be about is such things as health, fitness, happiness, positive psychology, life extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism or self-care.

"Effective Egoism" sounds like the opposite of Effective Altruism. Like they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care, both amongst others, since altruism is about making others better off, and amongst themselves, because if you take good care of yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.

On the negative side, both terms might sound too medical. And self-care might sound too limited compared to what you might have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels up to "normal" or "average", instead of super duper high.

Comment by bobertron on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-11T20:45:27.170Z · score: 3 (3 votes) · LW · GW

None of this is a much my strongly held beliefs as my attempt to find flaw with the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument isn't that the status quo in the level B game is perfect. It isn't that Trump is a bad choice because his level B strategy is taking too much risk and therefore bad. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on and even when he finds out he will be unfit to play in that game".

Comment by bobertron on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-11T20:37:40.128Z · score: 1 (1 votes) · LW · GW

As I understand it, you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott's and Yudkowsky's reasoning was sound, given their ideologies.

Comment by bobertron on Non-Fiction Book Reviews · 2016-08-16T18:02:13.932Z · score: 0 (0 votes) · LW · GW

I read those two books after LW. Assuming you have read the sequences: it wasn't a total waste, but from my memory I would recommend What Intelligence Tests Miss only if you have an interest specifically in psychology, IQ or the heuristics-and-biases field. I would not recommend it simply because you have a casual interest in rationality and philosophy ("LW-type stuff") or if you've read other books about heuristics and biases. The Robot's Rebellion is a little more speculative and therefore more interesting. The Robot's Rebellion and What Intelligence Tests Miss also have a significant overlap in covered material.

Comment by bobertron on Non-Fiction Book Reviews · 2016-08-14T18:30:37.870Z · score: 0 (0 votes) · LW · GW

I haven't read "Good and Real" or "Thinking, Fast and Slow" yet, because I think that I won't learn much new as a long-term Less Wrong reader. In the case of "Good and Real", part of it seems to be about physics, and I don't think I have the physics background to profit from that (I feel a refresher on high-school physics would be more appropriate for me). In the case of "Thinking, Fast and Slow", I have already read books by Keith Stanovich (What Intelligence Tests Miss and The Robot's Rebellion) and some chapters of academic books edited by Kahneman.

Does anyone think those two books are still worth my time?

Comment by bobertron on Open Thread May 2 - May 8, 2016 · 2016-05-07T20:57:08.988Z · score: 3 (3 votes) · LW · GW

A Suite of Pragmatic Considerations in Favor of Niceness

Comment by bobertron on Lesswrong 2016 Survey · 2016-04-01T18:38:36.390Z · score: 22 (22 votes) · LW · GW

Me, too! I've taken the survey and would like to receive some free internet points.

Comment by bobertron on PSA: even if you don't usually read Main, there have been several worthwhile posts there recently · 2015-12-19T21:35:28.554Z · score: 7 (7 votes) · LW · GW

I tend to read http://lesswrong.com/r/all/new/

Comment by bobertron on Help with understanding some non-standard-LW philosophy viewpoints · 2015-12-04T21:26:48.517Z · score: 1 (1 votes) · LW · GW

"Verständnis" seems totally wrong to me. It's from the verb "verstehen" (to understand, to comprehend). It usually means "understanding" ("meinem Verständnis nach" -> "according to my understanding"). Maybe if you use it in a sentence?

I think "Vermutung" (and its synonyms) is pretty much what I was looking for. Maybe it's even better than "belief" in some ways, since "belief" suggests a higher degree of confidence than "Vermutung" does.

"unterstützen" (to support something) seems right, thanks. But it's useful to have nouns. Also, "das unterstützt deine Behauptung nicht" ("that doesn't support your claim") is much wordier than "that's not evidence".

"Evidenz ist all das, was eine Vermutung unterstützt." ("Evidence is everything that supports a conjecture.")

Comment by bobertron on Help with understanding some non-standard-LW philosophy viewpoints · 2015-12-04T19:54:35.362Z · score: 0 (0 votes) · LW · GW

A different German speaker here.

In English you have a whole cloud of related words: mind, brain, soul, I, self, consciousness, intelligence. I don't think it's much of a problem that German has no perfect match for "mind". The "mind-body problem" would be the "Leib-Seele-Problem", where "Seele" would usually be translated as "soul". The German Wikipedia page for philosophy of mind does use the English word "mind" once, to distinguish that meaning of "Geist" from a different concept from Hegel that I had never heard of before ("Weltgeist").

Then again, I don't have much need to discuss philosophy of mind with the people around me, so maybe that's why I don't feel the need for a German word that is more like "mind".

But I do have massive problems with talking about epistemological concepts in German. Help from other German speakers would be very welcome. I don't know how to talk about "degrees of belief" in German. Or how to call those things that get updated when we learn new evidence ("beliefs" in English).

If you translate the noun "a belief" into German ("ein Glaube") and back into English, it will always come out as "faith" (as in "the Buddhist faith" or in "having faith in redemption"). A different candidate would be "Überzeugung", but that literally means conviction (something you believe with absolute certainty). Hardly a good word for talking about uncertainty. Wikipedia uses "Grad an Überzeugung" to translate "degrees of belief", but gives the English in parentheses to make sure the meaning is clear. I don't like it. "Eine Überzeugung" sounds wrong.

"Evidence" is another difficult one. The closest might be "Beweis", but that means "proof". Then there is "Evidenz", but I've only ever seen that word used to translate "evidence based medicine". The average German would be unlikely to know that word.

But I wonder if Less Wrong has given me a skewed view of the English language. Maybe the way LW uses "belief" wouldn't feel so natural to the average native speaker. Maybe the average native speaker has quite a different notion of what "evidence" means.

Comment by bobertron on Help with understanding some non-standard-LW philosophy viewpoints · 2015-12-04T19:01:24.102Z · score: 1 (1 votes) · LW · GW

I intuitively feel that there really are objective morals (or: objective mathematics, actual free will, tables and chairs, minds).

Therefore, there really are objective morals (etc.).

"Morals" is just a word. But unlike some other words, it's not 100% clear to me what it means. There is no physical entity that "morals" clearly refers to. There is no agreed upon list of axioms that define what "morals" is. That's why, to me, "there are objective morals" doesn't feel entirely like a factual statement.

I might justify that there are objective morals by relying on my intuition. But that's not because I think intuitions are reliable sources of knowledge. That's because I think intuitions are the correct normative source of how we use words (together with common usage, I guess).

It's still possible that my intuitions contradict each other, or that they contradict facts. So they are not sufficient to say with confidence that objective morals exist. But they are relevant.

Comment by bobertron on Examples of growth mindset or practice in fiction · 2015-10-01T21:00:23.321Z · score: 0 (0 votes) · LW · GW

Naruto is the opposite of Tsuioku Naritai. It's the story of "everyone had something to protect and practiced like mad, but none of it made a huge difference and most everyone would have been about as powerful anyway".

But the series clearly wants to be "Tsuioku Naritai". The good guys all value hard work. Maybe the show is hypocritical, then.

I'm not sure if the message that sticks with the people who watch Naruto is what the characters say (work hard) or how the show actually develops (be born special).

Comment by bobertron on Help me test out my Bayes Academy game · 2015-09-27T19:25:52.540Z · score: 2 (2 votes) · LW · GW

I actually really like that you have to spend a resource to learn new information and that the score is dependent on luck. I.e. you use limited resources to optimize the gamble you are making. That seems like a very good description of how life works, only, it's all transparent and quantified in your game.

Some suggestions:

  • In the tutorial, why do I first get to read a description of a picture and then I'm presented with the picture? Obviously, it should be the other way around.
  • You should be able to advance the text with the mouse.
  • It should be easier to distinguish new text from old. I think in visual novels, the text box never "scrolls". If the new text doesn't fit into the text box, or to make a new paragraph, the text box is cleared. You could make separate textboxes for the current message and the history.
  • The confusing notation is a real distraction and saps away a lot of the potential fun. Understanding the notation actually seems more interesting than winning the game, but I have too little information to understand it, which leads to frustration. Why are there two big boxes with normal nodes? Why do normal nodes have all those boxes instead of a simple bar that shows the probability? Why do Bayes nodes have all those rows instead of just two bars? What are the grey bars? How do 'and' and 'or' nodes work? I would think that one input corresponds to the vertical division and one input to the horizontal division. It should be more obvious which node is which (by having the input enter that side of the box). The connections between nodes did not have arrows. If I understood the game correctly, arrows would help distinguish inputs from outputs.
  • The effect of clicking on one node shouldn't be instant. At first, it should probably go step by step: you click on a node and reveal its truth value (some text appears explaining which node changed and why). Press a key -> the next affected node gets updated, until all affected nodes are updated. Later you don't have to click; there is a small pause between each change. That way you could see the effect of measuring a node and understand why the effect was the way it was, instead of trying to work that information out for yourself while only being able to see the aftermath.
  • You should make it more linear. Put the tutorial and the main game into one. I don't see the use of this division between introductory and intermediate psychology.
  • Have the player start out with much simpler networks and infinite energy.
  • Introduce new types of nodes during the game, not all at once in the tutorial. Every time you introduce something new, go back to simple networks with unlimited energy.

Of those, explaining or simplifying the notation seems the most important to me.
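On the notation question: I can't see the game's internals, so this is only a guess at what a single Bayes node has to display, namely a Bayes' rule update over one piece of evidence (function name and numbers hypothetical):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# a node starting at P(H) = 0.5 observes evidence four times likelier under H:
print(bayes_update(0.5, 0.8, 0.2))  # 0.8
```

If each node's display were reduced to these three numbers and the resulting posterior bar, most of the boxes-within-boxes confusion described above might disappear.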

Comment by bobertron on Ideas for rationality slogans? · 2015-09-20T19:43:01.834Z · score: 3 (3 votes) · LW · GW

change your mind, get a cookie

admitting you're wrong = winning/learning

conservation of expected evidence (add formula)

The path to truth is a random walk

discussions are random walks

what is true is already so

rationality: outcomes > rituals of thought

what can be destroyed by truth, should be

update beliefs incrementally

beliefs should pay rent

the cat's alive, curiosity got framed

optimize everything

delta knowledge = surprise

minimize future surprise

A diagram like this with some actual data e.g. about P(autism|vaccine) or P(violence|video games).

A matrix representation of the prisoner's dilemma with an arrow pointing to (cooperate, cooperate) saying "let's meet here".
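For the conservation-of-expected-evidence slogan above, the formula is P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E): your current belief must already equal the expectation of your future belief. A numeric check with made-up numbers:

```python
def expected_posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Expectation of P(H | observation) over the two possible observations."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)
    # weight each posterior by how likely that observation is
    return p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)

# whatever the likelihoods, the expected posterior equals the prior:
print(expected_posterior(0.3, 0.9, 0.2))  # 0.3 (up to float rounding)
```

In slogan form: if you expect the evidence to raise your belief, its absence must lower it, in exact balance.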

Comment by bobertron on Ideas for rationality slogans? · 2015-09-20T19:09:03.651Z · score: 3 (3 votes) · LW · GW

I really like "The facts don't know whose side they're on", though the other two might require Less Wrong knowledge.

Comment by bobertron on Make your bad habits the villains · 2015-09-06T17:52:00.203Z · score: 1 (1 votes) · LW · GW

Following up on my own post: I was sceptical because the examples AshwinV provided were examples that lend themselves to punishing oneself and using guilt, shame etc. But by flipping the title of the post to "Make good habits the heroes", all that criticism becomes irrelevant and AshwinV's idea remains the same. I think that is very related to the idea of identity, which has been discussed previously here on Less Wrong. Use Your Identity Carefully is a good and relevant example.

Comment by bobertron on Make your bad habits the villains · 2015-09-06T12:56:57.854Z · score: 3 (3 votes) · LW · GW

First, your markup is broken. I can see the link syntax instead of the links. Also, the first link is to an article by Phil Goetz, not Eliezer Yudkowsky.

Now about the actual content. I'm all for trying to use one's natural tendencies, instead of just trying to compensate for them. But I'm critical of the concrete examples you gave. What you are trying to do seems to be to motivate yourself through shame and guilt. And no one seems to be in favour of that. Some reasons why I think it's a bad idea:

  1. I believe you train yourself to be judgemental, not just about yourself but about others. I see no reason why the behaviour of judging your own actions wouldn't generalize to judging other people's behaviour.
  2. Punishing yourself is unlikely to be effective, because you are unlikely to do it every single time you transgress. AFAIK punishment works best when it's a reliable consequence of the behaviour you want to control ('continuous punishment' in behavioural psychology). It works very poorly otherwise, because every other time, the behaviour still gets reinforced. E.g. every time you take a cookie out of the cookie jar (a habit you want to minimize because you are on a diet) and you forget to conjure up a mental image of Dudley Dursley (a fat character from Harry Potter), you still get rewarded with a delicious cookie.
  3. You start to associate related concepts with the punishment. Essentially, you are building an ugh field. Suppose you associate procrastination with laziness. What else do you associate procrastination with? With the very tasks that you are putting off. Now even thinking about doing the dishes makes you feel worse than you felt before you conjured up the image of a disgusting, messy person dying of food poisoning in their never-cleaned house.
  4. It simply doesn't feel good.

See also: a summary of what /u/pjeby says about the topic, many posts on http://mindingourway.com/

If you never apply the negative image (the "enemy") to yourself, that might be a slightly different matter. Maybe the image of an alcoholic can help keep you sober if you never drink alcohol in the first place. But even then, you learn to be judgemental of people, and, should you start drinking, you will have the aforementioned problems with punishment.

EDIT: corrected "disgress" to "transgress"

Comment by bobertron on Deworming a movement · 2015-08-30T11:23:17.622Z · score: 0 (0 votes) · LW · GW

I've heard of the controversy. I think it was mentioned in a link post on slatestarcodex, and obviously on GiveWell's blog.

the community seems to be comprehensively inept, poor at marketing, extremely insular, methodologically unsophisticated but meticulous, transparent and well-intentioned

I find it stylistically strange to have a long list of negative adjectives end with two positive ones (transparent and well-intentioned are good things, right?) without any explanation. Wouldn't one say something like "These things suck:...., but on the good side there is also ...."?

More importantly, you do not explain why "EA movement building does more harm than good".

I understand you to mean "EA movement building does more harm than good, because the EA movement does more harm than good" (stop me right there if I misunderstood you). Why, though?

As I understood it, no one argues that de-worming does more harm than good. The argument is only that it is ineffective, not harmful. If you want to argue that de-worming takes away resources that would be better spent elsewhere, you have to actually make that argument.

Could you explain what's so bad about GiveWell's reaction, particularly the blog post you linked? Not just where you disagree with their analysis, but how that post is evidence that GiveWell is more harmful than beneficial.

Finally, even if the EA-movement is wrong about de-worming, there are other interventions that EA tends to support. Your post isn't very convincing right now because it doesn't mention that fact at all. Do you think that all interventions popular among EAs are on as shaky a ground as de-worming (or worse)?

Comment by bobertron on Predict - "Log your predictions" app · 2015-08-27T17:16:24.405Z · score: 0 (0 votes) · LW · GW

Great! Works so far.

Comment by bobertron on Predict - "Log your predictions" app · 2015-08-23T15:22:05.142Z · score: 1 (1 votes) · LW · GW

Sounds nice. Making predictions about personal events makes more sense to me than predicting e.g. elections or sports events (because (a) I don't know anything about those, and (b) I don't care about them). But I don't like the idea of making them (all) public, like on PredictionBook. Though a PredictionBook integration sounds like an obvious fancy feature.

And I liked what I saw the one second I could use the app ;-)

After installing, it crashed when I pressed "save" on the first prediction. Now it crashes right on startup. I get to see the app for a moment, but I can't do anything. After deleting the data (from the Android settings) I can make a new prediction, but again, it crashes after pressing "save".

I installed from the apk-link you provided.

I've got a Moto G (2nd generation) with Android 5.0.2.

Hope that helps. And if anyone can tell me how to diagnose the problem in more detail, I'd be interested in that, too.

Comment by bobertron on "Spiritual" techniques that actually work thread · 2015-03-13T16:16:49.982Z · score: 1 (1 votes) · LW · GW

Sounds like it's the same or similar to what some modern practicing stoics do.

Comment by bobertron on Counterfactual trade · 2015-03-11T11:19:14.235Z · score: 1 (1 votes) · LW · GW

No, your real friend is the one you helped. The friend that helps you in a counterfactual situation where you are in trouble is just in your head, not real. Your counterfactual friend helps you, but in return you help your real friend. The benefit you get is that once you really are in trouble, the future version of your friend is similar enough to the counterfactual friend that he really will help you. The better you know your friend, the likelier this is.

I'm not saying that that isn't a bit silly. But I think it's coherent. In fact it might be just a geeky way to describe how people often think in reality.

Comment by bobertron on Open thread, Feb. 23 - Mar. 1, 2015 · 2015-02-25T09:50:55.252Z · score: 2 (2 votes) · LW · GW

I just read a book on behaviour, and that's the kind of thing I would expect to read in that book: attention is generally a reinforcer. Swearing can be reinforced by attention. When you stop paying attention to swearing, swearing stops (extinction). Of course that will only stop the child from swearing when talking to you, not when they're at school.

Comment by bobertron on 2014 Less Wrong Census/Survey · 2014-10-25T21:50:36.621Z · score: 39 (39 votes) · LW · GW

Done

Comment by bobertron on Improving the World · 2014-10-11T10:38:31.099Z · score: 2 (2 votes) · LW · GW

For the German speakers, this is the introductory paragraph I already wrote for the blog: [...]

I'm not much of a writer, and this might not be the final version, but I still like giving advice.

I'd really like to see some citations and references here. Are all those opinions based only on you own observations or also from things you have read? Since I don't have children, I'm not interested in the answer to that question, but your readers will be.

Values that were instilled during childhood are also called into question during puberty by natural brain development

I would remove "auch durch die natürliche Gehirnentwicklung" ("also by natural brain development") here, since it doesn't really add any information. Unless you perhaps had some reference to scientifically back up your claim (that values are called into question during puberty); then that could go there instead.

According to my understanding of evolutionary psychology, this natural behaviour benefits young adults, because being self-determined they have more (reproductive) success.

Saying that something is of evolutionary value because it increases reproductive success is (at least nearly) a tautology, so it doesn't really need to be said. That something which increases evolutionary success must benefit the individual (you write that it "benefits [...] young adults") is, as far as I know, not true (The Selfish Gene and all that). What I would really like to know here is why, in your opinion, self-determination increases evolutionary success.

Comment by bobertron on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-07T18:07:29.107Z · score: 1 (1 votes) · LW · GW

I was wondering why. It doesn't seem all that useful, unless you are abnormally bad at color perception or you have a job or hobby that somehow needs good color perception (something in art or design?). I suppose it's fun and interesting to see how well that kind of thing can be trained, and how it changes your experience, but I was wondering if there was more to it.

I have written about this on LW in the past.

Here and here.

Comment by bobertron on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-07T15:40:03.956Z · score: 1 (1 votes) · LW · GW

Can you tell me something about your color perception deck? Are you trying to train yourself to be better at distinguishing (and naming?) colors for some reason?

Comment by bobertron on [Link] Animated Video - The Useful Idea of Truth (Part 1/3) · 2014-10-05T23:17:00.997Z · score: 6 (6 votes) · LW · GW

I like the animation and the voice, but I dislike the text. I don't need it, and it really distracts from the animations. And if I did need to read along with what you say, YouTube has a subtitle feature that would be much less distracting and could be turned off. I suppose I've seen videos using the style you attempt here, but I'm not sure I like them either, and they typically use text only, while you also use pictures.

Oh, and I suppose you would be faster in producing those videos if you were to give up on the text.

Comment by bobertron on Overly convenient clusters, or: Beware sour grapes · 2014-09-02T14:24:15.756Z · score: 4 (4 votes) · LW · GW

There is this idea (I think it's a Stoic one) that's supposed to show that no one ever has anything to worry about. It goes like this:

Either you can do something about it, in which case you don't have to worry, you just do it. Or there is nothing you can do, in which case you can simply accept the inevitable.

It throws out the possibility that you don't know whether there is anything you can do (and what, precisely). As I see it, worry is precisely the (sometimes maladaptive) attempt to answer that question.

Every false dichotomy is another example of this failure mode (if I understood you correctly).

Comment by bobertron on Bayesianism for humans: prosaic priors · 2014-08-26T08:43:54.727Z · score: 4 (4 votes) · LW · GW

The idea that it's a habit is, admittedly, a boring one.

But when I read that industriousness and creativity can be learned, as described in the learned industriousness Wikipedia article, I was quite surprised. So the idea isn't boring to me at all.

Comment by bobertron on Bayesianism for humans: prosaic priors · 2014-08-25T11:18:33.048Z · score: 4 (4 votes) · LW · GW

I know it's just an example, but concerning

I find it hard to do something I consider worthwhile while on a spring break

maybe you have learned to be lazy on spring break? I mean, the theory that it's a habit seems more prosaic to me than tiredness or something about "activation energy".

Comment by bobertron on Why are people "put off by rationality"? · 2014-08-07T09:09:51.204Z · score: 0 (0 votes) · LW · GW

Such a person would probably strongly [missing verb?] rationality, rationalists, and the complex of ideas surrounding rationality, for probably understandable reasons

Since I kind of like your comment, I'd like to know how that sentence should have read. Strongly dislike, hate, mistrust?

Comment by bobertron on Causal Inference Sequence Part 1: Basic Terminology and the Assumptions of Causal Inference · 2014-08-05T09:18:15.970Z · score: 1 (1 votes) · LW · GW

The "A=a" stands for the event that the random variable A takes on the value a. It's another notation for the set {ω ∈ Ω | A(ω) = a}, where Ω is your probability space and A is a random variable (a mapping from Ω to something else, often R^n).

Okay, maybe you know that, but I just want to point out that there is nothing vague about the "A=a" notation. It's entirely rigorous.
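To make the notation concrete, here is a minimal sketch of my own (a toy example, not from the original discussion) using two fair coin flips as the probability space Ω:

```python
from fractions import Fraction

# Finite probability space: two fair coin flips.
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
prob = {w: Fraction(1, 4) for w in omega}

# Random variable A: the number of heads (a map from omega to the reals).
def A(w):
    return sum(1 for flip in w if flip == "H")

# The event "A = a" is just the set {w in omega | A(w) = a}.
def event(a):
    return {w for w in omega if A(w) == a}

# P(A = 1) is the probability mass of that set.
p = sum(prob[w] for w in event(1))
print(p)  # 1/2
```

So "P(A = 1)" is shorthand for the probability of a perfectly well-defined set of outcomes; nothing about the notation is vague.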

Comment by bobertron on Value ethics vs. agency ethics · 2014-07-27T18:57:06.192Z · score: 0 (0 votes) · LW · GW

Societies often punish people who refuse to help. Why not consider people who break the law as defectors?

In fact, that would be an alternative (and my preferred) way to fix your second and third objections to value ethics. Consider everyone who breaks the laws and norms of your community a defector. Where I live, torture is illegal and most people think it's wrong to push the fat man, so pushing the fat man is (something like) breaking a norm.

Have you read "Whose Utilitarianism?"? Not sure if it addresses any of your concerns, but it's good and about utilitarianism.

Comment by bobertron on Value ethics vs. agency ethics · 2014-07-27T08:59:21.273Z · score: 1 (1 votes) · LW · GW

So I'm still a defector and society would do well to defect against me in proportion

Which, of course, they wouldn't do. They wouldn't have much sympathy for the guy sitting on the bear repellent who chose not to help. In fact, refusing to help can be illegal.

I suppose in your terms, you could say that the guy sitting on the repellent is a defector, and therefore it's okay to defect against him.

Comment by bobertron on Tapestries of Gold · 2014-04-28T14:19:08.386Z · score: 0 (0 votes) · LW · GW

OK, I didn't understand the post. It seems like you are saying that the blue lines don't have any direction, and then you go on to paint (directed) arrows over them. Is this a mistake? Did you want to make the green arrows double-headed or something like that? I suppose that not only does the blue line not have a direction, it also doesn't have an order? E.g., could you have written, from top to bottom, "Psychology Physiology Chemistry Morality Physics Neuroscience"? It's clearly no accident that you wrote those "sets of things that exist" in that familiar order, but is there any way to justify that order if the blue line represents identity? Or is it simply a meaningless convention?

But I like how the beige threads look like vandalism. Tells me what I should think about supernaturalism. Would be even more impressive if you used the MS Paint spray-can tool.

Comment by bobertron on Be comfortable with hypocrisy · 2014-04-08T20:59:48.408Z · score: 2 (2 votes) · LW · GW

As I understand it, a bodhisattva also enters nirvana eventually, so I don't see the hypocrisy.

Comment by bobertron on Open thread, 24-30 March 2014 · 2014-03-25T11:14:17.783Z · score: 2 (2 votes) · LW · GW

There are some blogs mentioned on the wiki.

Comment by bobertron on On not diversifying charity · 2014-03-14T21:56:15.898Z · score: 0 (0 votes) · LW · GW

Keeping money for yourself can be thought of as a [small] charity

Oh, interesting. I assumed the reason I keep anything beyond the bare minimum to myself is that I'm irrationally keeping my own happiness and the well-being of strangers as two separate, incomparable things. I probably prefer to see myself as irrational compared to seeing myself as selfish.

The concept I was thinking of (but didn't quite remember) when I wrote the comment was Purchase Fuzzies and Utilons Separately.

Comment by bobertron on On not diversifying charity · 2014-03-14T13:28:41.892Z · score: 0 (0 votes) · LW · GW

Most people set aside an amount of money they spend on charity, and an amount they spend on their own enjoyment. It seems to me that whatever reasoning is behind splitting money between charity and yourself can also support splitting money between multiple charities.

Comment by bobertron on On not diversifying charity · 2014-03-14T13:15:32.794Z · score: 0 (0 votes) · LW · GW

Ergo, if you're risk-averse, you aren't a rational agent. Is that correct?

Comment by bobertron on Halloween thread - rationalist's horrors. · 2013-11-03T21:01:32.641Z · score: 3 (3 votes) · LW · GW

Reading that ZFC has countable models spooked me. How can uncountable sets exist while an axiomatization of set theory has a countable model? For a fraction of a second it made me doubt that mathematics was real. For a few seconds after that, I was thinking of giving up on understanding maths, or at least logic. Then I realized that there had to be a trick to it that made everything make sense again.

Comment by bobertron on How habits work and how you may control them · 2013-10-12T23:51:43.095Z · score: 0 (0 votes) · LW · GW

I think that fits what I've read about worry.

From Chapter nine of "Resilience–How to Survive and Thrive in Any Situation A Teach Yourself Guide (Teach Yourself: Relationships & Self-Help) by Donald Robertson":

When we worry, we perceive danger, feel anxious, and naturally try to problem-solve in order to remove the perceived threat and achieve a sense of safety. As long as we believe future problems are threatening and remain unsolved there’s a tendency for our attention to automatically return to them as ‘unfinished business’, which partially explains why worry episodes tend to keep recurring.

[...]

Worry can therefore be seen as a failed attempt to avoid future dangers by mentally problem-solving and preparing to cope with them. Hence, people often feel reluctant to stop worrying because at some level they assume it helps to protect them against looming threats by giving them an opportunity to problem-solve and rehearse coping strategies, although it seldom does so very effectively and normally causes anxiety to escalate instead.

[...]

Rumination can be spotted and postponed in a similar way to worry