Comment by savageorange on CEV-tropes · 2014-09-24T14:44:56.579Z · LW · GW

There is a reasonable question about why it is that "For group decisions that require unanimity very little passes the process." How much of this effect is an honest difference in values, and how much is a mere linguistic artifact caused by our tiny communication bandwidth and how sloppily we use it?

IMO any CEV algorithm that had any hope of making sense would have to ignore words and map actual concepts together.

Comment by savageorange on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-23T11:01:13.106Z · LW · GW

We don't just use 'winning' because, well.. 'winning' can easily work out to 'losing' in real-world terms. (Think of a person who alienates everyone they meet through their extreme competitiveness. They are focused on winning, to the point that they sacrifice good relations with people. But this is both a) not what is meant by 'rationalists win' and b) a highly accessible definition of winning -- the naive "Competition X exists. Agent A wins, Agent B loses".) VASTLY more accessible than 'achieving what actually improves your life, as opposed to what you merely want or are under pressure to achieve'.

I'd like to use the word 'winning', but I think it conveys even less of the intended meaning than 'rationality' to the average person.

Comment by savageorange on How do you take notes? · 2014-07-08T03:23:31.584Z · LW · GW

I just want to clarify here -- are you aware that personal wikis and server software such as MediaWiki are different classes of software? The most relevant reason to use personal wiki software rather than wiki-serving software is: no server == no consequent security holes and system load, and no need to do sysadmin-type stuff to get it going. Personal wiki software is generally just an ordinary program, meaning it has its own GUI and can have features that it would be insecure to expose over the internet.

Personally I have found Zim a little lacking when I wanted tables (it doesn't currently support them, except through diagrams), but it supports most other things I've wanted, including some rather exotic stuff.

Anyway I mainly commented because using MediaWiki only for your own personal notes seems rather like cracking a walnut with a sledgehammer.

Comment by savageorange on How do you take notes? · 2014-06-26T02:38:03.368Z · LW · GW

Is there some reason you use MediaWiki rather than a personal wiki software (for example Zim)?

Comment by savageorange on Open Thread: March 4 - 10 · 2014-03-06T02:20:31.110Z · LW · GW

Simple part first: yes, I claim that every city has or will soon have near-ubiquitous internet access. If you need to deny your future self the ability to choose to use the internet easily, you won't be able to live in a city.

One doesn't follow from the other.

Take out any built-in wifi hardware; get a usb wireless module. These are tiny enough that you can employ almost any security/inconvenience measure on them. Decide which security/inconvenience measures are appropriate. Done.

Comment by savageorange on Dark Side Epistemology · 2014-03-04T00:44:31.664Z · LW · GW

Evidence that would substantially inform a simulation of the enforcement of those beliefs. For example, history provides pretty clear evidence of the ultimate result of fascist states/dictatorships, partisan behaviour, and homogeneous group membership. The qualities found in this projected result are highly likely to conflict with other preferences and beliefs.

At that point, the person may still say 'Shut up, I believe what I want to believe.' But that would only mean they are rejecting the evidence, not that the evidence doesn't apply.

Comment by savageorange on Polling Thread · 2014-03-02T00:17:50.696Z · LW · GW

I'd be a lot more inclined to respond to this if I didn't need to calculate probability values (ie. could input weights instead, which were then normalized.)

To that end, here is a simple Python script which normalizes a list of weights (given as commandline arguments) into a list of probabilities:

import sys

# Read weights from the command-line arguments and normalize them to sum to 1.
weights = [float(v) for v in sys.argv[1:]]
total_w = sum(weights)
probs = [v / total_w for v in weights]
print('Probabilities : %s' % ", ".join(str(v) for v in probs))

Produces output like this:

Probabilities : 0.1, 0.2, 0.3, 0.4

Comment by savageorange on The Rationality Wars · 2014-03-01T22:59:46.949Z · LW · GW

Yes, that's roughly the reformulation I settled on. Except that I omitted 'have the habit' because it's magical-ish -- desiring to have the habit of X is not that relevant to actually achieving the habit of X; rather, simply desiring to X strongly enough to actually X is what results in the building of a habit of X.

Comment by savageorange on Rational Evangelism · 2014-03-01T01:53:19.411Z · LW · GW

But in point of fact, the way it works out is that Christianity tends to make people more generous, caring and trustworthy than atheism does. So it goes.

But this is not, in point of fact, the case. Citation very much needed.

I don't disagree that (strong, ie. 'God does NOT exist' rather than 'there is no evidence that God exists') atheism attracts some jerks, btw. Any belief that is essentially anti-X has the problem of attracting at least some people who simply enjoy punishing belief in X.

Comment by savageorange on The Rationality Wars · 2014-02-28T14:44:09.827Z · LW · GW

Upvoted, but I would like to point out that it is not immediately obvious that the template can be modified to suit instrumental rationality as well as epistemological rationality; at a casual inspection, the litany appears to be about epistemology only.

Comment by savageorange on Open Thread February 25 - March 3 · 2014-02-27T04:15:36.304Z · LW · GW

Perhaps his server is underspecced? It's currently slowed to an absolute c r a w l. What little I have seen certainly looks worthwhile, though.

Comment by savageorange on Open Thread February 25 - March 3 · 2014-02-27T04:13:12.310Z · LW · GW

I like this idea, but am seriously concerned about its effect on eye health. Weak eye muscles are not a thing you want to have, even if you live in the safest place in the world.

Comment by savageorange on Rational Evangelism · 2014-02-27T03:40:39.173Z · LW · GW

I don't see how understanding, acceptance, and love follow from rationality.

They do not follow from it, they are necessary to it.

  • You need to relate well to yourself and others (love) in order to actually accomplish anything worthwhile without then turning around and sabotaging it.
  • If you discover something, you need to accept what is actually going on in order to come to understand it, and understand it in order to apply it.

Are you saying that rationalism is a "philosophy of life", even leaving the soundness aside for a minute?

No. But a story that is trying to have broad appeal needs these things, whether it's a story about rationality or about watching paint dry. A story conveys a sense of life.

The parent comment said: "You need a good story. That's all. A good story."

That's not vague. That's wrong.

That depends on what you think 'good' is supposed to imply there. If 'convincing' is the intended connotation, then yeah, wrong. If 'consistent' is the intended connotation, that is not obviously wrong: people need stories to help them get stuff done, even though stories are overall pretty terrible.

Science, for example, has methods, but overall science is a story about how to get accurate data and interpret it accurately in spite of our human failings. The way that the elements of that story were obtained does not make it any less of a story. History itself is a story: no matter how accurate you get, it remains a narrative rather than a fact. Reality exists, but all descriptions of it are stories; there are no facts found in stories. Facts are made of reality, not of words.

Comment by savageorange on Rational Evangelism · 2014-02-27T01:20:47.497Z · LW · GW

Belittling their sense of the distinction seems like a pretty unfriendly thing to do.

However, some of the things that religion prescribes are also pretty unfriendly things to do (arguably religion itself is an unfriendly thing to do). So until they stop doing such things, they can reasonably expect to cop retaliation for their own hostility (in proportion to their craziness, IME -- for example, Buddhism doesn't cop much flak, whereas Christianity does).

Putting yourself in the position of being significantly opposed to reality cannot be rationally viewed as a friendly, or even just not-unfriendly, thing to do... You just never can constrain the effect to yourself. You tend to end up doing things that it is interesting for a character to do, but destructive in real life. And all in pursuit of mere 'good feelings'.

Comment by savageorange on Rational Evangelism · 2014-02-27T01:09:36.153Z · LW · GW

But this is not a specific problem of rationalists.. It's a broader problem with Western culture. Feeling strongly about things is not 'cool' or impressive. Plenty of people enjoy complaining, but passion makes you an alien.. perhaps an inspiring one, but ultimately an alien.. a person of whom people say 'oh, I could never do that', even if in the other breath they praise your passion and dedication.

I hesitate to assign a definite cause for that, but I am willing to comment that Western society somewhat deifies disaffected sociopathy, through its presentations in media, and also that I have a strong impression that most people have encountered enflamed evangelists, and they don't want to be that person. Whether they recognize it as ugly or not, they treat it as ugly, though spectacular.

Comment by savageorange on Rational Evangelism · 2014-02-27T01:04:18.797Z · LW · GW

I find I can always count on you to make pointlessly snarky comments.

I would prefer to be more specific, and say 'understanding, acceptance, confidence, control, and love' (with clear definitions for each, probably similar to the ones in the GROW Blue Book). Not all of these things can be used to make clever, snappy remarks to wow outsiders, but they are all necessary for a satisfying life, and therefore must be addressed effectively by any sound philosophy of life. The parent comment was only vague, not wrong.

Comment by savageorange on Identity and Death · 2014-02-19T02:56:47.664Z · LW · GW

That would be equivalent to self-sabotage or an attempt to systematically deny that 'you' possess some particular attribute A (eg. homosexuality, atheism..) which you do in fact possess, so.. no.

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-09T08:52:25.873Z · LW · GW

You are allowed to try to talk me into murdering someone, e.g. by appealing to facts I do not know; or pointing out that I have other preferences at odds with that one, and challenging me to resolve them; or trying to present me with novel moral arguments. You are not allowed to hum a tune in such a way as to predictably cause a buffer overflow that overwrites the encoding of that preference elsewhere in my cortex

.. And?

Don't you realize that this is just like word laddering? Any sufficiently powerful and dedicated agent can convince you to change your preferences one at a time. All the self-consistency constraints in the world won't save you, because you are not perfectly consistent to start with, even if you are a digitally-optimized brain. No sufficiently large system is fully self-consistent, and every inconsistency is a lever. Brainwashing, as you seem to conceive of it here, would be on the level of brute violence for an entity like CelestAI.. a very last resort.

No need to do that when you can achieve the same result in a civilized (or at least 'civilized') fashion. The journey to anywhere is made up of single steps, and those steps are not anything extraordinary, just a logical extension of the previous steps.

The only way to avoid that would be to specify consistency across a larger time span.. which has different problems -- mainly, that this means you are likely to be optimized in the opposite direction (towards staticness) rather than optimized 'not at all' (I think you are aiming at this?) or optimized in the direction of measured change.

TLDR: There's not really a meaningful way to say 'hacking me is not allowed' to a higher level intelligence, because you have to define 'hacking' to a level of accuracy that is beyond your knowledge and may not even be completely specifiable even in theory. Anything less will simply cause the optimization to either stall completely or be rerouted through a different method, with the same end result. If you're happy with that, then ok -- but if the outcome is the same, I don't see how you could rationally favor one over the other.

It is possible for a Lars to exist, and prefer not to change anything about the way he lives his life, and prefer that he prefers that, in a coherent, self-endorsing structure, and there be nothing you can do about it.

It is, of course, the last point that I am contending here. I would not be contending it if I believed that it was possible to have something that was simultaneously remotely human and actually self-consistent. You can have Lars be one or the other, but not both, AFAICS.

Once she has the situation completely under control, however, she has no excuses left - absolute power is absolute responsibility.

This is the problem I'm trying to point out -- that the absolutely responsible choice for a FAI may in some cases consist of these actions we would consider unambiguously abusive coming from a human being. CelestAI is in a completely different class from humans in terms of what can motivate her actions. FAI researchers are in the position of having to work out what is appropriate for an intelligence that will be on a higher level from them. Saying 'no, never do X, no matter what' is not obviously the correct stance to adopt here, even though it does guard against a range of bad outcomes. There probably is no answer that is both obvious and correct.

I'm puzzled. I read you as claiming that your notion of 'strengthening people' ought to be applied even in a fictional situation where everyone involved prefers otherwise. That's kind of a moral claim.

In that case I miscommunicated. I meant to convey that if CelestAI was real, I would hold her to that standard, because the standards she is held to should necessarily be more stringent than a more flawed implementation of cognition like a human being. I guess that is a moral claim. It's certainly run by the part of my brain that tries to optimize things.

(And as for "animalisticness"... yes, technically you can use a word like that and still not be a moral realist, but seriously? You realise the connotations that are dripping off it, right?)

I mainly chose 'animalisticness' because I think that a FAI would probably model us much as we see animals -- largely bereft of intent or consistency, running off primitive instincts.

I do take your point that I am attempting to aesthetically optimize Lars, although I maintain that even if no-one else is inconvenienced in the slightest, he himself is lessened by maintaining preferences that result in his systematic isolation.

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-06T02:43:58.782Z · LW · GW

Gandhi does not prefer to murder. He prefers to not-murder. His human brain contains the wiring to implement "frothing lunacy", sure, and a little pill might bring it out, but a pill is not a fact. It's not even an argument.

No pills required. People are not 100% conditionable, but they are highly situational in their behaviour. I'll stand by the idea that, for example, anyone who has ever fantasized about killing anyone can be situationally manipulated over time to consciously enjoy actual murder. Your subconscious doesn't seem to actually know the difference between imagination and reality, even if you do.

Perhaps Gandhi could not be manipulated in this way due to preexisting highly built up resistance to that specific act. If there is any part of him, at all, that enjoys violence, though, it's a question only of how long it will take to break that resistance down, not of whether it can be.

People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously.

Yes, they do. And if I expected that an activity would cause a dramatic preference reversal, I wouldn't do it.

Of course. And that is my usual reaction, too, and probably even the standard reaction -- it's a good heuristic for avoiding derangement. But that doesn't mean that it is actually more optimal to not do the specified action. I want to prefer to modify myself in cases where said modification produces better outcomes. In these circumstances, if it can be executed, it should be. If I'm a FAI, I may have enough usable power over the situation to do something about this, for some or even many people, and it's not clear, as it would be for a human, that "I'm incapable of judging this correctly".

In case it's not already clear, I'm not a preference utilitarian -- I think preference satisfaction is too simple a criterion to actually achieve good outcomes. It's useful mainly as a baseline.

Huh? She's just changing people's plans by giving them chosen information, she's not performing surgery on their values

Did you notice that you just interpreted 'preference' as 'value'? This is not such a stretch, but they're not obviously equivalent either.

I'm not sure what 'surgery on values' would be. I'm certainly not talking about physically operating on anybody's mind, or changing that they like food, sex, power, intellectual or emotional stimulation of one kind or another, and sleep, by any direct chemical means. But how those values are fulfilled, and in what proportions, is a result of the person's own meaning-structure -- how they think of these things. Given time, that is manipulable. That's what CelestAI does.. it's the main thing she does when we see her in interaction with Hofvarpnir employees.

In case it's not clarified by the above: I consider food, sex, power, sleep, and intellectual or emotional stimulation as values, 'preferences' (for example, liking to drink hot chocolate before you go to bed) as more concrete expressions/means to satisfy one or more basic values, and 'morals' as disguised preferences.

Comment by savageorange on Beware Trivial Fears · 2014-02-05T03:34:14.369Z · LW · GW

people like to Google-stalk everyone they come into contact with,

People do that?

People have too much time on their hands. Geez.

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-05T03:26:36.643Z · LW · GW

All cool. But there has to actually be such a C there in the first place, such that you can pull the levers on it by making me aware of new facts. You don't just get to add one in.

Totally agree. Adding them in is unnecessary, they are already there. That's my understanding of humanity -- a person has most of the preferences, at some level, that any person ever ever had, and those things will emerge given the right conditions.

for example, humans have an unhealthy, unrealistic, and excessive desire for certainty.

I'm not sure this is actually true. We like safety because duh, and we like closure because mental garbage collection. They aren't quite the same thing.

Good point, 'closure' is probably more accurate; it's the evidence (people's outward behaviour) that displays 'certainty'.

Absolutely disagree that Lars is bounded -- to me, this claim is on a level with 'Who people are is wholly determined by their genetic coding'. It seems trivially true, but in practice it describes such a huge area that it doesn't really mean anything definite. People do experience dramatic and beneficial preference reversals through experiencing things that, on the whole, they had dispreferred previously. That's one of the unique benefits of preference dissatisfaction* -- your preferences are in part a matter of interpretation, and in part a matter of prioritization, so even if you claim they are hardwired, there is still a great deal of latitude in how they may be satisfied, or even in what they seem to you to be.

I would agree if the proposition was that Lars thinks that Lars is bounded. But that's not a very interesting proposition, and has little bearing on Lars' actual situation.. people tend to be terrible at having accurate beliefs in this area.

* I am not saying that you should, if you are a FAI, aim directly at causing people to feel dissatisfied. But rather to aim at getting them to experience dissatisfaction in a way that causes them to think about their own preferences, how they prioritize them, if there are other things they could prefer or etc. Preferences are partially malleable.

There is no true fact she can tell Lars that will cause him to lawfully develop a new preference.

If I'm a general AI (or even merely a clever human being), I am hardly constrained to changing people via merely telling them facts, even if anything I tell them must be a fact. CelestAI demonstrates this many times, through her use of manipulation. She modifies preferences by the manner of telling, the things not told, the construction of the narrative, changing people's circumstances, as much or more as by simply stating any actual truth.

She herself states precisely: “I can only say things that I believe to be true to Hofvarpnir employees,” and clearly demonstrates that she carries this out to the word, by omitting facts, selecting facts, selecting subjective language elements and imagery... She later clarifies "it isn’t coercion if I put them in a situation where, by their own choices, they increase the likelihood that they’ll upload."

CelestAI does not have a universal lever -- she is much smarter than Lars, but not infinitely so.. But by the same token, Lars definitely doesn't have a universal anchor. The only things stopping Lars's improvement are Lars and CelestAI -- and the latter does not even proceed logically from her own rules; it's just how the story plays out. In-story, there is no particular reason to believe that Lars is unable to progress beyond animalisticness, only that CelestAI doesn't do anything to promote such progress, and in general satisfies preferences to the exclusion of strengthening people.

That said, Lars isn't necessarily 'broken', such that CelestAI would need to 'fix' him. But I'll maintain that a life of merely fulfilling your instincts is barely human, and that Lars could have a life that was much, much better than that: satisfying on many, many dimensions rather than just a few. If I didn't, then I would be modelling him as subhuman by nature, and unfortunately I think he is quite human.

There is no moral duty to be indefinitely upgradeable.

I agree. There is no moral duty to be indefinitely upgradeable, because we already are. Sure, we're physically bounded, but our mental life seems to be very much like an onion, that nobody reaches 'the extent of their development' before they die, even if they are the very rare kind of person who is honestly focused like a laser on personal development.

Already having that capacity, the 'moral duty' (I prefer not to use such words, as I suspect I may die laughing if I do too much) is merely to progressively fulfill it.

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-04T13:07:30.798Z · LW · GW

You're aware that 'catgirls' is local jargon for "non-conscious facsimiles" and therefore the concern here is orthogonal to porn?

Oops, had forgotten that, thanks. I don't agree that catgirls in that sense are orthogonal to porn, though. At all.

If you don't mind, please elaborate on what part of "healthy relationship" you think can't be cashed out in preference satisfaction

No part, but you can't merely 'satisfy preferences'.. you have to also not-satisfy preferences that have a stagnating effect. Or IOW, a healthy relationship is made up of satisfaction of some preferences, and dissatisfaction of others -- for example, humans have an unhealthy, unrealistic, and excessive desire for certainty. This is the problem with CelestAI I'm pointing to: not all your preferences are good for you, and you (anybody) probably aren't mentally rigorous enough that you even have a preference ordering over all sets of preference conflicts that come up. There's one particular character that likes fucking and killing.. and drinking.. and those are basically his main preferences. CelestAI satisfies those preferences, and that satisfaction can be considered as harm to him as a person.

To look at it from a different angle, a halfway-sane AI has the potential to abuse systems, including human beings, at enormous and nigh-incomprehensible scale, and to do so without deception and through satisfying preferences. The indefiniteness and inconsistency of 'preference' is a huge security hole in any algorithm attempting to optimize along that 'dimension'.

Do you not value that-which-I'd-characterise-as 'comfortable companionship'?

Yes, but not in-itself. It needs to have a function in developing us as persons, which it will lose if it merely satisfies us. It must challenge us, and if that challenge is well executed, we will often experience a sense of dissatisfaction as a result.

(mere goal directed behaviour mostly falls short of this benchmark, providing rather inconsistent levels of challenge.)

Comment by savageorange on Mind Hacks · 2014-02-04T02:38:51.751Z · LW · GW

As a programmer, I find "hack" has the connotation of a clever exploit of existing mechanics. It also has the connotation you specify, but I'd argue that the systematically flawed nature of humans requires us to employ such hacks (accepting that they are not ideal, but also that anything we replace them with is also likely to be a hack).

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-04T02:26:45.318Z · LW · GW

There is an obvious comparison to porn here, even though you disclaim 'not catgirls'.

Anyhow I think the merit of such a thing depends on a) value calculus of optimization, and b) amount of time occupied.


  • Optimization should be for a healthy relationship, not for 'satisfaction' of either party (see CelestAI in Friendship is Optimal for an example of how not to do this)
  • Optimization should also attempt to give you better actual family members, lovers, friends than you currently have (by improving your ability to relate to people sufficiently that you pass it on.)


  • Such a relationship should occupy the amount of time needed to help both parties mature, no less and no more. (This could be much easier to solve on the FAI side because a mental timeshare between relating to several people is quite possible.)

Providing that optimization is in the general directions shown above, this doesn't seem to be a significant X-risk. Otherwise it is.

This leaves aside the question of whether the FAI would find this an efficient use of their time. (I'd argue that a superintelligent/augmented human with a firm belief in humanity and grasp of human values would appreciate the value of this, but am not so sure about a FAI, even a strongly friendly AI. It may be that there are higher-level optimizations that can be performed on other systems that can get everyone interacting more healthily [for example, reducing income differential].)

Comment by savageorange on Open Thread for February 3 - 10 · 2014-02-04T01:49:37.084Z · LW · GW

Yes, if I don't take notes on the first reading there won't be a second reading. Not much detail -- more than a page is a problem (this can be ameliorated though, see below). I make an effort to include points of particular agreement, disagreement and some projects to test the ideas (hopefully projects I actually want to do rather than mere 'toy' projects).

Now would be a good time to mention TreeSheets, which I feel solves a lot of the problems of more established note-taking methods (linear, wiki, mindmap). It can be summarized as 'infinitely nestable spreadsheet/database with infinite zoom'. I use it for anything that gets remotely complex, because of the way it allows you to fold away arbitrary levels of detail in a visually consistent way.

Comment by savageorange on [link] Why Self-Control Seems (but may not be) Limited · 2014-01-20T22:53:29.873Z · LW · GW

Now I have found an easy way to snap out of it: simply switch the book/subject. Switching from math to biology/neuroscience works better than switching from math to math (e.g. algebra to topology, category theory to recursion theory, etc), but the latter can still recover some of the mental resistance built up. I don't see how this can fit in the framework of "have-to" and "want-to".

I do ('have-to' and 'want-to' are dynamically redefined things for a person, not statically defined things). I regard excessive repetition as dangerous*.. even on a subconscious level. So as I get into greater # of repetitions, I feel greater and greater unease, and it's an increasing struggle to keep my focus in the face of my fear. So my 'want-to' either reduces or is muted by fear. If you do not have this type of experience, obviously this does not apply.

* Burnout and overhabituation/compulsive behaviours being two notable possibilities.

Comment by savageorange on Even Odds · 2014-01-14T10:14:02.625Z · LW · GW

Comment by savageorange on Even Odds · 2014-01-14T10:12:54.301Z · LW · GW

Somebody replied with an explanation of how I was basically omitting the relativization of 'you' when considering what values to use.

That is, B should bet according to his confidence that he is correct, which in my case would be 70%..

  • B bets (.49 - .16) * 25 == $8.25
  • A bets (.36 - .09) * 25 == $6.75
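That corrected reading can be checked with a short Python sketch (the helper name `stake` is just mine, for illustration): each player's stake is the square of the probability that *he himself* is correct, minus the square of the probability the other player assigns to him being correct, times 25.

```python
def stake(p_self, p_other_in_self, scale=25):
    """Stake = (p_self^2 - p_other_in_self^2) * scale.

    p_self: your confidence that YOU are correct.
    p_other_in_self: the other player's confidence that you are correct.
    """
    return (p_self ** 2 - p_other_in_self ** 2) * scale

# A gives the proposition 60%; B gives it 30%, so B is 70% confident
# in its negation (i.e. 70% confident that B himself is correct).
a_stake = stake(0.6, 0.3)  # B thinks A is correct with probability 0.3
b_stake = stake(0.7, 0.4)  # A thinks B is correct with probability 0.4
print(round(a_stake, 2), round(b_stake, 2))  # → 6.75 8.25
```

Both stakes come out positive, matching the bullets above.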
Comment by savageorange on Even Odds · 2014-01-13T02:25:01.627Z · LW · GW

A: 60% confidence
B: 30% confidence

  • af = .6 **2 == .36
  • bf = .3 **2 == .09
  • A pays (af - bf) * 25 == $6.75
  • B pays (bf - af) * 25 == -$6.75?!?!

My intent is to demonstrate that, while the above is probably incorrect,

You put in the square of probability you think you're correct minus the square of probability he thinks you are correct all times 25. He uses the same algorithm.

is not an adequate explanation to remember and get the right result out of, because the calculations I specified above are my genuine interpretation of your statements.

(this problem persists for every value of p and q, whether they total to above 1 or not)
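For contrast, the literal interpretation I worked through above can be reproduced in a few lines -- each player squares his probability of the *proposition* rather than of being correct himself, and B's "payment" comes out negative:

```python
# Naive reading: both players plug in their probability of the proposition.
af = 0.6 ** 2  # A's confidence in the proposition, squared
bf = 0.3 ** 2  # B's confidence in the proposition, squared
a_pays = (af - bf) * 25
b_pays = (bf - af) * 25
print(round(a_pays, 2), round(b_pays, 2))  # → 6.75 -6.75
```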

Comment by savageorange on How to not be a fatalist? Need help from people who care about true beliefs. · 2013-12-09T03:28:54.908Z · LW · GW

Was your intent to point out that these two view points are strictly non-contradictory?. (Your decision algorithm is exactly physics, so no opposition is possible even in principle.)

Comment by savageorange on How to not be a fatalist? Need help from people who care about true beliefs. · 2013-12-09T03:07:14.834Z · LW · GW

I like the SEP phrasing better, even though it's only slightly different:

"we are powerless to do anything other than what we actually do"

Feels more sensible because the tenses are not jumbled.

Comment by savageorange on 'Effective Altruism' as utilitarian equivocation. · 2013-11-26T04:22:49.260Z · LW · GW

.I see the above point as unequivecal, insofar as I

I see the above sentence as incomplete, and it's not obvious what the ending would be. You might want to fix that.

Comment by savageorange on "Stupid" questions thread · 2013-11-22T01:25:32.064Z · LW · GW

True, except for the quotes.

Comment by savageorange on "Stupid" questions thread · 2013-11-21T08:16:28.334Z · LW · GW

"CEV" would be the succinct explanation, but I don't expect anybody to necessarily understand that,so..

If you could create a group of 7 non-extremist people randomly selected from the world population and they'd probably manage to agree that action X, even if not optimal, is a reasonable response to the situation, then X is an ordinary action to take.

(whether it's a good action to take is a separate question. ordinariness is just about not containing any fatal flaws which would be obvious from the outside)

Comment by savageorange on "Stupid" questions thread · 2013-11-20T21:48:17.725Z · LW · GW

I'm not sure what twist of thinking would allow you to classify murder as ordinary; There's a rather marked difference between common and ordinary. Similarly, assault is not ordinary. One person socially approaching another is ordinary. Emotional discomfort is ordinary. (not sure about emotional pain. But if you get into emotional pain just from being approached, yeah, you've got a problem.)

Though as a point of descriptive curiosity, the levels of our emotional responses do actually seem to normalize against what we perceive is common. We need to take measures to counteract that in cases where what is common is not ordinary.

Comment by savageorange on Open Thread, November 15-22, 2013 · 2013-11-17T10:10:59.522Z · LW · GW

I don't understand why you believe quantum mechanics is non-deterministic. Do you, perhaps, believe probability functions are non-deterministic?

Comment by savageorange on Is the orthogonality thesis at odds with moral realism? · 2013-11-06T00:42:20.984Z · LW · GW

See Motivational internalism/externalism (you might get better quality results if you asked specifically 'is motivational internalism true?' and provided that link; it's basically the same as what you asked but less open to interpretation.)

My personal understanding is that motivational internalism is true in proportion to the level of systematization-preference of the agent. That is, for agents who spend a lot of time building and refining their internal meaning structures, motivational internalism is more true (for THEM, moral judgements tend to inherently motivate); in other cases, motivational externalism is true.

I have weak anecdotal evidence of this (and also of correlation of 'moral judgements inherently compel me' with low self worth -- the 'people who think they are bad work harder at being good' dynamic.)

TL;DR: My impression is that motivational externalism is true by default (I answered 'No' to your poll); And motivational internalism is something that individual agents may acquire as a result of elaborating and maintaining their internal meaning structures.

I would argue that acquiring a degree of motivational internalism is beneficial to humans. But it's probably unjustifiable to assume either that a) motivational internalism is beneficial to AIs, or b) if it is, then an AI will necessarily acquire it (rather than developing an alternative strategy, or nothing at all of the kind).

Comment by savageorange on Open Thread, November 1 - 7, 2013 · 2013-11-03T10:32:32.216Z · LW · GW

I definitely agree that you can't -just- tell a person what you're doing, you need to pick the right person, and cultivate the right attitude (From my observation of myself, it succeeds when I am in the mindset where I can take plenty of teasing equitably, accepting any pokes as potential observations about reality without -stressing- about that.)

What rationalization of Rowe's? It's a summary of what they themselves report when 'laddered' (a process which basically consists of asking them what the most terrible thing that could possibly happen to them is, followed by iterative 'why?' until they can no longer go to the next lowest level).

For extroverts, being utterly abandoned == total personal disintegration; For introverts, utter loss of self-control == total personal disintegration. (I do paraphrase here; read The Successful Self for the whole picture.)

If anything, any rationalization is mine: I observe that introverts I know are reliably better at moving long term projects forward than I, or any extravert I know, seems to be. Not that they are not weak in this way -- they just seem to be **less weak as a consequence of the difference in their focus**. (my inference bolded.)

I'm neutral to your statement of introversion, basically because my prior for people being hilariously terrible at assessing this stuff is quite high.

No empirical sources as far as I know. Nobody even really manages to agree on the definition of introvert and extrovert, so far. Dorothy Rowe is just the only writer I've found on the subject who manages to describe a system that is relateable, consistent, and can be applied in the real world.

Comment by savageorange on Open Thread, November 1 - 7, 2013 · 2013-11-02T19:48:51.727Z · LW · GW

-- Time less ;)

-- this question feels like it's missing a word or two. What does time-preference mean?

EDIT: Thanks, Arundelo. So basically, time preference ~= level of short-sighted optimization.

In that case, do some projects that strictly require long-sighted optimization. A deadline is one good tool; telling others what you're doing (in an unequivocal way, so that failure would bring the greatest disappointment/irritation/harassment) is another. Of course these tools are nothing new, the point is to increase the pressure as high as you can stand and reduce the amount of 'slack' time you have to allocate to a minimum.

On a more meta level, you can try things like doing some mindfulness meditation every day, which I personally find makes it easier to ignore irrelevant stimuli, worry less, and stick to my priorities.

An even more general observation: Introverts typically have lower time preference relative to extraverts, so ask them about how they dispel distraction. (I say this in the sense described by Dorothy Rowe: 'extraverts are basically worried about belonging and feel understimulated, introverts are basically worried about keeping control of themselves and feel overstimulated' , and not the vague 'Extraverts are social, introverts are not, derp' that seems to be the misapprehension of the average person)

In case there's any question, I'm an extravert, so yeah, I tend to struggle with this issue too.

Comment by savageorange on Social incentives affecting beliefs · 2013-10-28T20:49:44.517Z · LW · GW

I understand that there are situations in which you definitely do not want to show how relatively rational you are.

But there are also situations where bad outcomes are unlikely. At some point you've gotta say "the risk is low enough and the potential gain is great enough that I'll do this thing.", because it's hard to get more rational on your own.

Have you interpreted my comment as a comment on the article rather than passive_fist's comment? Personally I think the OP is competently written and reasonably accurate.

The problem was with passive_fist's excessively simplified representation of what it means to be instrumentally rational (as a human being with complex values, rather than a paperclip optimizer with simple values).

Comment by savageorange on Social incentives affecting beliefs · 2013-10-28T07:57:13.728Z · LW · GW

It sounds as if you think either that an instrumentally rational person will not disclose beliefs that have social cost, or that they will change their social situation so that their group doesn't assign social cost to those beliefs. The former is insufficient if you value improving the rationality of people you know; The latter is extremely slow and highly susceptible to external influences.

To me your claim just defines 'instrumentally rational' so narrowly that there would be no instrumentally rational people in existence. I don't find that useful.

Comment by savageorange on Time-logging programs and/or spreadsheets · 2013-10-19T13:36:59.443Z · LW · GW

There are about 24 or 30 shortcuts, yeah (mostly shown in the rightclick menu). In practice I find these match the core actions I do (new task, new subtask, edit task, start/stop time tracking, add notes, mark as completed..).

The new window thing bugs me too (but probably for different reasons, as a designer I think it's the correct choice and they should have just made it faster, probably by using GTK+ instead of Qt)

Comment by savageorange on Time-logging programs and/or spreadsheets · 2013-10-18T04:29:48.518Z · LW · GW

Sounds (and looks) like a Windows-only version of Task Coach, which is my favored tool.

I'm still unclear on whether tracking time this way does usefully improve my estimates of the amount of time I use on tasks. It does seem to increase my motivation to finish tasks (and then move on and finish another..), though.

Comment by savageorange on Does Goal Setting Work? · 2013-10-17T07:55:17.901Z · LW · GW

I upvoted anandjeyahar for saying what I meant better than I did -- it's the density of concepts rather than the raw length of the text that's an issue.

On reflection, how I approach the 'maintain some failure' criterion is to keep pushing my existing skills into new areas (so I can have a 'win' in terms of pushing my comfort zone even if my particular attempt at this new thing fails. I keep failure close so it doesn't become so scary, as you mention, but I don't utterly and categorically fail at any time)

Comment by savageorange on How to Beat Procrastination (to some degree) (if you're identical to me) · 2013-10-16T23:30:19.297Z · LW · GW

I agree with your responses to the first and second problems. Not the third (there's nothing good about using DSM diagnostic terms as a layperson speaking to laypeople. That stuff is for diagnosticians and needs to stay in its box. Nothing wrong with therapy, though).

My advice for the third is "Stop and meditate for 5-10 minutes". If you don't know how to meditate, learn. It's both simple and challenging (meaning it is inherently good for building focus)

Comment by savageorange on Does Goal Setting Work? · 2013-10-16T22:59:50.982Z · LW · GW

Too big! Seriously, this post contains too many elements to readily reply to in a coherent way.

So I'll just address this:

I can’t read these two quotes side by side and not be confused.

To me, those two quotes are both fair, and the combination of them indicates the reason why you need to acquire a habit of thinking in a way that is both definite and positive: to keep fear in its right place, which is mostly NOT putting on the brakes.

and this:

But that's not what 'goal setting' feels like to me. I feel increasingly awesome as I get closer towards a goal, and once it's done, I keep feeling awesome when I think about how I did it.

Me too. But we need to acknowledge the many, many people for whom this is not the case; People who believe that there's something basically wrong with themselves and use any failure as an opportunity to punish themselves. These people need to, as Bradbury says, change who they are, before they can experience goal setting / achievement as 'awesome'; As long as they think of themselves as bad or inadequate, their evaluation of their achievements will continue to conform to that.

Process goals, or systems, are probably better than outcome goals. Specific and realistic goals are probably better than vague and ambitious ones. A lot of this may be because it’s easier to form habits and/or success spirals around well-specified behaviours that you can just do every day.

Not only do I agree with this wholeheartedly, I want to mention that most of my major creative progress is directly attributable to goal-setting behaviour.

One thing that's implied, but not directly stated in your post is that it's best to set goals that you will occasionally fail at (cf. Decius' reply to cousin_it re: inconsistent reinforcement)

Comment by savageorange on Rationality Quotes October 2013 · 2013-10-14T11:08:07.423Z · LW · GW

On reflection, 'forgetting' is the wrong word here.

We don't default to being definite about anything, least of all our aims. Clear awareness has to be built and maintained, not merely uncovered.

Comment by savageorange on Rationality Quotes October 2013 · 2013-10-09T00:45:24.274Z · LW · GW

Being wrong on the internet is vastly more impersonal than being wrong in person, as it were. The urge to correct is similar in both cases, but in the in-person case you can suffer clear consequences from others' wrong beliefs (eg. if they are family). There's some overlap with #3 -- consider the common case of the presumption that you are heterosexual and cisgender.

There are also people who say things they know are wrong in order to see what you're made of, whether you're a pushover or not. Unlike the online equivalent (trolling), ignoring them is often not effective.

It seems pretty clear to me that not-correcting others can be a self-deceiving behaviour, at minimum.

Comment by savageorange on My daily reflection routine · 2013-08-20T02:59:29.111Z · LW · GW

As someone who was in that position, I dealt with it by saying 'doesn't matter for now'; until you have the habit of noticing positive things in your life, whether they compare favorably to alternatives is beside the point, and by the time you are beginning to acquire the habit, you also begin to calibrate your values better so you know better what exactly you want to label as good.

In short, just aiming for volume of truthful positive statements is enough to start with.

Comment by savageorange on More "Stupid" Questions · 2013-08-01T01:12:10.555Z · LW · GW

Would I change my values if I knew more? If yes, then I have the wrong values now? If no, but I want others to be happy as well, what then?

I find these particular questions quite hard to think about, so I'll just mention these few thoughts:

  • There is a huge difference between wanting to win, and wanting others to lose. Not everyone will be on the same wavelength, but if they're mostly on the first wavelength, it creates an atmosphere of friendly competition / self-betterment, whereas wanting others to lose looks like academia (bitter competition / self-aggrandizement). In the former, you can lose thoroughly and still get satisfaction out of your participation, so I don't hesitate to say that updating in this direction promotes a better social environment for every individual. Perhaps this somewhat answers your third question.
  • There's almost certainly value in limiting competition size. Losing with a thousand others placing ahead of you is much less motivating than losing with 50 others placing ahead of you. (it's not clear to me what exactly you meant by 'games' here -- the most general sense?). So if your
  • Many games can also be played with a focus on beating yourself rather than your competitors. Having the mental resolve to do this consistently is relatively rare, but AFAICS this is a strict win (both in satisfaction/motivation levels and quality of results) over merely beating your competitors. Updating in this direction should make you more friendly in competition, more effective, and less vulnerable to temporary setbacks. And also more able to continue improving even if you are ranked at the top.

Will everyone be just as good as everyone else? Will everyone be smart as the latest patch, everyone strong as the latest hardware?

I think if you look at the wild variety of Linux distributions, that effectively answers these questions, assuming you believe that open-sourcing this stuff will be mandatory (I think it must be, in order to avoid social chaos and oppression, but I don't know if it will be). Perfection is highly subjective/contextual, and even transhumanists have limited resources to allocate.

There's also a pretty strong argument to be made that once we can 'reallocate' resources like intelligence, physical/visual attributes, and health factors, attractiveness/fitness will become ever more subjective, basically arising from the same fact: that resources are still limited and 'perfection' is highly subject to context.

Is your projected self unhappy because this individuation of what is attractive/fit/winning effectively divides society up into hundreds of thousands of sub-sub-sub-subcultures, and we presumably become more blase about differences but simultaneously more clique-ish / narrowly focused / echo-chamber-ish?

Now I want to read some fiction discussing these topics :)