Comments

Comment by wanderingsoul on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-09-18T03:55:47.977Z · LW · GW

Ah, that clears things up a bit. I think I just didn't notice when N' switched from representing an exploitative agent to an exploitable one. Either that, or I have a different association for "exploitative agent" than what EY intended (namely, one which attempts to exploit).

Comment by wanderingsoul on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-09-17T23:57:28.260Z · LW · GW

I'm not getting what you're going for here. If these agents actually change their definition of fairness based on other agents' definitions, then they are trivially exploitable. Are there two separate behaviors here: you want unexploitability in a single encounter, but you still want these agents to be able to adapt their definition of "fairness" based on the population as a whole?

Comment by wanderingsoul on Cooperating with agents with different ideas of fairness, while resisting exploitation · 2013-09-17T05:39:05.507Z · LW · GW

I tried to generalize Eliezer's outcomes to functions, and realized that if both agents are unexploitable, the optimal functions to pick lead precisely to Stuart's solution. Stuart's solution allows agents to arbitrarily penalize each other, though, which is why I like extending Eliezer's concept better. Details below. P.S. I tried to post this in a comment above, but in editing it I appear to have somehow made it invisible, at least to me. Sorry for the repost if you can indeed see all the comments I've made.


It seems the logical extension of your finitely many step-downs in "fairness" would be to define a function f(your_utility) which returns the greatest utility you will accept the other agent receiving when you receive that utility. The domain of this function should run from wherever your magical fairness point is down to the Nash equilibrium. As long as it is monotonically increasing, that should ensure unexploitability for the same reasons your finite version does. The offer both agents should make is at the greatest intersection point of these functions, with one of them inverted to put them on the same axes. (This intersection is guaranteed to exist in the only interesting case, where the agents do not accept each other's magical fairness points as fair enough.)
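
For concreteness, here is a minimal numerical sketch of finding that greatest intersection. The concession curves, the Nash point at (1, 1), and the fairness points are all made-up illustrations of mine, not anything from EY's or Stuart's posts:

```python
def greatest_intersection(f_a, f_b, lo, hi, steps=100_000):
    """Scan u_a over [lo, hi] and return the greatest point where A's
    curve u_b = f_a(u_a) meets B's curve inverted onto the same axes,
    i.e. where g(u_a) = f_b(f_a(u_a)) - u_a crosses zero."""
    best = None
    prev_g = f_b(f_a(lo)) - lo
    for i in range(1, steps + 1):
        u_a = lo + (hi - lo) * i / steps
        g = f_b(f_a(u_a)) - u_a
        if g == 0 or (g < 0) != (prev_g < 0):  # sign change: a crossing
            best = (u_a, f_a(u_a))
        prev_g = g
    return best

# Hypothetical setup: Nash equilibrium at (1, 1); A thinks (6, 4) is
# fair, B thinks (4, 6) is. Each f maps "utility I receive" to the
# greatest utility I will accept the other agent receiving; both rise
# monotonically from the Nash point up to the owner's fairness point.
f_a = lambda u_a: 1 + 3 * ((u_a - 1) / 5) ** 0.5
f_b = lambda u_b: 1 + 3 * ((u_b - 1) / 5) ** 0.5

print(greatest_intersection(f_a, f_b, 1.0, 6.0))  # roughly (2.8, 2.8)
```

With these symmetric made-up curves each agent ends up with about 2.8; changing a curve's shape moves the crossing, which is where the skew incentive below comes from.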

Curiously, if both agents use this strategy, then both seem to be incentivized to give their functions as much "skew" (as EY defined it in clarification 2) as possible: both functions are monotonically increasing, so decreasing your opponent's share can only decrease your own. Asymptotically, choosing these functions optimally, both agents end up getting what the other agent thinks is fair, minus a vanishingly small factor!

Let me know if my reasoning above is transparent. If not, I can clarify, but I'll avoid expending the extra effort of revising further if what I already have is clear enough. Also, simple confirmation that I didn't make a silly logical mistake or post something already well known in the community is always appreciated.

Comment by wanderingsoul on Torture vs Dust Specks Yet Again · 2013-08-21T04:23:05.592Z · LW · GW

I agree with you a lot, but would still like to raise a counterpoint, to illustrate the problem with mathematical calculations involving truly big numbers. What would you regard as the probability that some contortion of this universe's laws allows for literally infinite computation? I don't give it a particularly high probability at all, but I couldn't in any honesty assign it one anywhere near 1/3^^^3. The naive expected number of minds FAI affects doesn't even converge in that case, which at least for me is a little problematic.
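
Spelling out the divergence (my own gloss, with made-up symbols: ε is whatever probability you assign to infinite computation, and N_i the number of minds in outcome i):

$$
\mathbb{E}[N] \;=\; \sum_i p_i N_i \;\ge\; \varepsilon \cdot N_\infty \;\longrightarrow\; \infty \quad \text{as } N_\infty \to \infty,
$$

so any nonzero ε leaves the naive expectation without a finite value at all.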

Comment by wanderingsoul on Meetup : (Improv?) games meetup · 2013-07-28T03:42:13.910Z · LW · GW

Try to put the meeting location in the title, just to save people not involved a click and to better draw in people who actually are in the area.

Comment by wanderingsoul on How to Write Deep Characters · 2013-06-17T04:33:02.948Z · LW · GW

Please taboo "good". Especially when talking about stories, "good" has more than one meaning, and I think that's part of your disagreement.

Comment by wanderingsoul on Optimizing for attractiveness · 2013-05-31T15:13:45.266Z · LW · GW

A couple of others have mentioned warnings about doing something only to become attractive (e.g. you will tire of it or become resentful). Something like general fitness, with multiple benefits, likely isn't a problem, but there's also an alternate perspective that has worked really well for me. Instead of optimizing for attractiveness, consider optimizing for awesomeness. Being awesome will tend to make people attracted to you, but it has the added bonus of improving your self-confidence (which again increases attractiveness) and life satisfaction.

As far as how to do this goes, I wouldn't mind tips myself, but the general gist of what I do is to keep that drive to be more awesome at the back of my mind when making decisions (in LW parlance, adopt awesomeness as an instrumental value). Anyone else have ideas?

Comment by wanderingsoul on [LINK] The Unbelievers: Lawrence Krauss and Richard Dawkins Team Up Against Religion · 2013-05-01T09:47:46.429Z · LW · GW

Well then, LW will be just fine; after all, we fit quite snugly into that category.

Comment by wanderingsoul on [SEQ RERUN] Belief in Self-Deception · 2013-03-16T19:46:18.993Z · LW · GW

Moderately on topic:

I'll occasionally take "drugs" like Airborne to boost my immune system if I feel myself coming down with something. I know full well that they have little to no medicinal effect, but I also know the placebo effect is real and well documented. In the end, I take them because I expect them to trigger a placebo effect where I feel better, and I expect that to work because the placebo effect is real. This feels silly.

I wonder whether it is possible to replace the physical action of taking a pill with an entirely mental event and get this same effect. I also wonder if this is just called optimism. Lastly, I wonder whether I truly believe that "drugs" like Airborne are able to help me, or just believe I believe it, and I am unsure what impact that has on my expectations, given the placebo effect.

Comment by wanderingsoul on Exponent of Desire · 2013-02-27T06:05:57.785Z · LW · GW

I don't really care much about it

My friends do though, so I often wish I cared more

I'm unsure whether I want to be moved by that consideration though

I really wish I had stronger opinions about things like that

But I don't really know how much good that wish is doing me

At least I give self reflection a shot though, people always say it has good effects

Though I'm unsure whether I should believe the hype

I dislike always being uncertain

Though I admit that dislike has both unpleasant and motivating aspects

And I love just what this drive to dispel uncertainty can do

...

Bonus points to whoever manages to make one recurse on itself and actually get the infinite series

Comment by wanderingsoul on The value of Now. · 2013-02-01T11:16:12.723Z · LW · GW

It does draw attention to the fact that we're often bad at deciding which entities to award ethical weight to. It's not necessarily the clearest post to do so, and it's missing an authorial opinion, but I wouldn't be shocked if the LW community could have an interesting discussion resulting from this post.

Comment by wanderingsoul on [LINK] NYT Article about Existential Risk from AI · 2013-01-28T10:54:32.253Z · LW · GW

It seems we have a new avatar for Clippy: the automated IKEA furniture factory.

Comment by wanderingsoul on Credence calibration game FAQ · 2012-11-27T10:06:47.841Z · LW · GW

Nice game; good to see someone making it easy to just practice being well calibrated.

My calibration started off wonky (e.g. I was wrong each of the first six times I claimed 70% certainty) but quickly improved. Unfortunately, it improved suspiciously well: I suspect I was assigning probabilities with my primary goal being not to score points, but to make the bar graph displayed every 5 or 10 questions even out. It's a well designed game, but at least for me the score wasn't the main motivator, which is a problem because the score is the quantity that increases by being helpfully well calibrated. Anyone else have a similar experience?
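
For what it's worth, here's why the score should be the quantity that rewards calibration, under a proper scoring rule. I don't know what rule the game actually uses; the logarithmic rule below is just a standard example, and the numbers are illustrative:

```python
import math

def log_score(p, correct):
    """Logarithmic scoring rule: log2(p) if the answer you backed is
    right, log2(1 - p) if it is wrong."""
    return math.log2(p) if correct else math.log2(1 - p)

def expected_score(reported_p, true_credence):
    """Expected score from reporting reported_p when your actual
    credence in your answer is true_credence."""
    return (true_credence * log_score(reported_p, True)
            + (1 - true_credence) * log_score(reported_p, False))

# With true credence 0.7, honest reporting beats over- and under-claiming:
for p in (0.5, 0.7, 0.9):
    print(p, round(expected_score(p, 0.7), 3))
# 0.5 -1.0 / 0.7 -0.881 / 0.9 -1.103 -> reporting 0.7 maximizes the score
```

Under a rule like this, playing for points and playing to be well calibrated are the same strategy, so the bar graph shouldn't need separate attention.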

Comment by wanderingsoul on 2012 Less Wrong Census/Survey · 2012-11-07T00:58:02.996Z · LW · GW

Took the survey, plus the IQ test out of curiosity; I'd never had my IQ tested before.

Along similar lines, do we know how well the iqtest.dk test correlates with non-internet IQ tests? Getting a number is cool; knowing it was generated by a process fundamentally different from rand(100,160) would be even better.

Comment by wanderingsoul on Open Thread, October 16-31, 2012 · 2012-10-25T19:56:01.804Z · LW · GW

I hadn't, but it was worth my while. I agree, thanks.

Comment by wanderingsoul on Open Thread, October 16-31, 2012 · 2012-10-17T09:58:21.865Z · LW · GW

Not too long ago I wanted to write a poem to express a certain emotion, defiance toward death, but it only occurred to me recently that it might be LW-appropriate. I took a somewhat different path than "do not go gentle..." but you can judge for yourselves how it went. Posted in the open thread, as I feel it is relatively open to random stuff like this. (Formatting screwy because I'm not used to the format here yet.)

Defiance

I am afraid

        All about me the lights blink out

        Seeing their fate I’m filled with fear

        I want to run, I want to shout

        Perhaps this time someone will hear

I am dust

        Dancing mannequin of the wind

        I cannot see what strings bind me

        I have lived and laughed, loved and sinned

        Never knowing if I was free

                                But no more!

I am alive

        Bind me no more, you dust!  This mind

        knows the fires of love and life

        Any dust that burns this bright

        Is not cold enough to truly bind

I am human

        Two arms, two legs, a mind of steel

        Latest line of nature’s skyward stride

        But so much more, to think and feel

        In the land where no dreams have died

I am mankind

        I am not, nor was ever alone

        With loving brothers at my side

        I’ll shout the truth this love has shown

        The joy for which so many cried

And I am rising

        The life we feel is more than death

        The love is worth more than the fear

        And one day we’ll kill you, little death

        If it takes mankind a thousand years



        So take me if you will and can

        Though I’ll fight you the whole way

        Soon will come the age of man

        That day when a child can say

                                                         that

I am not afraid

Comment by wanderingsoul on Which questions about online classes would you ask Peter Norvig? · 2012-09-20T04:08:58.187Z · LW · GW

I'm no Peter Norvig, but this is the discussion section after all....

One tool that may or may not have a place in online education is gamification. To make a long story short, the gaming industry has gotten plenty of practice motivating people to keep going, even at tasks that wouldn't necessarily be the most interesting. Other industries have finally noticed this and started trying it out to see which concepts from gaming carry over well to other fields. I don't personally know of any research specific to education, but I would be interested if anything relevant were found.

An enthusiastic, low-level introduction to gamifying education: http://www.penny-arcade.com/patv/episode/gamifying-education

Comment by wanderingsoul on Stupid Questions Open Thread Round 3 · 2012-07-08T03:27:09.761Z · LW · GW

I might as well take a shot at explaining. Pascal's wager says I might as well take on the relatively small inconvenience of going through the motions of believing in God, because if the small-probability event occurs that he does exist, the reward is extremely large or infinite (eternal life in heaven, presumably).

Pascal's mugging instead makes this a relatively small payment ($5, as Yudkowsky phrased it) to avoid or mitigate a minuscule chance that someone may cause a huge amount of harm (putting dust specks in 3^^^^3 people's eyes, or whatever the current version is).

Thus in both cases, people are faced with making some small investment which, should an event of minuscule probability occur, vastly increases their utility. A lottery ticket, in effect.
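
A hedged sketch of that shared structure (symbols mine, not Pascal's or Yudkowsky's): with a small cost c, a tiny probability p, and an astronomical payoff U, naive expected utility says

$$
\text{take the wager} \iff p \cdot U > c,
$$

and when U is on the order of 3^^^^3, virtually any p > 0 that a human mind can entertain satisfies the inequality.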