Posts

Off to Alice Springs 2012-05-16T19:32:29.872Z
Just a reminder, for everyone that signed up for the intro to AI class, it's started. 2011-10-10T19:07:27.820Z
Am confused about part of the IC* algorithm in Pearl's Causality. 2011-08-06T16:16:22.565Z
Stanford Intro to AI course to be taught for free online 2011-07-30T16:22:49.197Z
Rationalist (well, skeptic, at least) webcomic. 2011-04-21T17:30:39.444Z
Gamification and rationality training 2011-04-09T17:59:00.780Z
death-is-bad-ism going a little bit more mainstream? 2011-03-24T08:27:23.506Z
Farmington Hills, MI Less Wrong meetup: Sunday, February 20 2011-02-13T18:42:18.958Z
Interest in LW meetup in Farmington Hills, Michigan? 2011-02-11T06:06:33.491Z
Ethics of Jury nullification and TDT? 2010-10-26T21:01:23.563Z
Self control may be contagious 2010-01-14T03:41:20.728Z
Timeless Identity Crisis 2009-09-11T02:37:01.745Z
Don't be Pathologically Mugged! 2009-08-28T21:47:56.463Z
How Not to be Stupid: Brewing a Nice Cup of Utilitea 2009-05-09T08:14:18.363Z
Conventions and Confusing Continuity Conundrums 2009-05-01T01:41:14.146Z
How Not to be Stupid: Adorable Maybes 2009-04-29T19:15:01.162Z
How Not to be Stupid: Know What You Want, What You Really Really Want 2009-04-28T01:11:46.326Z
How Not to be Stupid: Starting Up 2009-04-28T00:23:24.171Z

Comments

Comment by Psy-Kosh on The Quantum Arena · 2014-09-10T20:55:48.274Z · LW · GW

Ah, never mind then. I was thinking something like: let b(x, k) = 1/sqrt(2k) when |x| < k, and 0 otherwise,

then define integral B(x)f(x) dx as the limit, as k -> 0+, of integral b(x,k)f(x) dx.

I was thinking that then integral (B(x))^2 f(x) dx would be like integral delta(x)f(x) dx.
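
(Spelled out, for concreteness, that's:

```latex
b(x,k) =
\begin{cases}
  1/\sqrt{2k}, & |x| < k, \\
  0,           & \text{otherwise},
\end{cases}
\qquad
\int B(x) f(x)\,dx \;:=\; \lim_{k \to 0^+} \int b(x,k) f(x)\,dx,

\int b(x,k)^2 f(x)\,dx
  \;=\; \frac{1}{2k}\int_{-k}^{k} f(x)\,dx
  \;\longrightarrow\; f(0)
  \;=\; \int \delta(x) f(x)\,dx
  \quad \text{as } k \to 0^+ .
```

)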

Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and that wouldn't actually work. (Yeah, I can see now my reasoning wasn't actually valid there. Whoops.)

Ah well. Thank you for correcting me, then. :)

Comment by Psy-Kosh on Omission vs commission and conservation of expected moral evidence · 2014-09-10T19:39:12.981Z · LW · GW

I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

Comment by Psy-Kosh on The Quantum Arena · 2014-09-10T19:36:23.627Z · LW · GW

Aaaaarggghh! (Sorry, that was just because I realized I was being stupid... specifically, that I'd been thinking of the deltas as orthonormal because the integral of a delta = 1.)

Though... it occurs to me that one could construct something that acted like a "square root of a delta", which would then make an orthonormal basis (though still not part of the Hilbert space).

(EDIT: hrm... maybe not)

Anyways, thank you.

Comment by Psy-Kosh on The Quantum Arena · 2014-08-16T01:05:22.959Z · LW · GW

Meant to reply to this a bit back; this is probably a stupid question, but...

The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are zero almost everywhere, and are therefore considered to be equivalent to the zero function.

What about the semi-intuitive notion of having the Dirac delta distributions as a basis? I.e., a basis delta(x - R) parameterized by the vector R? How does that fit into all this?

Comment by Psy-Kosh on Proper value learning through indifference · 2014-06-23T18:42:35.151Z · LW · GW

Ah, alright.

Actually, come to think of it, even specifying the desired behavior would be tricky. Say the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other form of mixed hypothesis re possible future transitions: what rules should an ideal moral-learning reasoner follow today?

I'm not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I'm not sure what the Right Way for a "conserves expected moral evidence" agent is. There are some special cases that seem to be well specified, but I'm not sure how I'd want it to behave in the general case.
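
(For instance, a minimal sketch of the "mix over normalized versions" special case; the [0, 1] rescaling below is just one assumed normalization scheme, and it's exactly what breaks if a utility function is unbounded:)

```python
def normalize(u, outcomes):
    """Rescale utilities (a dict) to [0, 1] over a finite outcome set."""
    lo, hi = min(u[o] for o in outcomes), max(u[o] for o in outcomes)
    if hi == lo:                       # constant utility: indifferent
        return {o: 0.0 for o in outcomes}
    return {o: (u[o] - lo) / (hi - lo) for o in outcomes}

def mixed_choice(outcomes, v, w, p_transition):
    """Maximize the probability-weighted mix of normalized utilities."""
    vn, wn = normalize(v, outcomes), normalize(w, outcomes)
    return max(outcomes,
               key=lambda o: (1 - p_transition) * vn[o] + p_transition * wn[o])

outcomes = ["apple", "orange", "nothing"]
v = {"apple": 10, "orange": 0, "nothing": 2}   # current values
w = {"apple": 0, "orange": 4, "nothing": 3}    # possible future values
# With p = 1/2 the two top outcomes tie under this normalization;
# at p = 0.6 the agent already leans toward w's favorite:
print(mixed_choice(outcomes, v, w, p_transition=0.6))  # -> "orange"
```

Even this toy version shows the problem: the answer depends entirely on the choice of normalization, which is part of what seems underspecified.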

Comment by Psy-Kosh on Open Thread April 8 - April 14 2014 · 2014-06-22T20:16:38.636Z · LW · GW

Not sure. They don't actually tell you that.

Comment by Psy-Kosh on Proper value learning through indifference · 2014-06-22T20:14:32.906Z · LW · GW

Really interesting, but I'm a bit confused about something. Unless I misunderstand, you're claiming this has the property of conservation of moral evidence... but near as I can tell, it doesn't.

Conservation of moral evidence would imply that if the agent expected that tomorrow it would transition from v to w, then right now it would already be acting on w rather than v (except for being indifferent as to whether or not it actually transitions to w). But what you have here would, if I understood correctly, act on v until the moment it transitions to w, even though it knew in advance that the transition was coming.

Comment by Psy-Kosh on Open Thread April 8 - April 14 2014 · 2014-06-07T00:46:55.068Z · LW · GW

Yeah, found that out during the final interview. Sadly, found out several days ago they rejected me, so it's sort of moot now.

Comment by Psy-Kosh on Absence of Evidence Is Evidence of Absence · 2014-05-16T22:28:23.709Z · LW · GW

Alternately, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.
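
(In odds form, with a Solomonoff-style complexity-penalizing prior standing in as one assumed formalization:

```latex
\frac{P(H_{\mathrm{alt}} \mid \neg E)}{P(H \mid \neg E)}
  \;=\; \frac{P(\neg E \mid H_{\mathrm{alt}})}{P(\neg E \mid H)}
        \cdot \frac{P(H_{\mathrm{alt}})}{P(H)}
  \;\approx\; 1 \cdot 2^{-\left(K(H_{\mathrm{alt}}) - K(H)\right)} .
```

If both hypotheses predict the absence equally well, the likelihood ratio is about 1, and the posterior odds are dominated by the complexity penalty.)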

Comment by Psy-Kosh on Open Thread April 8 - April 14 2014 · 2014-05-16T18:57:28.397Z · LW · GW

Hey there, I'm in the middle of the application process. (They're having me do the prep work as part of the application.) Anyway...

B) If you don't mind too much: stay at App Academy. It isn't comfortable but you'll greatly benefit from being around other people learning web development all the time and it will keep you from slacking off.

I'm confused about that. App Academy has housing/dorms? I didn't see anything about that. Or did I misunderstand what you meant?

Comment by Psy-Kosh on [deleted post] 2014-04-03T04:41:00.956Z

Cool! (Though it does seem that a license would be useful for longer trips, so you'd at least have the option of renting a vehicle if needed.)

And interesting point re social environment.

Comment by Psy-Kosh on [deleted post] 2014-04-02T05:47:59.773Z

I'm just going to say I particularly liked the idea of the house cable transport system.

Comment by Psy-Kosh on [deleted post] 2014-04-02T05:16:00.113Z

Yeah, that was my very first thought re the tunnels. Excavation is expensive. (And maintenance costs would be rather higher as well.)

OTOH, we don't even need a full solution (including a legal solution) to self-driving cars to improve things. The obvious answer to "but I might need to go on a 200-mile trip" is "rent a long-distance car as needed, and otherwise own a commuter car."

That involves far fewer coordination problems, because it's something one can pretty much do right now. Next time one goes to purchase/lease/whatever a vehicle, get one appropriate/efficient for short distances, and just rent a long-haul vehicle as needed.

(Or, if living in place with decent public transport, potentially no need to own a vehicle at all, of course.)

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2014-01-04T18:53:57.094Z · LW · GW

Running a bit late, but still coming, just about to head out.

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2014-01-04T02:17:18.041Z · LW · GW

Cool! In that case, as of now at least, I'm still planning on showing up.

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2014-01-03T18:07:53.710Z · LW · GW

Well, I could bring a few extra chairs if wanted. (Although are we even still on for tomorrow given how the roads are? (Admittedly, Sunday will probably be worse...))

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2014-01-02T17:27:17.917Z · LW · GW

Well, as I said: anything else needed? (More chairs? Other stuff?)

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2013-12-28T19:11:59.780Z · LW · GW

As of now, I'm planning on coming.

Anything I should be bringing? (I.e., extra chairs, whatever?)

Comment by Psy-Kosh on Open Thread, November 1 - 7, 2013 · 2013-12-02T22:09:23.411Z · LW · GW

Hrm... The whole exist-vs-non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem to be any difference between 1 and N of them as far as that goes.

Comment by Psy-Kosh on Open Thread, November 1 - 7, 2013 · 2013-11-09T21:59:30.276Z · LW · GW

That seems to seriously violate the GAZP. I'm trying to figure out how to put my thoughts on this into words, but... there doesn't seem to be anywhere the data is stored that could "notice" the difference. The actual program that is being the person doesn't contain a "realness counter". There's nowhere in the data that could "notice" the fact that there's, well, more of the person. (Whatever it even means for there to be "more of a person".)

Personally, I'm inclined in the opposite direction: that even N separate copies of the same person are the same as 1 copy of that person until they diverge, and how much difference there is between them is, well, how separate they are.

(Though, of course, those funky Born stats confuse me even further. But I'm fairly inclined toward "extra copies of the exact same mind don't add more person-ness, but as they diverge from each other, there may be more person-ness." Though perhaps it may be meaningful to talk about additional fractions of person-ness, rather than just one and then suddenly two whole persons. I'm less sure on that.)

Comment by Psy-Kosh on Open Thread: November 2009 · 2013-10-27T00:15:48.162Z · LW · GW

I don't think I was implying that physicists are anti-MWI, merely that they don't, as a whole, consider it slam-dunk, already settled.

Comment by Psy-Kosh on Completeness, incompleteness, and what it all means: first versus second order logic · 2013-10-27T00:10:22.614Z · LW · GW

I've been thinking... How is it that we can meaningfully even think about full-semantics second-order logic, if physics is computable?

What I mean is... what's going on when we think we're talking about, or thinking about, full semantics? That is, if no explicit rule-following computable thingy can encode rules/etc. that pin down full semantics uniquely, what are our brains doing when we think we mean something when we mean "every" subset?
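
(For concreteness, the standard example is the second-order induction axiom:

```latex
\forall P \,\bigl(\, P(0) \,\wedge\, \forall n\, \bigl(P(n) \rightarrow P(n+1)\bigr) \;\rightarrow\; \forall n\, P(n) \,\bigr) .
```

Under full semantics, P ranges over literally every subset of the domain; that's what makes second-order arithmetic categorical, and it's exactly that "every" which no computable proof system pins down.)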

I'm worried that it might be one of those things that feels/seems meaningful, but isn't: that our brains cannot explicitly "pin down" that model. So... what are we actually thinking we're thinking about when we're thinking we're thinking about full semantics/"every possible subset"?

(Does what I'm asking make sense to anyone? And if so... any answers?)

Comment by Psy-Kosh on The Apologist and the Revolutionary · 2013-09-08T02:57:56.676Z · LW · GW

Odd indeed, but if it works for you, that's good. (How long does the effect last?)

Comment by Psy-Kosh on How I Lost 100 Pounds Using TDT · 2013-07-17T20:48:11.015Z · LW · GW

Thermodynamics is not any more useful than quantum mechanics in understanding obesity. It is moralizing disguised as an invocation of natural law.

Mm... I guess this would be a case of: I agree with the connotations of what you're saying, but not with the explicitly stated form, which I'd say goes a bit too far. It's probably more fair to say "energy-in - energy-spent - energy-out-without-being-spent = net delta energy" is part of the story, simply not the whole story.

It doesn't capture the ways in which, say, one might become unwell/faint without sufficient energy-in of certain forms, even if one already has a reserve of energy that is theoretically available to one's metabolism.

It's probably a useful thing to keep in mind when trying to diet, for those who can usefully diet that way, but it's not the whole story, and other info (much of it perhaps not yet discovered) is also needed. (And certainly using it as an excuse to moralize/shame is completely invalid.)

But I wouldn't call it useless, merely insufficient. What is useless is to pretend there aren't really important variables that can influence the extent to which one can usefully and directly apply the thermodynamics. (People who ignore the ways other variables affect the ability to apply the thermodynamic facts, and thus condescendingly say "feh, just eat less and exercise more, this is sufficient advice for all people in all circumstances", are, of course, being poopyheads.)

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2013-07-17T02:47:42.226Z · LW · GW

Oh, incidentally, just commenting that that's a good date: it's the anniversary of a certain One Small Step. :)

Comment by Psy-Kosh on Meetup : Southeast Michigan · 2013-07-16T15:19:32.287Z · LW · GW

I intend to attend.

Comment by Psy-Kosh on Start Under the Streetlight, then Push into the Shadows · 2013-06-26T00:47:31.945Z · LW · GW

Science itself would be a major "flashlight", I guess?

Comment by Psy-Kosh on Can we dodge the mindkiller? · 2013-06-16T21:50:44.167Z · LW · GW

Alternative Vote is Instant Runoff Voting, right? If so, then it's bad, for it fails the monotonicity criterion. That means that ranking a particular candidate higher doesn't necessarily do the obvious thing.
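
(A concrete failure case, as a sketch; the 21-voter profile below is made up, but it exhibits the problem, assuming no ties need breaking:)

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff: repeatedly eliminate the candidate with the
    fewest first-place votes among the remaining candidates."""
    remaining = {c for ballot in ballots for c in ballot}
    while len(remaining) > 1:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        loser = min(remaining, key=lambda c: firsts[c])
        remaining.discard(loser)
    return remaining.pop()

# 21 voters, candidates A, B, C.
before = ( 8 * [("A", "B", "C")]
         + 2 * [("B", "A", "C")]   # these two voters will raise A
         + 5 * [("B", "C", "A")]
         + 6 * [("C", "A", "B")])
# Same electorate, except the two B>A>C voters now rank A *first*:
after  = (10 * [("A", "B", "C")]
         + 5 * [("B", "C", "A")]
         + 6 * [("C", "A", "B")])

print(irv_winner(before))  # A  (C eliminated first; C's votes flow to A)
print(irv_winner(after))   # C  (B eliminated first; B's votes flow to C)
```

Two voters raising A from second place to first flips A from winner to loser, because it changes who gets eliminated first.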

Personally, I favor Approval Voting, since it seems to be the simplest possible change to our voting system that would still produce large gains.

(Also, it would be nice if we (the US, that is) could switch to algorithmic redistricting and completely get rid of the whole gerrymandering nonsense.)

Comment by Psy-Kosh on Rationality Quotes June 2013 · 2013-06-07T15:48:56.851Z · LW · GW

Hrm... But "self-interest" is itself a fairly broad category, including many sub categories like emotional state, survival, fulfillment of curiosity, self determination, etc... Seems like it wouldn't be that hard a step, given the evolutionary pressures there have been toward cooperation and such, for it to be implemented via actually caring about the other person's well being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only think you care about is your own reinforcement system.

Comment by Psy-Kosh on Rationality Quotes June 2013 · 2013-06-06T00:31:55.904Z · LW · GW

Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?

Comment by Psy-Kosh on Sorting Pebbles Into Correct Heaps · 2013-03-07T00:34:08.642Z · LW · GW

Ah, whoops.

Comment by Psy-Kosh on Causal Universes · 2013-03-07T00:29:45.139Z · LW · GW

Re your checking method to construct/simulate an acausal universe: near as I can tell, it won't work.

Specifically, the very act of verifying a string to be a Life (or Life + time travel, or whatever) history requires actually computing the CA rules, doesn't it? So in the act of verification, if nothing else, all the computing needed to make a string that contains minds actually contain those minds would have to occur, near as I can make out.
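
(To make that concrete, here's a minimal history-checker for plain Life, no time travel, on a set-of-live-cells representation. Note that the "verification" is nothing but running the update rule:)

```python
from collections import Counter

def life_step(grid):
    """One step of Conway's Life; a grid is a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in grid)}

def verify_history(history):
    """Certify that a sequence of grids is a lawful Life history.
    Verifying each transition *is* computing the CA rule."""
    return all(life_step(now) == nxt
               for now, nxt in zip(history, history[1:]))

# A blinker oscillating for three ticks:
history = [{(0, 0), (1, 0), (2, 0)},
           {(1, -1), (1, 0), (1, 1)},
           {(0, 0), (1, 0), (2, 0)}]
print(verify_history(history))  # True
```

A checker for histories with time-travel edge conditions would be the same deal: each transition only gets certified by actually computing it.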

Comment by Psy-Kosh on Sorting Pebbles Into Correct Heaps · 2013-03-02T05:30:38.289Z · LW · GW

He wasn't endorsing that position. He was saying "pebblesorters should not do so, but they pebblesorter::should do so."

ie, "should" and "pebblesorter::should" are two different concepts. "should" appeals to that which is moral, "pebblesorter::should" appeals to that which is prime. The pebblesorters should not have killed him, but they pebblesorter::should have killed them.

Think of it this way: imagine a murdermax function that scores states/histories of reality based on how many people were murdered. Then people shouldn't be murdered, but they murdermax::should be murdered. This is not an endorsement of doing what one murdermax::should do. Not at all. Doing the murdermax thing is bad.

Comment by Psy-Kosh on A Series of Increasingly Perverse and Destructive Games · 2013-02-16T23:58:12.472Z · LW · GW

Looking down the thread, I think one or two others may have beaten me to it, too. But yes, it seems at least that Omega would be handing the programmers a really nice toy, and (conditional on the programmers having the skill to wield it), well...

Yes, there is that catch, hrm... One could put something into the code that makes the inhabitants occasionally work on the problem, thus really deeply intertwining the two things.

Comment by Psy-Kosh on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T06:00:24.222Z · LW · GW

Game 3 has an entirely separate strategy available to it: don't worry initially about trying to win... instead, code a nice simulator/etc. for all the inhabitants of the simulation, one that can grow without bound and allows them to improve (and control the simulation from inside).

You might not "win", but a version of three players will go on to found a nice large civilization. :) (Take that Omaga.)

(In the background, have it also run a thread computing increasingly large numbers, plus some way to randomly decide which of some set of numbers to output, to effectively randomize which of the three original players wins. Of course, that's a small matter compared to the simulated world, which, by hypothesis, has unbounded computational power available to it.)

Comment by Psy-Kosh on The Level Above Mine · 2013-01-24T00:37:41.255Z · LW · GW

You know, I want to say you're completely and utterly wrong. I want to say that it's safe to at least release The Actual Explanation of Consciousness if and when you should solve such a thing.

But, sadly, I know you're absolutely right re the existence of trolls who would make a point of using that to create suffering. Not just to get a reaction; some would do it specifically to have a world of beings they could torment.

My model is not that all those trolls are identical. (I've seen some who will explicitly and unambiguously draw the line and recognize that egging on suicidal people is something that One Does Not Do, but I've also seen that all too many gleefully do exactly that.)

Comment by Psy-Kosh on Lifeism in the midst of death · 2012-12-10T00:32:54.569Z · LW · GW

I'm sorry. *offers a hug* Not sure what else to say.

For what it's worth, in response to this, I just sent $20 to each of SENS and SIAI.

Comment by Psy-Kosh on The Evil AI Overlord List · 2012-11-21T05:43:22.820Z · LW · GW

I was imagining that a potential blackmailer would self-modify into (or simply be) an Always-Blackmail-bot, specifically to make sure there would be no incentive for potential victims to be Never-Give-In-To-Blackmail-bots.

But that leads to a stupid equilibrium of plenty of blackmailers and no participating victims. Everyone loses.
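
(As a toy illustration, with made-up payoff numbers, of the commitment-race structure:)

```python
# Payoffs as (blackmailer, victim); the numbers are illustrative assumptions:
# carrying out a threat costs the blackmailer a little and hurts the victim a lot.
payoff = {
    ("blackmail", "give_in"): (2, -2),   # blackmail pays if victims cave
    ("blackmail", "ignore"):  (-1, -5),  # threat carried out: both lose
    ("refrain",   "give_in"): (0, 0),    # status quo
    ("refrain",   "ignore"):  (0, 0),    # status quo
}

def blackmailer_best_response(victim_strategy):
    return max(("blackmail", "refrain"),
               key=lambda b: payoff[(b, victim_strategy)][0])

def victim_best_response(blackmailer_strategy):
    return max(("give_in", "ignore"),
               key=lambda v: payoff[(blackmailer_strategy, v)][1])

# Whoever credibly commits first "wins":
print(blackmailer_best_response("ignore"))   # refrain
print(victim_best_response("blackmail"))     # give_in
# And if both sides commit (Always-Blackmail vs. Never-Give-In):
print(payoff[("blackmail", "ignore")])       # (-1, -5): everyone loses
```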

Yes, I agree that no blackmail seems to be the Right Equilibrium, but it's not obvious to me exactly how to get there without the same reasoning that leads to becoming a never-give-in-bot also leading potential blackmailers to becoming always-blackmail-bots.

I find I am somewhat confused on this matter. Well, frankly I suspect I'm just being stupid, that there's some obvious extra step in the reasoning I'm being blind to. It "feels" that way, for lack of better terms.

Comment by Psy-Kosh on The Evil AI Overlord List · 2012-11-21T05:39:29.937Z · LW · GW

I was thinking along the lines of the blackmailer using the same reasoning to decide that, whether or not the potential victim would be a blackmail-ignorer, the blackmailer would still blackmail regardless.

I.e., the Blackmailer, by reasoning similar to the potential Victim's, decides to make sure the victim has nothing to gain by choosing to ignore, by precommitting to blackmail regardless. In this sense the blackmailer is also taking the "do nothing" route, in the sense that there's nothing the victim can do to stop them from blackmailing.

This sort of thing would seem to lead to an equilibrium of lots of blackmailers blackmailing victims who will ignore them. Which is, of course, a pathological outcome, and any sane decision theory should reject it. No blackmail seems like the "right" equilibrium, but it's not obvious to me exactly how TDT would get there.

Comment by Psy-Kosh on The Evil AI Overlord List · 2012-11-20T23:35:19.919Z · LW · GW

Wouldn't the blackmailer reason along the lines of "If I let my choice of whether to blackmail be predicated on whether or not the victim would take my blackmailing into account, wouldn't that just give them motive to predict that, and self-modify to not allow themselves to be influenced by it?" Then, by the corresponding reasoning, the potential blackmail victims might reason, "I have nothing to gain by ignoring it."

I'm a bit confused on this matter.

Comment by Psy-Kosh on What does the world look like, the day before FAI efforts succeed? · 2012-11-18T07:47:54.773Z · LW · GW

The idea is not "take an arbitrary superhuman AI and then verify it's destined to be well behaved" but rather "develop a mathematical framework that allows you from the ground up to design a specific AI that will remain (provably) well behaved, even though you can't, for arbitrary AIs, determine whether or not they'll be well behaved."

Comment by Psy-Kosh on Causal Reference · 2012-10-28T07:03:49.300Z · LW · GW

How, precisely, does one formalize the concept "the bucket of pebbles represents the number of sheep, but it is doing so inaccurately"? I.e., that it's a model of the number of sheep, rather than of something else, but a bad/inaccurate model?

I've fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.

The best I can come up with is something like "I have correct models in my head for the bucket, pebbles, sheep, etc. individually, except that I also have some causal paths linking them that don't match the links that exist in reality."

Comment by Psy-Kosh on [Link] Offense 101 · 2012-10-28T02:48:05.786Z · LW · GW

But you can argue for anything. You might refuse to do so but the possibility is always there.

Presumably one would want to define "strong argument" in such a way that strong arguments tend to be more available for true things than for false things.

Comment by Psy-Kosh on The Fabric of Real Things · 2012-10-13T01:08:24.641Z · LW · GW

Koan 4: How well do mathematical truths fit into this rule of defining what sort of things can be meaningful?

Comment by Psy-Kosh on Random LW-parodying Statement Generator · 2012-09-14T00:27:34.522Z · LW · GW

"what is true is already so. the statement that "a/an upload of Pinkie Pie will kill you because you are made of the utility function of the Society for Rare Diseases in Cute Puppies that it could use for something else." doesn't make it worse" is obviously false? Have a lot of caring!

hrm...

You make a compelling argument that a/an babyeater is the art of winning at infanticide.

I guess that one works.

Comment by Psy-Kosh on Irrationality Game II · 2012-07-10T03:22:43.736Z · LW · GW

2% is way way way WAY too high for something like that. You shouldn't be afraid to assign a probability much closer to 0.

Comment by Psy-Kosh on Backward Reasoning Over Decision Trees · 2012-07-06T02:22:01.174Z · LW · GW

Ouch.

Comment by Psy-Kosh on Open Thread: November 2009 · 2012-07-06T02:19:59.248Z · LW · GW

Fair enough. (Well, technically both should move at least a little bit, of course, but I know what you mean.)

It would cause me to update in the direction of believing that more physicists probably see MWI as slam-dunk.

Hee hee. :)

Comment by Psy-Kosh on Less Wrong Product & Service Recommendations · 2012-07-06T01:31:46.765Z · LW · GW

The Kindle Touch 3G's free 3G is limited to stuff like Wikipedia and Amazon. (I have a Touch, and I like it, btw.) More general net access on the Kindle Touch is via wifi only.

Comment by Psy-Kosh on Backward Reasoning Over Decision Trees · 2012-07-06T01:22:41.887Z · LW · GW

But... even granting that they're not that clever, you'd think they'd know that the ability to arbitrarily slice and dice a bill would be too much. (I know I may be displaying hindsight bias, but... they're politicians! They have to have had experience with, say, people taking their (or colleagues') words out of context and making them sound like something else, or they themselves doing it to an opponent, right?)

I.e., the ability to slice and dice some communication into something entirely different is something you'd think they'd already have personal experience with. At least, that's what I'd imagine. Though, still, Hanlon's Razor and all that.