Comments
Ah, never mind then. I was thinking something like: let b(x,k) = 1/sqrt(2k) when |x| < k and 0 otherwise,
then define integral B(x)f(x) dx as the limit as k->0+ of integral b(x,k)f(x) dx
I was thinking that then integral (B(x))^2 f(x) dx would be like integral delta(x)f(x) dx.
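(Spelling both limits out, which is also where the problem shows up: the squared family does behave like a delta, but the unsquared family converges weakly to zero, so the "B" being defined is just the zero functional.)

```latex
\int b(x,k)^2 f(x)\,dx = \frac{1}{2k}\int_{-k}^{k} f(x)\,dx \;\longrightarrow\; f(0)
\quad\text{as } k \to 0^+,
\qquad\text{but}\qquad
\int b(x,k)\,f(x)\,dx = \frac{1}{\sqrt{2k}}\int_{-k}^{k} f(x)\,dx \;\approx\; \sqrt{2k}\,f(0) \;\longrightarrow\; 0.
```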
Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and that wouldn't actually work. (Yeah, I can see now my reasoning wasn't actually valid there. Whoops.)
Ah well. Thank you for correcting me, then. :)
I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:
Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?
Aaaaarggghh! (sorry, that was just because I realized I was being stupid... specifically that I'd been thinking of the deltas as orthonormal because the integral of a delta = 1.)
Though... it occurs to me that one could construct something that acted like a "square root of a delta", which would then make an orthonormal basis (though still not part of the Hilbert space).
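(The tempting, possibly bogus, calculation, treating the distributions purely formally:)

```latex
\langle \delta_a, \delta_b \rangle = \int \delta(x-a)\,\delta(x-b)\,dx = \delta(a-b),
\qquad\text{so}\qquad
\langle \delta_a, \delta_a \rangle = \delta(0) \text{ diverges},
```

whereas a hypothetical "square root of a delta" s_a with s_a(x)^2 = delta(x-a) would at least satisfy integral s_a(x)^2 dx = 1.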
(EDIT: hrm... maybe not)
Anyways, thank you.
Meant to reply to this a bit back. This is probably a stupid question, but...
The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are elements of the set of functions that are zero almost everywhere, and are therefore considered to be equivalent to the zero function.
What about the semi-intuitive notion of having the Dirac delta distributions as a basis? ie, a basis delta(X - R) parameterized by the vector R? How does that fit into all this?
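(For reference, the reason the single-point functions in the quoted paragraph collapse to zero, assuming the Hilbert space in question is the usual L^2 with Lebesgue measure:)

```latex
f_a(x) = \begin{cases} 1 & x = a \\ 0 & x \neq a \end{cases}
\qquad\Longrightarrow\qquad
\|f_a\|^2 = \int |f_a(x)|^2\,dx = 0,
```

since f_a is nonzero only on a set of measure zero, so each f_a is the same element of L^2 as the zero function.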
Ah, alright.
Actually, come to think of it, even specifying the desired behavior would be tricky. Like, if the agent assigned a probability of 1/2 to the proposition that tomorrow it'd transition from v to w, or held some other form of mixed hypothesis about possible future transitions, what rules should an ideal moral-learning reasoner follow today?
I'm not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I'm not sure what the Right Way for a "conserves expected moral evidence" agent is. There are some special cases that seem to be well specified, but I'm not sure how I'd want it to behave in the general case.
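One candidate formalization of the "mix over normalized versions" idea (just my own sketch, not anything from the post):

```latex
V_{\text{today}}(a) \;=\; \sum_i P(\text{I will hold } v_i \text{ tomorrow}) \cdot \frac{v_i(a)}{\|v_i\|},
```

which is exactly where it breaks if some v_i is unbounded, since there's then no obvious choice of norm to normalize by.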
Not sure. They don't actually tell you that.
Really interesting, but I'm a bit confused about something. Unless I misunderstand, you're claiming this has the property of conservation of moral evidence... But near as I can tell, it doesn't.
Conservation of moral evidence would imply that if it expected that tomorrow it would transition from v to w, then right now it would already be acting on w rather than v (except for being indifferent as to whether or not it actually transitions to w). But what you have here would, if I understood what you said correctly, act on v until the moment it transitions to w, even though it knew in advance it was going to transition to w.
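To be explicit about the property I mean (my paraphrase, by analogy with conservation of expected evidence, not necessarily the paper's formalism): the values actually being acted on should form a martingale,

```latex
V_t \;=\; \mathbb{E}_t\!\left[ V_{t+1} \right],
\qquad\text{analogous to}\qquad
P(H) \;=\; \sum_{e} P(e)\,P(H \mid e),
```

so an agent that already knows it will be acting on w tomorrow should be acting on w today.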
Yeah, found that out during the final interview. Sadly, found out several days ago they rejected me, so it's sort of moot now.
Alternately, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.
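As a toy illustration with made-up complexity-prior numbers: if both hypotheses predict the absence equally well, the likelihood ratio is 1 and only the complexity-penalized priors move the posterior odds:

```latex
\frac{P(H_1 \mid \text{absence})}{P(H_2 \mid \text{absence})}
= \frac{P(\text{absence} \mid H_1)}{P(\text{absence} \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}
= 1 \cdot \frac{2^{-K_1}}{2^{-K_2}},
```

so the extra description length K_2 - K_1 of the more complex hypothesis is exactly what it costs.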
Hey there, I'm mid-application-process. (They're having me do the prep work as part of the application.) Anyways...
B) If you don't mind too much: stay at App Academy. It isn't comfortable but you'll greatly benefit from being around other people learning web development all the time and it will keep you from slacking off.
I'm confused about that. App Academy has housing/dorms? I didn't see anything about that. Or did I misunderstand what you meant?
Cool! (Though does seem that a license would be useful for longer trips, so you'd at least have the option of renting a vehicle if needed.)
And interesting point re social environment.
I'm just going to say I particularly liked the idea of the house cable transport system.
Yeah, that was my very first thought re the tunnels. Excavation is expensive. (and maintenance costs would be rather higher as well.)
OTOH, we don't even need a full solution (including the legal solution) to self-driving cars to improve stuff. The obvious answer to "but I might need to go on a 200 mile trip" is "rent a long-distance car as needed, and otherwise own a commuter car."
That involves far fewer coordination problems, because it's something one can pretty much do right now: next time one goes to purchase/lease/whatever a vehicle, get one appropriate/efficient/etc for short distances, and just rent a long-haul vehicle as needed.
(Or, if living in place with decent public transport, potentially no need to own a vehicle at all, of course.)
Running a bit late, but still coming, just about to head out.
Cool! In that case, as of now at least, I'm still planning on showing up.
Well, I could bring a few extra chairs if wanted. (Although are we even still on for tomorrow, given how the roads are? (Admittedly, Sunday will probably be worse...))
Well, as I said, anything else needed? (more chairs? other stuff?)
As of now, I'm planning on coming.
Anything I should be bringing? (ie, extra chairs, whatever?)
Hrm... The whole exist vs non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note "there exists an algorithm doing/perceiving X", where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn't seem to be any difference between 1 and N of them as far as that goes.
That seems to be seriously GAZP violating. Trying to figure out how to put my thoughts on this into words but... There doesn't seem to be anywhere that the data is stored that could "notice" the difference. The actual program that is being the person doesn't contain a "realness counter". There's nowhere in the data that could "notice" the fact that there's, well, more of the person. (Whatever it even means for there to be "more of a person")
Personally, I'm inclined in the opposite direction: that even N separate copies of the same person are the same as 1 copy of that person until they diverge, and how much difference there is between them is, well, how separate they are.
(Though, of course, those funky Born stats confuse me even further. But I'm fairly inclined toward "extra copies of the exact same mind don't add more person-ness, but as they diverge from each other, there may be more person-ness." (Though perhaps it may be meaningful to talk about additional fractions of person-ness rather than just one and then suddenly two whole persons. I'm less sure on that.))
I don't think I was implying that physicists are anti-MWI, merely that they don't, as a whole, consider it to be slam-dunk, already settled.
I've been thinking... How is it that we can even meaningfully think about full-semantics second-order logic if physics is computable?
What I mean is... what is actually going on when we think we're talking about or thinking about full semantics? That is, if no explicit rule-following computable thingy can encode rules/etc that pin down full semantics uniquely, what are our brains doing when we think we mean something by "every" subset?
I'm worried that it might be one of those things that feels/seems meaningful, but isn't. That our brains cannot explicitly "pin down" that model. So... what are we actually thinking we're thinking about when we're thinking we're thinking about full semantics/"every possible subset"?
(Does what I'm asking make sense to anyone? And if so... any answers?)
Odd indeed, but if it works for you, that's good. (How long does the effect last?)
Thermodynamics is not any more useful than quantum mechanics in understanding obesity. It is moralizing disguised as an invocation of natural law.
Mm... I guess this would be a case of my agreeing with the connotations of what you're saying, but not with the explicitly stated form, which I'd say goes a bit too far. It's probably more fair to say that "energy-in - energy-spent - energy-out-without-being-spent = net delta energy" is part of the story, simply not the whole story.
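(ie, the bookkeeping part of the story, written out:)

```latex
\Delta E_{\text{stored}} \;=\; E_{\text{in}} \;-\; E_{\text{spent}} \;-\; E_{\text{out, unspent}},
```

which constrains the totals, but by itself says nothing about what determines each of those terms.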
It doesn't capture the ways in which, say, one might become unwell/faint without sufficient energy-in of certain forms, even if one already has a reserve of energy that is theoretically available to one's metabolism.
It's probably a useful thing to keep in mind when trying to diet, for those that can usefully diet that way, but it's not the whole story, and other info (much of it perhaps not yet discovered) is also needed. (And certainly using it as an excuse to moralize/shame is completely invalid.)
But I wouldn't call it useless, merely insufficient. What is useless is to pretend that there aren't really important variables that can influence the extent to which one can usefully directly apply the thermodynamics. (People who ignore the ways that other variables can influence the ability to usefully apply the thermodynamic facts and thus condescendingly say "feh, just eat less and exercise more, this is sufficient advice for all people in all circumstances" are, of course, being poopyheads.)
Oh, incidentally, just commenting that's a good date, it's the anniversary of a certain One Small Step. :)
I intend to attend.
Science itself would be a major "flashlight", I guess?
Alternative vote is Instant Runoff Voting, right? If so, then it's bad, because it fails the monotonicity criterion. That means that ranking a particular candidate higher on one's ballot doesn't necessarily do the obvious thing.
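To make the monotonicity failure concrete, here's a toy instant-runoff simulation (the ballot counts are made-up, textbook-style numbers, and the winner-finding code is my own minimal sketch):

```python
# Toy instant-runoff (IRV) simulation showing a monotonicity failure.
from collections import Counter

def irv_winner(ballots):
    """ballots: list of (count, ranking) pairs, where ranking orders the
    candidates from most to least preferred. Returns the IRV winner."""
    remaining = {c for _, ranking in ballots for c in ranking}
    while True:
        # Tally first preferences among candidates still in the race.
        tally = Counter()
        for count, ranking in ballots:
            for c in ranking:
                if c in remaining:
                    tally[c] += count
                    break
        leader, leader_votes = tally.most_common(1)[0]
        if 2 * leader_votes > sum(tally.values()) or len(remaining) == 1:
            return leader
        # Otherwise eliminate the candidate with the fewest first preferences.
        remaining.discard(min(tally, key=tally.get))

# Before: A wins (C is eliminated first and C's voters transfer to A).
before = [(39, ("A", "B", "C")),
          (35, ("B", "C", "A")),
          (26, ("C", "A", "B"))]

# After: 10 of the B>C>A voters *raise* A to the top of their ballots.
after = [(49, ("A", "B", "C")),
         (25, ("B", "C", "A")),
         (26, ("C", "A", "B"))]

print(irv_winner(before))  # A
print(irv_winner(after))   # C -- ranking A higher made A lose
```

Raising A on those ten ballots changes which candidate gets eliminated first, and that's the whole trick.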
Personally, I favor Approval Voting, since it seems to be the simplest possible change to our voting system that would still produce large gains.
(Also, would be nice if we (the US, that is) could switch to algorithmic redistricting and completely get rid of the whole gerrymandering nonsense.)
Hrm... But "self-interest" is itself a fairly broad category, including many sub categories like emotional state, survival, fulfillment of curiosity, self determination, etc... Seems like it wouldn't be that hard a step, given the evolutionary pressures there have been toward cooperation and such, for it to be implemented via actually caring about the other person's well being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only think you care about is your own reinforcement system.
Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?
Ah, whoops.
Re your checking method to construct/simulate an acausal universe: it won't work, near as I can tell.
Specifically, the very act of verifying a string to be a life (or life + time travel or whatever) history requires actually computing the CA rules, doesn't it? So in the act of verification, if nothing else, all the computing needed to make a string that contains minds actually contain the minds would have to occur, near as I can make out.
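A minimal sketch of what I mean (my own toy code, nothing to do with the original proposal's details): checking a claimed history is the same work as running the rule.

```python
# "Verifying" a claimed Game-of-Life history already requires running
# the CA rule on every time slice.
from collections import Counter

def life_step(live_cells):
    """One step of Conway's Life; the state is a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

def verify_history(history):
    """Check that each slice follows from the previous one -- which is
    exactly the same computation as simulating the CA in the first place."""
    return all(life_step(history[t]) == history[t + 1]
               for t in range(len(history) - 1))

# A blinker oscillating, written down as a three-slice "history" to be checked.
blinker = [{(0, 1), (1, 1), (2, 1)},
           {(1, 0), (1, 1), (1, 2)},
           {(0, 1), (1, 1), (2, 1)}]
print(verify_history(blinker))  # True
```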
He wasn't endorsing that position. He was saying "pebblesorters should not do so, but they pebblesorter::should do so."
ie, "should" and "pebblesorter::should" are two different concepts. "should" appeals to that which is moral, "pebblesorter::should" appeals to that which is prime. The pebblesorters should not have killed him, but they pebblesorter::should have killed them.
Think of it this way: imagine the murdermax function that scores states/histories of reality based on how many people were murdered. Then people shouldn't be murdered, but they murdermax::should be murdered. This is not an endorsement of doing what one murdermax::should do. Not at all. Doing the murdermax thing is bad.
Looking down the thread, I think one or two others may have beaten me to it too. But yes, it seems at least that Omega would be handing the programmers a really nice toy and (conditional on the programmers having the skill to wield it), well...
Yes, there is that catch, hrm... Could put something into the code that makes the inhabitants occasionally work on the problem, thus really deeply intertwining the two things.
Game3 has an entirely separate strategy available to it: Don't worry initially about trying to win... instead code a nice simulator/etc for all the inhabitants of the simulation, one that can grow without bound and allows them to improve (and control the simulation from inside).
You might not "win", but a version of three players will go on to found a nice large civilization. :) (Take that Omaga.)
(In the background, have it also running a thread computing increasingly large numbers and some way to randomly decide which of some set of numbers to output, to effectively randomize which one of the three original players wins. Of course, that's a small matter compared to the simulated world which, by hypothesis, has unbounded computational power available to it.)
You know, I want to say you're completely and utterly wrong. I want to say that it's safe to at least release The Actual Explanation of Consciousness if and when you should solve such a thing.
But, sadly, I know you're absolutely right re the existence of trolls who would make a point of using that to create suffering. Not just to get a reaction; some would do it specifically to have a world in which they could torment beings.
My model is not that all those trolls are identical. (I've seen some that will explicitly, unambiguously draw the line and recognize that egging on suicidal people is something that One Does Not Do, but I've also seen that all too many gleefully do exactly that.)
I'm sorry. *offers a hug* Not sure what else to say.
For what it's worth, in response to this, I just sent $20 to each of SENS and SIAI.
I was imagining that a potential blackmailer would self-modify/be an Always-Blackmail-bot specifically to make sure there would be no incentive for potential victims to be a "never-give-in-to-blackmail-bot".
But that leads to stupid equilibrium of plenty of blackmailers and no participating victims. Everyone loses.
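With some entirely made-up payoff numbers, that bad equilibrium looks like this:

```python
# Toy payoffs (entirely made up) just to make "everyone loses" concrete.
# Each entry is (blackmailer's payoff, victim's payoff).
payoffs = {
    ("blackmail", "give_in"): (5, -10),   # blackmail succeeds
    ("blackmail", "ignore"):  (-2, -3),   # threat carried out; both pay a cost
    ("refrain",   "give_in"): (0, 0),
    ("refrain",   "ignore"):  (0, 0),
}

# The two "hard" precommitments facing off:
committed_blackmailer = "blackmail"   # Always-Blackmail-bot
committed_victim = "ignore"           # never-give-in-to-blackmail-bot
print(payoffs[(committed_blackmailer, committed_victim)])  # (-2, -3)
```

If both sides follow the precommitment reasoning described above, the realized outcome is (-2, -3): worse for both than if blackmail had never been attempted at all.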
Yes, I agree that no blackmail seems to be the Right Equilibrium, but it's not obvious to me exactly how to get there without the same reasoning that leads to becoming a never-give-in-bot also leading potential blackmailers to becoming always-blackmail-bots.
I find I am somewhat confused on this matter. Well, frankly I suspect I'm just being stupid, that there's some obvious extra step in the reasoning I'm being blind to. It "feels" that way, for lack of better terms.
I was thinking along the lines of the blackmailer using the same reasoning to decide that whether or not the potential victim of blackmail would be a blackmail ignorer or not, the blackmailer would still blackmail regardless.
ie, the Blackmailer, by reasoning similar to the potential Victim's, decides that they should make sure the victim has nothing to gain by choosing to ignore, by precommitting to blackmail regardless of whether or not the victim ignores. ie, in this sense the blackmailer is also taking the "do nothing" stance, in the sense that there's nothing the victim can do to stop them from blackmailing.
This sort of thing would seem to lead to an equilibrium of lots of blackmailers blackmailing victims that will ignore them. Which is, of course, a pathological outcome, and any sane decision theory should reject it. No blackmail seems like the "right" equilibrium, but it's not obvious to me exactly how TDT would get there.
Wouldn't the blackmailer reason along the lines of "If I let my choice of whether to blackmail be predicated on whether or not the victim would take my blackmailing into account, wouldn't that just give them motive to predict that and self-modify to not allow themselves to be influenced by it?" Then, by the corresponding reasoning, the potential blackmail victims might reason "I have nothing to gain by ignoring it."
I'm a bit confused on this matter.
The idea is not "take an arbitrary superhuman AI and then verify it's destined to be well behaved" but rather "develop a mathematical framework that allows you from the ground up to design a specific AI that will remain (provably) well behaved, even though you can't, for arbitrary AIs, determine whether or not they'll be well behaved."
How, precisely, does one formalize the concept of "the bucket of pebbles represents the number of sheep, but it is doing so inaccurately." ie, that it's a model of the number of sheep rather than about something else, but a bad/inaccurate model?
I've fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.
The best I can come up with is something like "I have correct models in my head for the bucket, pebbles, sheep, etc, individually except that I also have some causal paths linking them that don't match the links that exist in reality."
But you can argue for anything. You might refuse to do so but the possibility is always there.
Presumably one would want to define "strong argument" in such a way that strong arguments tend to be more available for true things than for false things.
Koan 4: How well do mathematical truths fit into this rule of defining what sort of things can be meaningful?
"what is true is already so. the statement that "a/an upload of Pinkie Pie will kill you because you are made of the utility function of the Society for Rare Diseases in Cute Puppies that it could use for something else." doesn't make it worse" is obviously false? Have a lot of caring!
hrm...
You make a compelling argument that a/an babyeater is the art of winning at infanticide.
I guess that one works.
2% is way way way WAY too high for something like that. You shouldn't be afraid to assign a probability much closer to 0.
Ouch.
Fair enough. (Well, technically both should move at least a little bit, of course, but I know what you mean.)
It would cause me to update in the direction of believing that more physicists probably see MWI as slam-dunk.
Hee hee. :)
The Touch 3G's free 3G access is limited to stuff like Wikipedia and Amazon. (I have a Touch, and I like it, btw.) More general net access on the Kindle Touch is only via wifi.
But... even given them not being that clever, you'd think they'd know that the ability to arbitrarily slice and dice a bill would be too much. (I know I may be displaying hindsight bias, but... they're politicians! They have to have had experience with, say, people taking their (or colleagues') words out of context and making it sound like something else, or they themselves doing it to an opponent, right?)
ie, the ability to slice and dice some communication into something entirely different would be something you'd think they'd already have personal experience with. At least, that's what I'd imagine. Though, still, Hanlon's Razor and all that.