I think these are sufficient evidence that this is the real Dumbledore, not the mirror showing Quirrell what he wants.
The sense of doom. I thought the magic-can't-interact was mostly just the strongest edge of that--e.g. (maybe "i.e." too) their magic could interact but it would hurt them enough that they don't try.
That could just be a feature of the True Patronus, which is pretty anti-death and especially anti-indifference-to-other-people's-lives.
Your point 2 is another thing I'm getting pretty suspicious of. Quirrell has set up a very long plan, and could easily have faked this effect with wandless magic, or an enchantment he could later dispel, all along.
Certainly, and in the actual situation, I would have done worse than he actually did. But, this kind of armchair analysis is extremely enjoyable, and a good way to improve your in situ skills.
Harry made some serious mistakes in chapter 105.
First, the parseltongue honesty-binding could just be Quirrell's (selective!) wandless magic--I mean, he just forged a note "from yourself" (and why do you even MAKE a self-recognition ("I am a potato") policy if you just forget all about it once you're in a life-stakes intrigue) so you need a lot of extra suspicions going forward. But assuming it's real... there are crucial questions Harry can now profitably ask, with his help conditional on getting immediate Parseltongue answers, along the lines of:
"Why did you set up this elaborate ruse instead of just asking me? Most of what you're saying right now sounds like something I would've probably agreed to if you were open about it, but no, you had to pretend you were dying and kill my friend, so it sure seems like you're planning nefarious things I'd rather not aid even at the cost of my life and the hostages' lives... does my CURRENT utility function actually prefer your planned results to the death of me and the hostages?"
(This isn't the perfect phrasing; for one thing Quirrell doesn't necessarily know Harry's utility function to high accuracy, for another Harry might have disagreed to the "open" proposal at weaker dispreference than "this is worse than my death". But something similar...)
Iff Quirrell is at all "innocent" at this point, he'd want to answer these, and never mind the "my policy is never to reveal that much or people will know I'm guilty later when I actually need to keep mum" stuff; these stakes seem high enough to outweigh any future similar dealings. If he's guilty, then just die like you'd apparently prefer.
[the only edits I made here after getting responses were to correct my spelling of "Quirrell", and this note]
This is similar to choosing strict determinism over compatibilism. Which players are the "best" depends on each of those players' individual efforts during the game. You could extend the idea to the executives too, anyway--which groups of executives acquire better players is largely a function of which have the best executives.
Efforts are only one variable here, and the quote did say "largely a function of". That being said, look at how often teams replay each other during a season with a different winner.
Mentioning a similarity to past successful decisions seems like it qualifies as "constructing a more contextually specific argument than 'you'll understand when you're older'".
While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of "rationality".
But the map is the map...
Done! The length is fine; the questions are interesting and fun to consider.
EDIT: removed concerns about "cryivf" if. "srzhe" nf ynetrfg obar (znff if. yratgu); gur cryivf nccneragyl vfa'g n "fvatyr obar".
I would've entered! I loved the one-shot PD tournament last summer. In the future, please move popular tournament announcements to Main!
This matches my experience extremely well.
(If there is something called "Chelston's Fence" (which my searches did not turn up), apologies.)
Chesterton's Fence isn't about inertia specifically, but about suspecting that other people had reasons for their past actions even though you currently can't see any, and finding out those reasons before countering their actions. In Christianity's case the reasons seem obvious enough (one of the main ones: trust in a line of authority figures going back to antiquity + antiquity's incompetence at understanding the universe) that Chesterton's Fence is not very applicable. Willpower and other putative psychological benefits of Christianity are nowhere in the top 100 reasons Taleb was born Christian.
That sounds even more formal than "person" to me, actually.
Edit: how about "someone who acts"?
It sort of fits an (not very common) idiomatic pattern where the compliment is empty-to-sarcastic, but it seems pretty obvious that you didn't intend it that way, and I can't actually think of any examples I learned the idiom from.
Unless Quirrell isn't interested in the stone primarily here, but in tricking Harry into doing something else trying to get the stone.
Then, there was the thing where I would leave plastic syringe caps and bits of paper from wrappers in patients’ beds. This incurred approximately equal wrath to the med errors–in practice, a lot more, because she would catch me doing it around once a shift. I agreed with her on the possible bad consequences. Patients might get bedsores, and that was bad. But there were other problems I hadn’t solved, and they had worse consequences. I had, correctly I think, decided to focus on those first.
When I do this kind of triaging (the example that comes to mind first is learning competitive fighting games), I often (certainly not always) do end up trying to fix some of my lower-priority common mistakes at the same time, but just not caring about them as much. This often seems to make them easier to fix than if I had prioritized them, which seems related to the main point of your post.
This seems a bit more like an Ayn Rand joke than a Less Wrong joke.
It's all well and good to say you don't maximize utility for one reason or another, but when somebody tells me that they actually maximize "minimum expected utility", my first inclination is to tell them that they've misplaced their "utility" label.
My first inclination when somebody says they don't maximize utility is that they've misplaced their "utility" label... can you give an example of a (reasonable?) agent which really couldn't be (reasonably?) reframed to some sort of utility maximizer?
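As a toy illustration of why the distinction matters at all (a sketch with invented payoff numbers, not anyone's actual position): an agent maximizing minimum utility and one maximizing expected utility under a uniform prior can pick different actions from the same payoff table.

```python
# Toy payoff table: utilities[action] lists the utility in each possible state.
# The numbers are invented purely for illustration.
utilities = {
    "safe":  [5, 5, 5],    # same payoff in every state
    "risky": [0, 9, 9],    # terrible in one state, great in the others
}

def maximin_choice(u):
    # Pick the action whose worst-case utility is highest.
    return max(u, key=lambda a: min(u[a]))

def expected_utility_choice(u):
    # Pick the action with the highest mean utility (uniform prior over states).
    return max(u, key=lambda a: sum(u[a]) / len(u[a]))

print(maximin_choice(utilities))           # safe  (worst case 5 beats 0)
print(expected_utility_choice(utilities))  # risky (mean 6 beats 5)
```

Whether the maximin agent can then be redescribed as maximizing expected utility under some pessimistic prior is exactly the kind of relabeling question at issue.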
i.e., young people are most likely to have the least complicated views on abortion!
The next morning, both stations charge $1.52. The morning after that, $1.53. The morning after that, $1.54, and so on. Later that year, CF reasons as follows: If I keep my current price of $20...
How long are years on Townton's planet? Or is the Schelling price increase path nonlinear?
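For concreteness, the arithmetic behind the quip (a quick sketch; the $1.51 starting price and one-cent-per-day increase are read off the quoted scenario):

```python
# Days for the price to climb from $1.51 to $20.00 at one cent per day,
# using the figures from the quoted passage.
start_cents = 151
target_cents = 2000
cents_per_day = 1

days = (target_cents - start_cents) // cents_per_day
print(days)               # 1849 days
print(round(days / 365, 1))  # about 5.1 Earth years -- hence the question
```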
I'm sure this has been discussed before, but my attempts at searches for those discussions failed, so...
Why is this thread in Main and not Discussion?
"We have what replicated better; noise permanently affects replicative ability"?
Between people like us, this is somewhere between a failure to allow for the looseness of speech and the kind of interesting contradiction we like because it's evidence-rich, and probably closer to the former. To a religious person, this is just a pretty combative trap. There are a large number of such traps you can run on religious people, and they very rarely accomplish anything, because these almost always aren't the kind of people who take logic and rationality seriously enough to change their beliefs due to contradictions, but they normally are the kind of people who fail to Keep Their Identity Small and hence become personally offended when you try to bring up contradictions. Asking the philosophers he's going to see trap questions like this will just annoy them (they'll probably even see the "looseness of speech" explanation for this one), provoke useless stock answers, and waste the potential of the conversations.
There's the obvious "Harry appears to be about to destroy the universe; Voldemort might be trying to stop him" one. But I don't know any real answers to your question.
One time when I had a particularly large amount of biochemistry facts to study for a test the next morning I thought it might help my memory if I kept re-transcribing them, rephrasing them completely each time. I did well on the test, but not above my usual performance (then again, it was over more material than usual). I never tried this again; it was never necessary... but I am kind of curious if it really works.
Then I'd say that while your education and medical care ideas are good, they fail to account for the world-as-it-is. It's easy to imagine a world-as-it-should-be, especially if you assume that people/society are different. It's not enough. You have to design a smooth transition from a world-as-it-is to the world-as-it-should-be.
This seems unfair; it's an April Fool's joke/rant. It wasn't intended to lay out a complete path to fixing the world. (Also, "I had to quixotically try to start Earth down the 200-year road to the de'a'na est shadarak"...)
This post lowers my estimate of your sanity waterline.
Mine too, but not significantly. Everyone's allowed a few mistakes, and I kinda dismissed the specifics of the real estate system as not the main point--the main point is that a world run by people who approached actual rationality, looking closely at what would actually benefit people and actively trying to avoid suboptimal Nash equilibria, would be pretty damn good compared to what we have now.
I decided I should actually read the paper myself, and... as of page 7, it sure looks like I was misrepresenting Aaronson's position, at least. (I had only skimmed a couple Less Wrong threads on his paper.)
Again, I'm not a good choice for an explainer of this stuff, but you could try http://www.scottaaronson.com/blog/?p=1438
Knightian uncertainty is uncertainty where probabilities can't even be applied. I'm not convinced it exists. Some people seem to think free will is rescued by it; that the human mind could be unpredictable even in theory, and this somehow means it's "you" "making choices". This seems like deep confusion to me, and so I'm probably not expressing their position correctly.
Reductionism could be consistent with that, though, if you explained the mind's workings in terms of the simplest Knightian atomic thingies you could.
This doesn't seem related to reductionism to me, except in that most reductionists don't believe in Knightian free will.
While this is also what came to my mind, the next thing that came to my mind was that this is exactly what the kind of communication failure So8res was worried about would look like.
This is a bit off-topic/already guarded against by your disclaimer, but if black takes the pawn after white takes the knight (the single move from that node of the given tree), black will lose the bishop and the rook immediately, and the game very soon.
I used to do this, but the crises made it exceptionally difficult to get the checkerboard pattern right.
I'm having trouble parsing the version with "agree" to anything simultaneously non-tautologous (i.e. when we use a name, we generally agree with our own usage) and reasonable; what reading did you notice?
This is the reason my bot attempted to recognize itself. Attention anyone who plays in a tournament that continues to use this bot pool: put the token LightninRoy in your source!
I still don't see where set! is relevantly different from define... e.g.
(define result (if (zero? (random 100)) 'C ((eval opponent) self)))
Ah, mutation in the "mutable state" sense. When I was doing some light experimenting with static analysis in the early days of the contest, I looked for variables storing anything involving eval (or involving another variable storing anything involving eval, etc.), and just treated (sounds like that should be "trought" or something) set! calls as define calls--another chance for a variable to become contaminated with evalness. Could you give an example of a case where set! makes things harder to analyze?
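The contamination-tracking described there can be sketched roughly like this (a simplified illustration in Python, not the analysis I actually ran; it assumes the bot's source has already been parsed into nested lists with symbols as strings, and it treats set! identically to define, which was the point):

```python
# Rough sketch of the eval-taint analysis described above. Assumptions:
# the Scheme source is pre-parsed into nested Python lists, symbols are
# strings, and both define and set! are treated as binding forms.

def mentions(expr, names):
    """True if expr contains 'eval' or any already-tainted name."""
    if isinstance(expr, str):
        return expr == "eval" or expr in names
    if isinstance(expr, list):
        return any(mentions(e, names) for e in expr)
    return False

def tainted_vars(program):
    """Fixpoint: variables whose bindings (define or set!) involve eval,
    directly or through another tainted variable."""
    tainted = set()
    changed = True
    while changed:
        changed = False
        for form in program:
            if (isinstance(form, list) and len(form) == 3
                    and form[0] in ("define", "set!")
                    and isinstance(form[1], str)):
                if mentions(form[2], tainted) and form[1] not in tainted:
                    tainted.add(form[1])
                    changed = True
    return tainted

program = [
    ["define", "sim", [["eval", "opponent"], "self"]],  # direct eval use
    ["set!", "result", ["if", "coin", "'C", "sim"]],    # indirect, via sim
    ["define", "clean", ["+", "1", "2"]],               # untainted
]
print(tainted_vars(program))  # {'sim', 'result'}
```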
In a pool with a sufficient number of close-to-mimicbots, considering use of language features you didn't anticipate or can't handle to be defections seems like a good way to get too many mutual defections to win.
Also, not sure what "mutation" means in context. If you mean "almost quine yourself, but write in a few differences", N did that; I can't think of any other reasonable meanings which N does. I used namespace-set-variable-value! merely so that eval could see my variables; in most lisps I've used in the past I could have omitted it entirely.
Your summary seems pretty accurate. I don't think there were many programming errors outside of P's meltdown, though. Also, as has been touched upon elsewhere in these comments, some of the failures to maximally exploit simple bots were necessary side effects of the attempts to trick complex bots, not just failures to anticipate there being a significant number of simple bots at all. (Sort of a quantitative instead of qualitative prediction mistake--we just thought there'd be more complex bots than simple bots).
One clue towards the general simplicity of the field is the generally atrocious formatting (e.g. close-parens on their own lines)--not many people seem to have had too much experience with Lisp, let alone Lisp projects as complex as this game can get. That's a shame, but it's not like any non-Lisp languages are remotely suitable for the kind of source analysis we all wanted to see in this game.
Most of these suggestions are okay, but don't move away from Lisp. A very restricted Scheme (e.g. one way to set variables, one looping construct (I really like foldl, as N shows), etc.) would be good; the lexical scoping and general immutability make it one of the best Lisps for our purposes.
Forget sanity--if your opponent fails to play, the most you can get is one point. Hawkbot is awful.
I agree, my fellow top-ranking-non-source-ignoring player. Saying "nobody could do any better than randomness in this tournament" is strictly true but a bit misleading; the tiny, defect-happy pool with almost 20% random players (the top 3 and also G; he just obfuscated his somewhat) didn't provide a very favorable structure for more intelligent bots to intelligently navigate, but there was still certainly some navigation.
I'm pretty pleased with how my bot performed; it never got deterministically CD'd and most of its nonrandom mutual defections were against bots who had some unusual trigger condition for defecting based on source composition, not performance, or had very confused performance triggers (e.g. O--why would you want to play your opponent's anti-defectbot move when you determine they cooperate with cooperatebot?). Some of its mutual defections were certainly due to my detect-size-changes exploit, but so were its many DCs.
I think that kind of deviation was probably a large part of the motivation for those who submitted random-playing bots, since the contest rules specified one round.
I haven't looked too closely at K, nor ran any tests, but I have a slight suspicion it sometimes cooperated in simulation, despite always defecting on its turn.
As for the cooperatebots, there were multiple reasons I didn't write N to exploit them, but not source complexity--they don't even do anything besides cooperate; the middle term is just the function's argument list.
Hmm, a three-way tie for fourth place.
I thought So8res was trying to set up subtle vulnerabilities for the mimicbot clique to exploit himself! My own exploit (store their size and see if it changes; if so defect, on the assumption that most who change their size will only change it once; play small-n mimicbot so you can induce cooperation on their turn and when the exploit doesn't fire at all) seems to have cut the problem a little cleaner, picking up more nonrandom DCs. I had essentially thought of this exploit in the first few days of the contest, but I had thought it would set me up for a few too many CDs (ended up getting one, but it was random) until So8res posted his tutorial, at which point it seemed like there'd be enough paranoid mimicbotters choosing a random rank on their first move (for example) and leaving the rank-choosing machinery out of their later versions for my exploit to be profitable...
...and instead my DCs were on non-quiney bots which ran simple bot tests, like the one in the contest rules' example. I did anticipate I'd pick these up, but I didn't think there'd be so many of them.
Anyway, this was good fun. Thanks for hosting, AlexMennen, and thanks for playing, everyone else.
I don't think this is quite where the analogy was. The brain's information-processing features you describe seem to be analogous to the radio's volume and clarity... it seems Eagleman was trying to compare the radio's content not to the brain's content, but to consciousness or something. At least, that's the best steelmanning attempt I've got.
I'd guess the quotee wouldn't call generic space opera "science fiction" either. I sure wouldn't, myself.
More specifically, in chapter 56:
Kill her and then bring her back, came the next suggestion. Use Frigideiro to cool Bellatrix down to the point where her brain activity stops, then warm her up afterward using Thermos, just like people who fall into very cold water can be successfully revived half-an-hour later without noticeable brain damage. Harry considered this. Bellatrix might not survive in her debilitated state. And it might not stop Death from seeing her. And he'd have trouble carrying a cold unconscious Bellatrix very far. And Harry couldn't remember the research on which exact body temperature was supposed to be nonfatal but temporarily-brain-halting.
He forgot to get his time-turner unlocked, but he remembered to look this up, evidently.
He said he didn't want to use sleep, since the argument is only a lower bound on the amount of time it takes.