Comments

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-08T03:14:04.629Z · LW · GW

Yes, yes there is :). http://boardgamegeek.com/boardgame/37111/battlestar-galactica

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-05T18:45:30.894Z · LW · GW

I don't see how this is a problem. Do you think it is? If so, why specifically, and do you have any ideas for a solution?

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-05T14:30:51.462Z · LW · GW

To be fair, it's really hard to figure out WTF is going on when humans are involved. Their reasoning is the result of multiple motivations and a vast array of potential reasoning errors. If you don't believe me, try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any other game that involves secrets and lying.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-05T14:16:22.661Z · LW · GW

Edited phrasing to make it clearer.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T21:18:47.241Z · LW · GW

Your phrasing also makes it look like a plausible mistake for someone in a new situation with little time to consider things.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-04T21:16:21.765Z · LW · GW

A story for the masses is necessary, and this doesn't appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened; he may have done that already, and this is their plan. How much time did Harry have before needing to show up, anyway (40 minutes? 50?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth beforehand seems like a bad idea.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-04T21:10:36.913Z · LW · GW

Lucius is both dead and warm. I think he's dead dead unless Eliezer has someone like Harry do something within a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T21:06:18.078Z · LW · GW

If you want to make the scenario more realistic, then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hardest part for Voldemort is solving this problem in a short time span while NOT coming up with a solution that foils Harry. The reason experienced soldiers/gamers of unremarkable intelligence still win against highly intelligent combatants with no experience is that TIME matters when you're limited to a single human's processing power. In virtually every combat situation one is forced to make decisions faster than one can search the solution space; only experience compensates for this deficit to any measurable degree. In this situation there are several aspects Voldemort has no experience with. If he must spend his cognitive resources considering these aspects and cannot draw on experience, mistakes become much more likely.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-04T20:51:35.128Z · LW · GW

I'm beginning to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up; however, I wonder if there are enough chapters left to do these things justice. I also wonder whether Eliezer will do anything like what was done for Worm, where the author invited suggestions for epilogues for specific characters.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T14:59:02.849Z · LW · GW

I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has ever argued with someone/something he knows is far smarter than himself. He still believes Harry isn't that smart yet.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T14:43:54.944Z · LW · GW

You should look at Reddit to coordinate your actions with others. One idea I like is organizing the proposal of all reasonable ideas while minimizing duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T05:45:52.148Z · LW · GW

I agree that this is a far "easier task than a standard AI box experiment". I attacked it from a different angle, though (HarryPrime can easily and honestly convince Voldemort he is doomed unless HarryPrime helps him):

http://lesswrong.com/r/discussion/lw/lsp/harry_potter_and_the_methods_of_rationality/c206

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T04:56:47.212Z · LW · GW

Quirrellmort would be disgusted with us if we refused to consider 'cheating', and would certainly kill us for refusing to 'cheat' if cheating was likely to be extremely helpful.

"Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully."

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T02:45:28.509Z · LW · GW

Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is 'good', 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways Harry could have made the result 'good'.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T02:25:49.989Z · LW · GW

Why hasn't Voldemort suspended Harry in the air? He floated himself into the air as a precaution against proximity, line-of-sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in the air, partial transfiguration options would be vastly reduced.

Why hasn't Voldemort rendered Harry effectively blind, deaf, etc.? Harry is gaining far more real-time information than Voldemort's purposes require.

Also, it seems prudent not to scatter Harry all over the place by shooting him, smashing him, etc. without some form of containment. I don't know how some part of Harry could cause problems, but it seems prudent to eliminate every part of him with Fiendfyre (blood, guts, and all) if that is what Voldemort is aiming for.

Can Fawkes be summoned to extract Harry? If it helps, Harry can decide to go to Azkaban.

Harry should be aware that reality is basically doomed to repeat the Atlantis mistake by now (either via AGI or whatever Atlantis unlocked). Given the vow Voldemort made him take, he can honestly say that he is the best bet to avoid that fate; that is, Voldemort now needs Harry (and Hermione) to save reality. This seems like the most straightforward way out of the current annoyance.

Some partial transfiguration options I haven't seen mentioned:

  • Colorless, odorless neurotoxins (Harry should have researched these, as he is in 'serious mode' now that Hermione has died), delivered via the ground directly into each Death Eater and/or into the air in specific areas.
  • Nanobots. I can't recall whether this would work or whether Harry needs to have the design already in his head. It is possible Atlantis tech already employs a vast array of these.
  • Transfiguration may allow one to exploit quantum weirdness. Many things that can happen at very small scales could happen at large scales if everything were lined up just so (which never happens naturally, but transfiguration might make possible).

Comment by Duncan on Meetup : Berkeley: Hypothetical Apostasy · 2013-06-20T15:14:13.274Z · LW · GW

I like this exercise. It is useful in at least two ways.

  1. It helps me take a critical look at my current cherished views. Here's one: work hard now and save for retirement. It is still cherished, but I already know of several lines of attack that might work if I think them through.
  2. It helps me take the time to figure out how I'd hack myself.

It might also be interesting to come up with a cherished group view and try to take that apart (e.g., 'cryonics after death is a good idea'; perhaps start with the possibility that the future is likely to be hostile to you, such as an unfriendly AI).

Comment by Duncan on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-20T14:59:43.479Z · LW · GW

Anecdotal evidence among people I've questioned falls into two main categories. The first is failure to think the problem through formally; many simply focus on the fact that whatever is in the box remains in the box. The second is some variation of failure to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no 'free will / soul'). Many conversations simply devolve into 'Omega can't actually make such an accurate prediction about my choice' or 'I'd normally two-box, so I'm not getting my million anyhow'.
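
The formal step most people skip is a short expected-value calculation. A minimal sketch, using the standard payoffs; the predictor accuracy is a placeholder of mine, not anything from the survey:

```python
# Expected value of each choice in Newcomb's problem, assuming the
# predictor is right with probability p.
def expected_values(p=0.99, box_a=1_000, box_b=1_000_000):
    one_box = p * box_b                # box B is full iff one-boxing was predicted
    two_box = (1 - p) * box_b + box_a  # two-boxers get a full box B only on a misprediction
    return one_box, two_box

print(expected_values())  # (990000.0, 11000.0)
# One-boxing wins whenever p > (box_b + box_a) / (2 * box_b) ~= 0.5005,
# i.e. for any predictor noticeably better than a coin flip.
```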

Comment by Duncan on How to Write Deep Characters · 2013-06-16T19:35:53.680Z · LW · GW

Game of Thrones and the new Battlestar Galactica appear to me to have characters that are shallow and/or conflicted by evil versus evil. Yet they are very popular and, as far as I can tell, character-driven. I've been wondering what that means. One thought I had is that many people are interested in relationship conflicts, so the characters don't need to be deep; they just need to reflect, across the main cast, the personalities of the audience (as messed up as the audience might be).

Comment by Duncan on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T18:57:11.911Z · LW · GW

It is not so much that they haven't given an argument or stated their position; it is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation, this person has decided to stop discussing the matter and gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible, and assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to 'Are you crazy?! You should __.' and skip all the hard thinking.

Comment by Duncan on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T18:46:33.910Z · LW · GW

Given the 'Sorry if it offends you' and the 'Like... no', I think your translation is in error. When a person says either of those things they are (a) saying they no longer care about keeping the discussion civil/cordial and (b) declaring they are firmly behind their position. What you have written is much more civil and, unlike their '... you should ...', makes no demands on the other party.

That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.

Comment by Duncan on Rationality when it's somewhat hard · 2013-02-06T14:31:33.788Z · LW · GW

Do you have any suggestions on how to limit this? I find meetings often meander from someone's pet issue to trivial / irrelevant details while the important broader topic withers and dies despite the meeting running 2-3x longer than planned.

In meetings where I have some control, I try to keep people on topic, but it's quite hard. In meetings where I'm the 'worker bee' it's often hopeless (don't want to rub the boss the wrong way).

Comment by Duncan on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T14:23:59.635Z · LW · GW

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.

Comment by Duncan on Thoughts on the January CFAR workshop · 2013-01-31T18:15:05.735Z · LW · GW

I agree that they should uphold strict standards, for numerous reasons. That doesn't prevent CFAR from discussing the potential benefits (and side effects) of different drugs (caffeine, aspirin, modafinil, etc.). They could also recommend that people discuss such things with their doctor, along with what criteria are used to prescribe such drugs (they might already, for all I know).

Comment by Duncan on Thoughts on the January CFAR workshop · 2013-01-31T17:08:28.070Z · LW · GW

Ah, I thought it was an over-the-counter drug.

Comment by Duncan on Thoughts on the January CFAR workshop · 2013-01-31T15:07:45.701Z · LW · GW

I'm curious why caffeine wasn't sufficient, and also why modafinil would offend people.

What about trying bright lighting? http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/

Comment by Duncan on Thoughts on the January CFAR workshop · 2013-01-31T15:00:44.012Z · LW · GW

I'm glad to hear it is working well and is well received!

Once there has been some experience running these workshops, I really hope there is something CFAR can design for meetup groups to try or implement, and/or an online version.

Is there a CFAR webpage that covers this particular workshop and how it went?

Comment by Duncan on Isolated AI with no chat whatsoever · 2013-01-28T21:48:19.795Z · LW · GW

It is useful to consider because if an AI isn't safe when contained to the best of our ability, then no method reliant on AI containment is safe (including chat boxing and all the other possibilities).

Comment by Duncan on [LINK] NYT Article about Existential Risk from AI · 2013-01-28T21:26:35.723Z · LW · GW

My draft attempt at a comment. Please suggest edits before I submit it:

The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, etc. No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation).
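
Whether we are already past the brain is sensitive to which compute estimate you use; a back-of-envelope comparison, where every number below is an assumption of mine rather than something from the draft:

```python
# Brain-compute estimates in the literature span several orders of
# magnitude; the supercomputer figure is a roughly 2012-era TOP500 leader.
brain_ops_low, brain_ops_high = 1e13, 1e18  # operations/second estimates
supercomputer_flops = 1.8e16                # ~Titan-class machine

print(supercomputer_flops / brain_ops_low)   # ~1800x the low brain estimate
print(supercomputer_flops / brain_ops_high)  # ~0.02x the high brain estimate
```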

Here are two websites that go into much greater detail about the problem:

AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/

Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/

Comment by Duncan on Cryonics priors · 2013-01-25T06:52:09.811Z · LW · GW

"1. Life is better than death. For any given finite lifespan, I'd prefer a longer one, at least within the bounds of numbers I can reasonably contemplate."

Have you included estimates of the possible negative utilities? One thing we can count on is that if you are revived, you will certainly be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?
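
To make the question concrete, the worry can be folded into a simple expected-utility expression. A sketch with placeholder numbers (none of the probabilities or utilities below are estimates I would defend):

```python
# Conditional on revival, assume the reviver is either friendly or hostile.
def cryonics_expected_utility(p_revival, p_friendly, u_friendly, u_hostile):
    return p_revival * (p_friendly * u_friendly + (1 - p_friendly) * u_hostile)

# If hostile outcomes are bad enough, even a small chance of them
# can push the expected utility negative.
print(cryonics_expected_utility(0.05, 0.90, 1_000, -10_000))  # about -5
```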

Comment by Duncan on Meetup : Berkeley: CFAR focus group · 2013-01-24T15:52:48.697Z · LW · GW

I think CFAR is a great idea with tons of potential, so I'm curious whether there are any updates on how the meetup went and what sorts of things were suggested.

Comment by Duncan on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-24T04:09:36.366Z · LW · GW

I'm confused as to what the point of the gatekeeper is. Let us assume (for the sake of argument) that everything is 'safe' except the gatekeeper, who may be tricked/convinced/etc. into letting the AI out.

  1. If the point of the gatekeeper is to keep the AI in the box, then why has the gatekeeper been given the power to let the AI out? It would be trivial to include 'AI DESTROYED' functionality as part of the box.
  2. If the gatekeeper has been given the power to let the AI out, then isn't the FUNCTION of the gatekeeper to decide whether to let the AI out or not?
  3. Is the point simply to have a text conversation with the AI? If so, why bother stipulating that the gatekeeper can let the AI out? If humans can be subverted by text, there is no need for a built-in gate, it seems to me.

Comment by Duncan on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T19:26:06.636Z · LW · GW

With the understanding that I only have a few minutes to check for research data:

http://www.ncbi.nlm.nih.gov/pubmed/1801013

http://www.ncbi.nlm.nih.gov/pubmed/21298068 - "cognitive response ... to light at levels as low as 40 lux, is blue-shifted"

Comment by Duncan on Partial Transcript of the Hanson-Yudkowsky June 2011 Debate · 2012-04-20T04:10:53.989Z · LW · GW

In the context of "what is the minimal amount of information it takes to build a human brain," I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code, where it is very hard to tell what individual bits do and what long-range effects a change may have.

Do we know how much of the human genome can definitely be replaced with random code without problem?

In addition, do we know how much information is contained in the structure of a cell? You can't just put the DNA of our genome in water and expect to get a brain. Our DNA resides in an enormously complex sea of nano machines and structures. You need some combination of both to get a brain.

Honestly, I think the important takeaway is that there are probably a number of deep or high-level insights we still need to figure out. Whether it's 75 MB, 750 MB, or a petabyte doesn't really matter if most of that information just describes machine parts or functions (e.g., a screw, a bolt, a wheel). Simple components often take up a lot of information. Frankly, I think 1 MB containing 1,000 deep insights at maximum compression would be far more difficult to comprehend than a petabyte containing loads of part descriptions and only 10 deep insights.

Comment by Duncan on Partial Transcript of the Hanson-Yudkowsky June 2011 Debate · 2012-04-19T16:19:50.324Z · LW · GW

"If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk."

Unless this is a standard definition for describing DNA, I do not agree that such DNA is 'junk'. If the DNA serves a purpose, it is not junk. There was a time when it was believed (as many still do) that the nucleus is mostly a disorganized package of DNA and associated 'stuff'. However, it is becoming increasingly clear that it is highly structured and that this structure is critical for proper cell regulation, including epigenetics.

If it can be shown that outright removal of most of our DNA does not have adverse effects, I would agree with the junk description. However, I am not aware that this has been shown in humans (or at least in human cell lines).

Comment by Duncan on Partial Transcript of the Hanson-Yudkowsky June 2011 Debate · 2012-04-19T14:18:52.171Z · LW · GW

"If you actually look at the genome, we've got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it."

This is false. Just because we do not know what role a lot of DNA performs does not mean it is 'almost certainly junk'. Far more DNA is critical than just the 30,000 gene-coding regions: there are also genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even where DNA isn't currently 'in use', it may be critical to the ongoing stability of our genome over multiple generations or have other unknown functions.
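
(As an aside, the quoted 750 MB figure is just the genome's raw, uncompressed information content; a quick sketch of the arithmetic:)

```python
# ~3 billion base pairs, each one of four bases (A/C/G/T) = 2 bits,
# before any compression.
base_pairs = 3e9
megabytes = base_pairs * 2 / 8 / 1e6  # bits -> bytes -> MB
print(megabytes)  # 750.0
```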

Comment by Duncan on [link]Mass replication of Psychology articles planed. · 2012-04-19T13:58:52.671Z · LW · GW

I would bet this is totally impractical for most studies. In the medical sciences the cost is prohibitive, and for many other studies you need permission to experiment on organisms (especially hard when humans or human tissues are involved). Perhaps it would be easier for some of the soft sciences, but even psychology studies often work with human subjects, and that requires non-trivial approval.

Comment by Duncan on [link]Mass replication of Psychology articles planed. · 2012-04-18T20:50:13.388Z · LW · GW

I look forward to the results of this study. Quite frankly, most soft-science fields could use this sort of scrutiny. I'd also love to see how reproducible studies done by medical doctors (as opposed to research scientists) are. Even the hard sciences have a lot of publications with problems; however, those erroneous results, especially if they are important to current topics of interest, are discovered relatively quickly, since other labs often need to reproduce them before moving forward.

I would add one caution. Failure to replicate an article's results does not necessarily mean the results are wrong. It could simply mean the group trying to reproduce the results had any number of other problems.

Comment by Duncan on How does long-term use of caffeine affect productivity? · 2012-04-16T17:05:23.893Z · LW · GW

Long-term caffeine tolerance can be problematic. To combat this problem, every 2-4 months I stop taking caffeine for about two weeks (carefully planned for less hectic weeks). In my experience, and that of at least one colleague, this method significantly lowers and possibly removes the tolerance. Two people do not make a study, but if you need to combat caffeine tolerance it may be worth a try.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 12 · 2012-03-28T04:56:14.772Z · LW · GW

How do you propose organizing a 'master list' of solutions, relevant plot pieces, etc., given the current forum format? Some people have made lists, but they are often quickly buried beneath other comments. I'm also not familiar enough with how things work here to know whether a post can be edited days after it has been posted. One obvious solution is for an HPMOR reader who likes making webpages to put up a wiki page for this. Can this be done on LessWrong.com?

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 12 · 2012-03-28T04:31:36.934Z · LW · GW

Eliezer Yudkowsky's Author Notes, Chp. 81:

"This makes me worry that the actual chapter might've come as an anticlimax, especially with so many creative suggestions that didn't get used. I shall poll the Less Wrong discussants and see how they felt before I decide whether to do this again. This was actually intended as a dry run for a later, serious 'Solve this or the story ends sadly' puzzle - like I used in Part 5 of my earlier story Three Worlds Collide - but I'll have to take the current outcome into account when deciding whether to go through with that."

Let me argue that this chapter was in no way an anticlimax:

  • We had no way to know which solution Harry might have come up with or picked (I still like the hat trick, even though I figured it was not likely the solution of choice).
  • Neither Harry nor Eliezer is omniscient.
  • Harry was under a lot of time pressure and had less information to work with than the readers.
  • There is a lot of 'motivation' to keep the story interesting, which limits the available solution space (i.e., any solution that results in a terrible story is not really an option).
  • A lot of entertainment is derived from 'watching' HOW things play out. History, movies, fiction, etc. are interesting to me for this reason, not because the main character did everything flawlessly.
  • If Eliezer does want to do a puzzle plot piece, I strongly recommend accounting for the fact that many of us read this story to relax and do NOT exercise our full investigative powers on story problems, as doing that properly involves a lot of (for me at least) boring, mundane work.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T13:49:44.314Z · LW · GW

If that is the case, then the hat didn't actually say it couldn't tell whether Harry had any false memories. It said it couldn't detect deleted memories, and it seems to imply that a 'sophisticated analysis' of all of his memories for 'inconsistencies' would be required to do so. The false memory given to Hermione is at the forefront of her mind and doesn't require the hat to scan her memories (though Hermione could presumably replay memories of the event for the hat). In addition, the false memory is entirely out of character for Hermione, which is something the hat, at a minimum, should be able to verify. Considering the quote specifically addresses foreign memories, it seems entirely possible the hat would immediately detect the false memory for what it is.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T04:49:47.269Z · LW · GW

The hat says specifically: "I can go ahead and tell you that there is definitely nothing like a ghost - mind, intelligence, memory, personality, or feelings - in your scar. Otherwise it would be participating in this conversation, being under my brim." It says memory specifically. Both a false memory and a 'scar memory' could at this point be treated as 'foreign' to Hermione.

Are you referring to this slightly earlier quote: "Anyway, I have no idea whether or not you've been Obliviated. I'm looking at your thoughts as they form, not reading out your whole memory and analyzing it for inconsistencies in a fraction of second. I'm a hat, not a god."? Here the hat says it cannot detect deleted memories or do the sophisticated analysis required to discover telltale 'inconsistencies'.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T04:35:17.948Z · LW · GW

If the sorting hat has enough access to one's mind to sort children into their appropriate house, then it seems entirely possible it has enough access to identify a false memory. The sorting hat is an extremely powerful artifact, which implies that the false memory would have to be the work of a significantly greater power for us to conclude, at this point, that it can remain hidden from the hat.

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-14T04:38:58.354Z · LW · GW

I'd like to "Hold Off on Proposing Solutions" or in this case hold off on advocating answers. I don't have time to list all the important bits of data we should be considering or enumerate all the current hypotheses, but I think both would be quite valuable.

Some quick hypotheses:

-Mr. Hat & Cloak is Quirrellmort & responsible for Hermione's 'condition'

-Mr. Hat & Cloak is Lucius & responsible for Hermione's 'condition'

-Mr. Hat & Cloak is Voldemort, but not the Quirrell body.

-Mr. Hat & Cloak is Quirrellmort and trying to take out Hermione as Harry's good side anchor.

-Mr. Hat & Cloak is NOT Quirrellmort.

-Mr. Hat & Cloak is Grindelwald

-Author has made massive continuity alterations, many of which are unclear; it follows that relying on continuity is difficult at best. This most severely impacts character personalities and motivations.

Some quick puzzle bits:

-Hermione thinks Draco and Snape are doing / plotting something evil and hates Draco.

-Hermione was subject to a "Groundhog Day Attack" (I'm not even sure what this means in the context of this story).

-Hermione's tone of voice was all satisfaction just before being arrested for allegedly trying to kill Draco.

-Draco has no idea what's going on.

-Hermione defeated Draco's spell which was supposed to be extremely hard to do.

-Black Hat and Cloak did something to Hermione's mind / memory.

-Prof. Q. has some major plotting going on.

-Prof. Q.'s goals are not very clear, but his speech in the dining hall may have shed some light on them and is consistent with his conversations with Harry.

-Author went out of his way to 'rationalize' Harry not going to talk to Hermione. It's not entirely clear that's consistent with Harry or Hermione.

Okay, that's all I've got time for...

Comment by Duncan on Outreach to probably compatible groups? · 2011-07-12T14:28:10.018Z · LW · GW

One of the primary problems with rationalists, humanists, atheists, skeptics, etc. is that there is no higher-level organization, and thus we tend to accomplish very little compared to most other organizations. I fully support efforts to fix this problem.

Comment by Duncan on The Friendly AI Game · 2011-07-10T04:21:47.954Z · LW · GW

If I understand this correctly, your 'AI' is biased to do random things, but NOT as a function of its utility function. If that is correct, then your 'AI' simply does random things (according to its non-utility bias), since its utility function has no influence on its actions.

Comment by Duncan on The Blue-Minimizing Robot · 2011-07-10T03:16:52.760Z · LW · GW

I consider all of the behaviors you describe to be basically transform functions. In fact, I consider any decision maker a type of transform function: input data is run through a transform (a behavior-executor, a utility-maximizer, a weighted goal system, a human mind, etc.) and output data is generated (in the case of humans, sent to our muscles, organs, etc.). The reason I mention this is that trying to describe a human's transform function (i.e., what people normally call their mind) as mostly a behavior-executor or just a utility-maximizer leads to problems. A human's transform function is enormously complex and includes both behavior-execution aspects and utility-maximization aspects. I also find that attempts to describe a human's transform function as 'basically a __' result in a subsequent failure to look at the actual transform function when trying to figure out how people will behave.
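
A toy sketch of the framing, with both aspects living inside one transform (every name and number here is invented for illustration):

```python
# A decision maker as a transform: observations in, actions out,
# mixing a behavior-execution aspect with a utility-maximization aspect.
def decide(observation, reflexes, utility, actions):
    # Behavior-execution aspect: hardwired responses fire first.
    if observation in reflexes:
        return reflexes[observation]
    # Utility-maximization aspect: otherwise pick the highest-utility action.
    return max(actions, key=lambda a: utility(observation, a))

reflexes = {"sudden loud noise": "startle"}
utility = lambda obs, a: {"eat": 2, "sleep": 1}.get(a, 0)
print(decide("sudden loud noise", reflexes, utility, ["eat", "sleep"]))  # startle
print(decide("hungry", reflexes, utility, ["eat", "sleep"]))             # eat
```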

Comment by Duncan on Harry Potter and the Methods of Rationality discussion thread, part 7 · 2011-05-31T00:47:59.034Z · LW · GW

I am having trouble scanning the HPMoR thread for topics I'm interested in, due to both its length and the lack of hierarchical organization by topic. I would appreciate any help with this problem, since I do not want to make comments that simply duplicate previous comments I failed to notice. With that in mind, is there a discussion forum or some method of scanning the HPMoR discussion thread that doesn't involve a lot of effort? I have not found sorting comments by points to be useful in this respect.

Edit: I'm new and this is my first comment. I've read a lot of the sequences, but I don't know my way around yet. It's quite possible I'm missing a lot about how things work here.