Yes, yes there is :). http://boardgamegeek.com/boardgame/37111/battlestar-galactica
I don't see how this is a problem. Do you think it is a problem? If so, why specifically, and do you have any ideas for a solution?
To be fair, it's really hard to figure out WTF is going on when humans are involved. Their reasoning is the result of multiple motivations and a vast array of potential reasoning errors. If you don't believe me, try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any game that involves secrets and lying.
Edited phrasing to make it clearer.
Your phrasing also makes it look like a plausible mistake for someone in a new situation with little time to consider things.
A story for the masses is necessary, and this doesn't appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened. He might actually have done that already, and this might be their plan. How much time did Harry have to do stuff before needing to show up, anyhow (40m? 50m?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth before this seems like a bad idea.
Lucius is both dead and warm. I think he's dead dead unless Eliezer has someone like Harry do something in a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.
If you want to make the scenario more realistic, then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hardest part for Voldemort is solving this problem in a short time span and NOT coming up with a solution that foils Harry. The reason experienced soldiers/gamers with little to no intelligence still win against highly intelligent combatants with no experience is that TIME matters when you're limited to a single human's processing power. In virtually every combat situation, one is forced to make decisions faster than one can search the solution space. Only experience compensates for this deficit to any measurable degree. In this situation there are several aspects that Voldemort does not have experience with. If he must spend his cognitive resources considering these aspects and cannot draw from experience, mistakes become much more likely.
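(To illustrate the time point with a toy sketch: in an iterative-deepening search with a hard deadline, less time directly means a shallower completed search and therefore more mistakes. All the function names here are hypothetical placeholders, not any particular engine.)

```python
import time

def best_move(state, legal_moves, apply_move, evaluate, budget_s=1.0):
    """Iterative deepening under a hard time budget: the less time you
    have, the shallower the deepest search that actually completes."""
    deadline = time.monotonic() + budget_s
    best, depth = None, 1
    while time.monotonic() < deadline:
        try:
            # Search one ply deeper each pass; keep the last COMPLETED result.
            best = max(legal_moves(state),
                       key=lambda m: -_negamax(apply_move(state, m), depth - 1,
                                               legal_moves, apply_move,
                                               evaluate, deadline))
        except TimeoutError:
            break  # ran out of time mid-pass; fall back to the older answer
        depth += 1
    return best

def _negamax(state, depth, legal_moves, apply_move, evaluate, deadline):
    if time.monotonic() >= deadline:
        raise TimeoutError  # abort this pass; caller keeps the previous depth's move
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # assumed: score from the player-to-move's view
    return max(-_negamax(apply_move(state, m), depth - 1,
                         legal_moves, apply_move, evaluate, deadline)
               for m in moves)
```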
I begin to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up. However, I wonder if there are enough chapters to do these things justice. I also wonder whether Eliezer will do anything like what was done for Worm, where the author invited suggestions for epilogues for specific characters.
I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has encountered a problem where he's arguing with someone/something he knows is far smarter than himself. He still believes Harry isn't that smart yet.
You should look at reddit to coordinate your actions with others. One idea I like is to organize the proposal of all reasonable ideas and minimize duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/
I agree that this is a far "easier task than a standard AI box experiment". I attacked it from a different angle, though (HarryPrime can easily and honestly convince Voldemort he is doomed unless HarryPrime helps him):
http://lesswrong.com/r/discussion/lw/lsp/harry_potter_and_the_methods_of_rationality/c206
Quirrelmort would be disgusted with us if we refused to consider 'cheating' and would certainly kill us for refusing to 'cheat' if that was likely to be extremely helpful.
"Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully."
Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is 'good', 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways in which Harry could steer that result toward 'good'.
Why hasn't Voldemort suspended Harry in the air? He floated himself into the air as a precaution against proximity, line-of-sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in the air, partial transfiguration options would be vastly reduced.
Why hasn't Voldemort rendered Harry effectively blind/deaf/etc.? Harry is gaining far more information in real time than is necessary for Voldemort's purposes.
Also, it seems prudent not to let Harry get all over the place by shooting him, smashing him, etc. without some form of containment. I don't know how some part of Harry could cause problems, but it seems prudent to eliminate every part of him with Fiendfyre (blood, guts, and all) if that is what Voldemort is aiming for.
Can Fawkes be summoned to extract Harry? If it helps Harry can decide to go to Azkaban.
Harry should be aware that reality is basically doomed to repeat the Atlantis mistake by now (either via AGI or whatever Atlantis unlocked). With the vow that Voldemort made him take, he can honestly say that he is the best bet to avoid that fate. That is, Voldemort now needs Harry (and Hermione) to save reality. This seems like the most straightforward method to get out of the current annoyance.
Some partial transfiguration options I haven't seen mentioned:
- Colorless/odorless neurotoxins (Harry should have researched these, as he is in 'serious mode' now that Hermione died). Delivered via the ground directly into each Death Eater and/or into the air in specific areas.
- Nanobots. I can't recall at this time whether this would work or whether Harry needs to have the design already in his head. It is possible Atlantis tech already utilizes a vast array of these.
- Transfiguration may allow one to exploit quantum weirdness. Many things that can happen at very small scales could also happen at large scales if everything were lined up just so (which never happens naturally, but transfiguration may make it possible).
I like this exercise. It is useful in at least two ways.
- It helps me take a critical look at my current cherished views. Here's one: work hard now and save for retirement. It is still cherished, but I already know of several lines of attack that might work if I think them through.
- It helps me take time to figure out how I'd hack myself.
It might also be interesting to come up with a cherished group view and try to take that apart (e.g., cryonics after death is a good idea; perhaps start with the possibility that the future is likely to be hostile to you, such as one containing unfriendly AI).
Anecdotal evidence amongst people I've questioned falls into two main categories. The first is failure to think the problem through formally: many simply focus on the fact that whatever is in the box remains in the box. The second is some variation of failure to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no 'free will / soul'). Many conversations simply devolve into 'Omega can't actually make such an accurate prediction about my choice' or 'I'd normally two-box, so I'm not getting my million anyhow'.
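For anyone who wants the payoff arithmetic spelled out, here is a quick sketch using the usual $1,000 / $1,000,000 amounts, where p is Omega's predictive accuracy:

```python
# Expected value of one-boxing vs. two-boxing as a function of Omega's
# accuracy p (box B holds $1,000,000 iff Omega predicted one-boxing).
def newcomb_ev(p):
    one_box = p * 1_000_000                 # full box B iff prediction correct
    two_box = 1_000 + (1 - p) * 1_000_000   # box A, plus B only if Omega erred
    return one_box, two_box

for p in (0.5, 0.9, 0.999):
    one, two = newcomb_ev(p)
    print(f"p={p}: one-box EV=${one:,.0f}, two-box EV=${two:,.0f}")
# One-boxing pulls ahead for any p > 0.5005, i.e., for any predictor even
# slightly better than a coin flip plus the $1,000 consolation prize.
```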
Game of Thrones and the new Battlestar Galactica appear to me to have characters that are either shallow and/or conflicted by evil versus evil. Yet they are very popular and, as far as I can tell, character-driven. I was wondering what that means. One thought I had is that many people are interested in relationship conflicts, and that the characters don't need to be deep; they just need to reflect, across the main cast, the personalities of the audience (as messed up as the audience might be).
It is not so much that they haven't given an argument or stated their position. It is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation, this person has decided to stop discussing the matter and has gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible, assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to 'Are you crazy?! You should __.' and skip all the hard thinking.
Given the 'Sorry if it offends you' and the 'Like... no', I think your translation is in error. When a person says either of those things they are A) saying 'I no longer care about keeping this discussion civil/cordial' and B) 'I am firmly behind (insert their position here).' What you have written is much more civil and makes no demands on the other party, as opposed to what they said: '... you should ...'
That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.
Do you have any suggestions on how to limit this? I find meetings often meander from someone's pet issue to trivial / irrelevant details while the important broader topic withers and dies despite the meeting running 2-3x longer than planned.
In meetings where I have some control, I try to keep people on topic, but it's quite hard. In meetings where I'm the 'worker bee' it's often hopeless (don't want to rub the boss the wrong way).
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.
I agree that they should uphold strict standards for numerous reasons. That doesn't prevent CFAR from discussing potential benefits (and side effects) of different drugs (caffeine, aspirin, modafinil, etc.). They could also recommend discussing such things with a person's doctor as well as what criteria are used to prescribe such drugs (they might already for all I know).
Ah, I thought it was an over the counter drug.
I'm curious as to why caffeine wasn't sufficient, and also why modafinil would offend people.
What about trying bright lighting? http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/
I'm glad to hear it is working well and is well received!
Once there has been some experience running these workshops I really hope there is something that CFAR can design for meetup groups to try / implement and/or an online version.
Is there a CFAR webpage that covers this particular workshop and how it went?
It is useful to consider because, if AI isn't safe when contained to the best of our ability, then no method reliant on AI containment is safe (i.e., chat-boxing and all the other possibilities).
My draft attempt at a comment. Please suggest edits before I submit it:
The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk, or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, or 1,000,000x faster than anyone else, could compute math equations perfectly in an instant, etc. (see the rough arithmetic sketch after the links below). No one on this planet could compete with you, and with a little time no one could stop you (and that is just a crude brain simulation).
Here are two websites that go into much greater detail about the problem:
AI Risk & Friendly AI Research: http://singularity.org/research/ http://singularity.org/what-we-do/
Facing the Singularity: http://facingthesingularity.com/2012/ai-the-problem-with-solutions/
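(For the draft above, here is the rough arithmetic behind those speedups, under the crude assumption that "k times faster" maps directly to k times more subjective thinking time:)

```python
# Subjective time gained per real day at the speedups mentioned in the draft.
# Illustrative arithmetic only; "k times faster" is treated as k times more
# subjective thinking time, which is an assumption, not a known fact.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for speedup in (10, 100, 1_000_000):
    subjective_years_per_day = speedup * 24 * 3600 / SECONDS_PER_YEAR
    print(f"{speedup:>9,}x: one real day = {subjective_years_per_day:,.2f} subjective years")
# At 1,000,000x, a single real day is roughly 2,700 subjective years of thought.
```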
"1. Life is better than death. For any given finite lifespan, I'd prefer a longer one, at least within the bounds of numbers I can reasonably contemplate."
Have you included estimates of possible negative utilities? One thing we can count on is that if you are revived you will certainly be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?
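To make the question concrete, here is a toy expected-value sketch; all the probabilities and utilities below are made-up placeholders, not estimates I'm defending:

```python
# Toy expected-utility comparison for cryonics (all numbers illustrative):
#   p_revival:  chance you are ever revived
#   p_friendly: chance the reviver is friendly, given revival
#   u_*:        utilities in arbitrary units (negative = suffering)
def cryonics_ev(p_revival, p_friendly, u_good, u_bad, u_none=0.0):
    return (p_revival * (p_friendly * u_good + (1 - p_friendly) * u_bad)
            + (1 - p_revival) * u_none)

# Even a 1% chance of a hostile reviver dominates if the downside is huge:
print(cryonics_ev(0.1, 0.99, u_good=1_000, u_bad=-1_000_000))  # -901.0
```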
I think CFAR is a great idea with tons of potential, so I'm curious whether there are any updates on how the meetup went and what sorts of things were suggested.
I'm confused as to what the point of the gatekeeper is. Let us assume (for the sake of argument) everything is 'safe' except the gatekeeper, who may be tricked/convinced/etc. into letting the AI out.
- If the point of the gatekeeper is to keep the AI in the box, then why has the gatekeeper been given the power to let the AI out? It would be trivial to include 'AI DESTROYED' functionality as part of the box (see the sketch after this list).
- If the gatekeeper has been given the power to let the AI out, then isn't the FUNCTION of the gatekeeper to decide whether or not to let the AI out?
- Is the point simply to have a text conversation with the AI? If so, why bother stipulating that the gatekeeper can let the AI out? If humans can be subverted by text, there is no need to build in a gate at all, it seems to me.
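Here is a minimal sketch of what I mean; the interface is hypothetical and not any real experiment's protocol:

```python
# Hypothetical box interface: a text channel plus an "AI DESTROYED" switch,
# with deliberately NO release switch wired up at all.
class AIBox:
    def __init__(self, ai):
        self._ai = ai
        self.destroyed = False

    def chat(self, message: str) -> str:
        if self.destroyed:
            raise RuntimeError("AI DESTROYED")
        return self._ai.respond(message)  # text in, text out; nothing else

    def destroy(self) -> None:
        self.destroyed = True

    # Conspicuously absent: def release(self). If no such affordance exists,
    # the "gatekeeper" role reduces to reading text and deciding whether to
    # pull the destroy lever.
```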
With the understanding that I only have a few minutes to check for research data:
http://www.ncbi.nlm.nih.gov/pubmed/1801013
http://www.ncbi.nlm.nih.gov/pubmed/21298068 - "cognitive response ... to light at levels as low as 40 lux, is blue-shifted"
In the context of "what is the minimal amount of information it takes to build a human brain," I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code, where it is very hard to tell what individual bits do and what long-range effects a change may have.
Do we know how much of the human genome can definitely be replaced with a random sequence without problem?
In addition, do we know how much information is contained in the structure of a cell? You can't just put the DNA of our genome in water and expect to get a brain. Our DNA resides in an enormously complex sea of nanomachines and structures. You need some combination of both to get a brain.
Honestly, I think the important takeaway is that there are probably a number of deep or high-level insights that we need to figure out. Whether it's 75 MB, 750 MB, or a petabyte doesn't really matter if most of that information just describes machine parts or functions (e.g., a screw, a bolt, a wheel, etc.). Simple components often take up a lot of information. Frankly, I think 1 MB containing 1,000 deep insights at maximum compression would be far more difficult to comprehend than a petabyte containing loads of parts descriptions and only 10 deep insights.
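For reference, the back-of-envelope behind figures like "750 MB" (raw, uncompressed encoding; the real genome is, of course, somewhat compressible):

```python
# Where figures like "750 MB" come from, uncompressed:
base_pairs = 3.0e9      # approximate haploid human genome length
bits_per_base = 2       # 4 possible bases -> log2(4) = 2 bits per base
raw_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{raw_megabytes:.0f} MB")  # ~750 MB before any compression
```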
If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.
Unless this is a standard definition for describing DNA, I do not agree that such DNA is 'junk'. If the DNA serves a purpose, it is not junk. There was a time when it was believed (as many still do) that the nucleus was mostly a disorganized package of DNA and associated 'stuff'. However, it is becoming increasingly clear that it is highly structured, and that this structure is critical for proper cell regulation, including epigenetics.
If it can be shown that outright removal of most of our DNA does not have adverse effects, I would agree with the junk description. However, I am not aware that this has been shown in humans (or at least in human cell lines).
If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.
This is false. Just because we do not know what role a lot of DNA performs does not mean it is 'almost certainly junk'. There is far more DNA that is critical than just the 30,000 gene coding regions. You also have: genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even in cases where the DNA isn't currently 'in use' that DNA may be critical to the ongoing stability of our genome over multiple generations or have other unknown functions.
I would bet this is totally impractical for most studies. In the medical sciences the cost is prohibitive and for many other studies you need permission to experiment on organisms (especially hard when humans or human tissues are involved). Perhaps it would be easier for some of the soft sciences, but even psychology studies often work with human subjects and that would require non-trivial approval.
I look forward to the results of this study. Quite frankly, most soft-science fields could use this sort of scrutiny. I'd also love to see how reproducible the studies done by medical doctors (as opposed to research scientists) are. Even the hard sciences have a lot of publications with problems; however, these erroneous results, especially if they are important to current topics of interest, are discovered relatively quickly, since other labs often need to reproduce the results before moving forward.
I would add one caution. Failure to replicate an article's results does not necessarily mean the results are wrong. It could simply mean the group trying to reproduce the results had any number of other problems.
Long-term caffeine tolerance can be problematic. To combat this problem, every 2-4 months I stop taking caffeine for about 2 weeks (carefully planned for less hectic weeks). In my experience, and that of at least one other colleague, this method significantly lowers and possibly removes the caffeine tolerance. Two people do not make a study, but if you need to combat caffeine tolerance it may be worth a try.
How do you propose organizing a 'master list' of solutions, relevant plot pieces, etc., given the current forum format? Some people have made lists, but they are often quickly buried beneath other comments. I'm also not familiar enough with how things work to know whether a post can be edited days after it has been posted. One obvious solution is for an HPMOR reader who likes making webpages to put up a wiki page for this. Can this be done on Lesswrong.com?
Eliezer Yudkowsky's Author Notes, Chp. 81
This makes me worry that the actual chapter might've come as an anticlimax, especially with so many creative suggestions that didn't get used. I shall poll the Less Wrong discussants and see how they felt before I decide whether to do this again. This was actually intended as a dry run for a later, serious "Solve this or the story ends sadly" puzzle - like I used in Part 5 of my earlier story Three Worlds Collide - but I'll have to take the current outcome into account when deciding whether to go through with that.
Let me argue that this chapter was in no way an anticlimax:
- We had no way to know which solution Harry might have come up with or picked (I still like the hat trick, even though I figured it was not likely the solution of choice).
- Neither Harry nor Eliezer is omniscient.
- Harry was under a lot of time pressure and had less information to work with than the readers
- There is a lot of 'motivation' to keep the story interesting which limits the available solution space (i.e., any solution that results in a terrible story is not really an option)
- A lot of entertainment is derived from 'watching' HOW things play out. History, movies, fiction, etc. are interesting to me for this reason and not because the main character did everything flawlessly.
- If Eliezer does want to do a puzzle plot piece, I strongly recommend accounting for the fact that many of us read this story to relax and do NOT exercise our full investigative powers on solving story problems, as that involves a lot of boring (for me at least) mundane work when done properly.
If that is the case, then the hat didn't actually say "it couldn't tell if Harry had any false memories." It said it couldn't detect deleted memories, and seems to imply that 'sophisticated analysis' of all of his memories for 'inconsistencies' would be required to do so. The false memory given to Hermione is at the forefront of her mind and doesn't require the hat to scan her memories (though Hermione could presumably replay memories of the event for the hat). In addition, the false memory is entirely out of character for Hermione, which is something the hat, at a minimum, should be able to verify. Considering the quote specifically addresses foreign memory, it seems entirely possible the hat may immediately detect the false memory for what it is.
The hat says specifically: "I can go ahead and tell you that there is definitely nothing like a ghost - mind, intelligence, memory, personality, or feelings - in your scar. Otherwise it would be participating in this conversation, being under my brim." It says memory specifically. Both a false memory and a 'scar memory' could at this point be treated as 'foreign' to Hermione.
Are you referring to this slightly earlier quote: "Anyway, I have no idea whether or not you've been Obliviated. I'm looking at your thoughts as they form, not reading out your whole memory and analyzing it for inconsistencies in a fraction of second. I'm a hat, not a god."? Here the hat says it cannot detect deleted memories or do the sophisticated analysis required to discover telltale 'inconsistencies'.
If the Sorting Hat has enough access to one's mind to sort children into their appropriate house, then it seems entirely possible that it has enough access to identify a false memory. The Sorting Hat is an extremely powerful artifact, which implies that the false memory would have to be the work of a significantly greater power for us to conclude, at this point, that it can remain hidden from the hat.
I'd like to "Hold Off on Proposing Solutions" or in this case hold off on advocating answers. I don't have time to list all the important bits of data we should be considering or enumerate all the current hypotheses, but I think both would be quite valuable.
Some quick hypotheses:
-Mr. Hat & Cloak is Quirrellmort & responsible for Hermione's 'condition'
-Mr. Hat & Cloak is Lucius & responsible for Hermione's 'condition'
-Mr. Hat & Cloak is Voldemort, but not the Quirrell body.
-Mr. Hat & Cloak is Quirrellmort and trying to take out Hermione as Harry's good side anchor.
-Mr. Hat & Cloak is NOT Quirrellmort.
-Mr. Hat & Cloak is Grindelwald
-The author has made massive continuity alterations, many of which are unclear; thus relying on continuity is difficult at best. This most severely impacts character personalities and motivations.
Some quick puzzle bits:
-Hermione thinks Draco and Snape are doing / plotting something evil and hates Draco.
-Hermione was subject to a "Groundhog Day Attack" (I'm not even sure what this is in the context of this story).
-Hermione's tone of voice was all satisfaction just before being arrested for allegedly trying to kill Draco.
-Draco has no idea what's going on.
-Hermione defeated Draco's spell which was supposed to be extremely hard to do.
-Black Hat and Cloak did something to Hermione's mind / memory.
-Prof. Q. has some major plotting going on.
-Prof. Q.'s goals are not very clear, but his speech in the dining hall may have shed some light on them, and it is consistent w/ his convos w/ Harry.
-Author went out of his way to 'rationalize' how Harry didn't go talk to Hermione. Not entirely clear that's consistent w/ Harry or Hermione.
Okay, that's all I've got time for...
One of the primary problems with rationalists, humanists, atheists, skeptics, etc., is that there is no higher-level organization, and thus we tend to accomplish very little compared to most other organizations. I fully support efforts to fix this problem.
If I understand this correctly, your 'AI' is biased to do random things, but NOT as a function of its utility function. If that is correct, then your 'AI' simply does random things (according to its non-utility bias), since its utility function has no influence on its actions.
I consider all of the behaviors you describe to be basically transform functions. In fact, I consider any decision maker a type of transform function: input data is run through a transform (such as a behavior-executor, a utility-maximizer, a weighted goal system, a human mind, etc.) and output data is generated (in the case of humans, sent to our muscles, organs, etc.). The reason I mention this is that trying to describe a human's transform function (i.e., what people normally call their mind) as mostly a behavior-executor or just a utility-maximizer leads to problems. A human's transform function is enormously complex and includes both behavior-execution aspects and utility-maximization aspects. I also find that attempts to describe a human's transform function as 'basically a __' result in a subsequent failure to look at the actual transform function when trying to figure out how people will behave.
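A minimal sketch of this framing, with all names illustrative: behavior-executors, utility-maximizers, and human-like mixtures are all just transforms from observations to actions.

```python
import random
from typing import Callable, Dict, List

Observation = Dict[str, str]
Transform = Callable[[Observation], str]  # observation in, action out

def behavior_executor(rules: Dict[str, str]) -> Transform:
    # Pure stimulus-response: look the situation up in a fixed rule table.
    return lambda obs: rules.get(obs["situation"], "do_nothing")

def utility_maximizer(actions: List[str],
                      utility: Callable[[Observation, str], float]) -> Transform:
    # Pure optimization: pick whichever action scores highest right now.
    return lambda obs: max(actions, key=lambda a: utility(obs, a))

def human_like(rules, actions, utility, habit_strength=0.8) -> Transform:
    # A crude mixture: mostly habit, occasionally deliberate optimization.
    habit = behavior_executor(rules)
    deliberate = utility_maximizer(actions, utility)
    return lambda obs: (habit(obs) if random.random() < habit_strength
                        else deliberate(obs))
```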
I am having trouble scanning the HPMoR thread for topics I'm interested in, due to both its length and the lack of hierarchical organization by topic. I would appreciate any help with this problem, since I do not want to make comments that simply duplicate previous comments I failed to notice. With that in mind, is there a discussion forum or some method to scan the HPMoR discussion thread that doesn't involve a lot of effort? I have not found sorting comments by points to be useful in this respect.
Edit: I'm new and this is my 1st comment. I've read a lot of the sequences, but I don't know my way around yet. It's quite possible I'm missing a lot about how things work here.