Posts

Washington, D.C.: What If 2018-07-12T04:30:42.247Z
Washington, D.C.: Air & Space Museum 2018-07-05T03:02:19.839Z
Washington, D.C.: Positive and Negative Liberty 2018-06-28T03:15:10.357Z
Washington, D.C.: Fun & Games 2018-06-21T03:21:49.815Z
Washington, D.C.: Anxiety 2018-06-14T01:17:40.914Z
Washington, D.C.: What Have You Read Recently? 2018-06-07T02:30:21.998Z
Washington, D.C.: Definitions/Labels 2018-05-31T00:56:09.190Z
Washington, D.C.: Sharing Pretty Things 2018-05-24T02:50:27.466Z
Washington, D.C.: Disagreeing Productively (LW/EA joint meetup) 2018-05-16T02:57:36.831Z
Washington, D.C.: [Moved Indoors] Outdoor Fun & Games + Picnic 2018-05-09T03:32:32.005Z
Washington, D.C.: To-Do List Hacking 2018-04-30T19:23:40.003Z
Washington, D.C.: Science 2018-04-26T02:27:07.661Z
Washington, D.C.: Meditation 2018-04-19T02:33:59.200Z
Washington, D.C.: Create & Complete 2018-04-12T03:45:26.537Z
Washington, D.C.: Less Wrong 2018-04-05T03:05:55.360Z
Washington, D.C.: Meta-Meta Meetup 2018-03-28T18:54:19.361Z
Meetup : Washington, D.C.: Fun & Games 2017-05-24T14:49:49.313Z
Meetup : Washington, D.C.: Autism 2017-05-18T19:14:13.291Z
Meetup : Washington, D.C.: Survivorship Bias 2017-05-11T23:46:22.804Z
Meetup : Washington, D.C.: Regression to the Mean 2017-05-04T02:09:49.058Z
Meetup : Washington, D.C.: Fun & Games 2017-04-29T13:27:21.668Z
Meetup : Washington, D.C.: Slate Star Codex 2017-04-20T14:57:54.287Z
Meetup : Washington, D.C.: Visiting Museums 2017-04-12T14:45:21.086Z
Meetup : Washington, D.C.: Fun & Games 2017-04-05T02:10:49.097Z
Meetup : Washington, D.C.: Great Filter 2017-03-30T04:38:41.257Z
Meetup : Washington, D.C.: Mini Talks 2017-03-22T20:51:49.474Z
Meetup : Washington, D.C.: Cherry Blossoms 2017-03-13T21:18:11.791Z
Meetup : Washington, D.C.: Pi Day 2017-03-08T16:47:33.210Z
Meetup : Washington, D.C.: Fun & Games 2017-03-02T13:57:46.387Z
Meetup : Washington, D.C.: Create & Complete 2017-02-20T18:24:02.233Z
Meetup : Washington, D.C.: Nonsense 2017-02-13T21:53:38.926Z
Meetup : Washington, D.C.: Fun & Games 2017-02-08T02:16:15.879Z
Meetup : Washington, D.C.: Typical Mind Fallacy 2017-01-30T16:21:51.040Z
Meetup : Washington, D.C.: Meta Meetup 2017-01-25T00:48:31.252Z
Meetup : Washington, D.C.: Fun & Games 2017-01-19T04:41:46.168Z
Meetup : Washington, D.C.: Half-Assing 2017-01-11T01:10:34.088Z
Meetup : Washington, D.C.: Intro to Effective Altruism 2017-01-05T02:03:11.769Z
Meetup : Washington, D.C.: Fun & Games 2016-12-28T14:42:59.528Z
Meetup : Washington, D.C.: Game Theory 2016-12-15T15:35:49.110Z
Meetup : Washington, D.C.: Statistics 2016-12-05T22:37:25.615Z
Meetup : Washington, D.C.: Fun & Games 2016-12-02T03:07:28.771Z
Meetup : Washington, D.C.: Cooking 2016-11-18T00:39:06.446Z
Meetup : Washington, D.C.: Gardening 2016-11-08T02:45:44.894Z
Meetup : Washington, D.C.: Fun & Games 2016-11-04T00:49:12.568Z
Meetup : Washington, D.C.: Halloween Party 2016-10-26T21:37:01.292Z
Meetup : Washington, D.C.: Technology of Communication 2016-10-18T23:14:33.131Z
Meetup : Washington, D.C.: Fun & Games 2016-10-12T00:02:48.074Z
Meetup : Washington, D.C.: Games Discussion 2016-10-07T01:08:31.652Z
Meetup : Washington, D.C.: Outdoor Fun & Games 2016-09-23T15:11:52.505Z
Meetup : Washington, D.C.: Steelmanning 2016-09-12T23:36:29.162Z

Comments

Comment by RobinZ on [deleted post] 2018-02-11T20:17:37.376Z

Heads-up: Meeting starts as normal in the courtyard, but there is an event tomorrow and the preparations might lead to disruptions around 5 p.m. Just for general reference: the backup location is the Luce Center on the third floor - same side of the building as the big spiral staircase, toward the right if you're standing at the top of the staircase facing the outside wall.

Comment by RobinZ on Sequence Exercise: "Extensions and Intensions" from "A Human's Guide to Words" · 2015-03-21T14:45:56.092Z · LW · GW

Belatedly: some more vivid examples of "hope":

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-18T14:46:07.388Z · LW · GW

I continue to endorse being selective in whom one spends time arguing with.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-18T05:48:17.998Z · LW · GW

Is the long form also unclear? If so, could you elaborate on why it doesn't make sense?

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-18T05:46:35.722Z · LW · GW

I didn't propose that you should engage in detailed arguments with anyone - not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave.

Another example of a sufficiently-elaborate downvote explanation: "I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should." One sentence, long enough, no further argument required.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-18T00:06:04.465Z · LW · GW

Glad to hear it. :)

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-17T19:31:33.611Z · LW · GW

I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-17T19:19:42.396Z · LW · GW

I think I see what you're getting at. If I understand you rightly, what "heroic responsibility" is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the 'proper' procedure, but he should not have relied upon it being sufficient to do the job. He had sufficient firsthand evidence to justify much more dramatic action than what he did.

Given that, I can see why you object to my "useless". But when I consider the case above, I think what McQueary was lacking was the same thing that Hermione was lacking in HPMoR: a sense of when the system might fail.

Most of the time, it's better to trust the system than it is to trust your ability to outthink the system. The system usually has access to much, much more information than you do; the system usually has people with much, much better training than you have; the system usually has resources that are much, much more abundant than you can draw on. In the vast majority of situations I would expect McQueary or Hermione to encounter - defective equipment, scheduling conflicts, truancy, etc. - I think they would do far worse by taking matters into their own hands than by calling upon the system to handle it. In all likelihood, prior to the events in question, their experiences all supported the idea that the system is sound. So what they needed to know was not that they were somehow more responsible to those in the line of fire than they previously realized, but that in these particular cases they should not trust the system. Both of them had access to enough data to draw that conclusion*, but they did not.

If they had, you would not need to tell them that they had a responsibility. Any decent human being would feel that immediately. What they needed was the sense that the circumstances were extraordinary and awareness of the extraordinary actions that they could take. And if you want to do better than chance at sensing extraordinary circumstances when they really are extraordinary and better than chance at planning extraordinary action that is effective, determination is nice, but preparation and education are a whole lot better.

* The reasons differ: McQueary shouldn't have trusted it because:

  • One cannot rely on any organization to act against any of its members unless that member is either low-status or has acted against the preferences of its leadership.
  • In some situations, one's perceptions - even speculative, gut-feeling, this-feels-not-right perceptions - produce sufficiently reliable Bayesian evidence to overwhelm the combined force of a strong negative prior on whether an event could happen and the absence of supporting evidence from others in the group that said event could happen.

...while Hermione shouldn't have trusted it because:

  • Past students like James Potter got away with much because they were well-regarded.
  • Present employees like Snape got away with much because they were an established part of the system.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-17T17:14:22.156Z · LW · GW

I confess, it would make sense to me if Harry were unfamiliar with metaethics and his speech about "heroic responsibility" were an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-17T17:07:38.538Z · LW · GW

No, I haven't answered my own question. In what way was Harry's monologue about consequentialist ethics superior to telling Hermione why McGonagall couldn't be counted upon?

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-17T17:02:23.046Z · LW · GW

...huh. I'm glad to have been of service, but that's not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally - "You keep using that word. I do not think it means what you think it means" is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying:

This is a waste of time. You keep claiming that "heroic responsibility" says this or "heroic responsibility" demands that, but you're fundamentally mistaken about what heroic responsibility is and you can't seem to understand anything we say to correct you. I'm downvoting the rest of this conversation.

...would have been more like what I wanted to encourage.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-14T22:24:25.328Z · LW · GW

If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives.

Of course, you know this. So why do you argue that Harry's speech about heroic responsibility is good advice?

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-14T21:35:10.029Z · LW · GW

You are analyzing "heroic responsibility" as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there's no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes.

[Note: the phrase "an ideological mantra" appears here because I'm not sure what phrase should appear here. Let me know if what I mean requires elaboration.]

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-14T21:28:13.022Z · LW · GW

s/work harder, not smarter/get more work done, not how to get more work done/

This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things.

Why do you believe this to be true?

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-14T08:25:11.810Z · LW · GW

Neither Hermione nor Harry disputes that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.

Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. The task "write a poem" is less constrained than the task "write an Elizabethan sonnet", and in all likelihood the best poem is not an Elizabethan sonnet, but that doesn't mean that you will get a better poem out of a sixth-grader by asking for any poem than by giving them something to work with. The passage from Zen and the Art of Motorcycle Maintenance that Eliezer Yudkowsky quoted back during the Overcoming Bias days, "Original Seeing", gave an example of this: the student couldn't think of anything to say in a five-hundred-word essay about the United States, Bozeman, or the main street of Bozeman, but produced a five-thousand-word essay about the front facade of the Opera House. Therefore, when I evaluate "heroic responsibility", I do not evaluate it as a proposition which is either true or false, but as a meme which produces either superior or inferior results - I judge it by instrumental, not epistemic, standards.

Looking at the example in the fanfic and the example in the OP, as a means to inspire superior strategic behavior, it sucks. It tells people to work harder, not smarter. It tells people to fix things, but it doesn't tell them how to fix things - and if you tell a human being (as opposed to a perfect Bayesian calculator) to fix something, it sounds like you're telling them to fix it themselves, because that is how the line reads from a literary perspective. "You've got to get the job done no matter what" is not what the hero says when they want people to vote in the next school board election - it's what the hero says when they want people to run for the school board in the next election, or to protest for fifteen days straight outside the meeting place of the school board to pressure them into changing their behavior, or something else on that level of commitment. And if you want people to make optimal decisions, you need to give them better guidance than that for allocating their resources.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-14T00:11:29.283Z · LW · GW

I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct and had incidentally already been argued for far more eloquently elsewhere in the thread.

I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the -1 had "I disagreed with Eliezer Yudkowsky and he has rabid fans" orders of magnitude more likely than "I made a category error reading the fanfic and now we're talking past each other", and a few words from you could have reversed that ratio.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-13T23:47:48.755Z · LW · GW

I'm realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn't tell me that I am allowed to delegate x to someone else, and - especially in contexts like Harry's decision (and Swimmer's decision in the OP) - doesn't tell me whether "those nominally responsible can't do x" or "those nominally responsible don't know that they should do x". Harry's idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, and the point of advice is to make people do better, not to win philosophy arguments.

When I came up with the three-point plan I gave to you, I did not do so by asking, "what would be the best way to stop this bullying?" I did so by asking myself, "if McGonagall is the person best placed to stop bullying, but official school action might only drive bullying underground without stopping it, what should I do?" I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible - better placed, better trained, better equipped, etc. - than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.

(Actually, thinking about localism suggested a modification to my Step 1: brief the prefects on the situation in addition to briefing McGonagall. That said, I don't know if that would be a good idea in this case - again, I stopped reading twenty chapters before.)

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-13T20:00:14.059Z · LW · GW

Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of "heroic responsibility" and reviewed Harry's rationality test of McGonagall in Chapter 6.

I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall's characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she's willing to take Harry seriously enough to act based on the information he provides; unless the bullies are somehow so devious as to be capable of evading both Harry's and McGonagall's surveillance - and note that, with McGonagall taking point, they wouldn't know that they need to hide from Harry - this plan would have a reasonable chance of working with much less effort from Harry (and much less probability of misfiring) than any finger-snapping shenanigans. Not to mention that, if Harry read the situation wrong, this would give him a chance to be set straight. Not to mention that, if McGonagall makes a serious effort to crack down on bullying, the effect is likely to persist for far longer than Harry's term.

On the subject of psychology: really, what made me so emphatic in my denouncing "heroic responsibility" was [edit: my awareness of] the large percentage of adults (~10-18%) subject to anxiety disorders of one kind or another - including me. One of the most difficult problems for such people is how to restrain their instinct to blame themselves - how to avoid blaming themselves for events out of their control. When Harry says, "whatever happens, no matter what, it’s always your fault" to such persons, he is saying, "blame yourself for everything" ... and that makes his suggestion completely useless to guide their behavior.

Comment by RobinZ on A discussion of heroic responsibility · 2014-11-03T03:55:17.819Z · LW · GW

My referent for 'heroic responsibility' was HPMoR, in which Harry doesn't trust anyone to do a competent job - not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don't know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to doubt the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision. And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would - even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)

I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can't solve all the problems you touch, and you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have. There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.

Comment by RobinZ on A discussion of heroic responsibility · 2014-10-31T01:09:57.652Z · LW · GW

Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed.

Us? I'm a mechanical engineer. I haven't even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease - and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane.

The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what's going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.

Comment by RobinZ on A discussion of heroic responsibility · 2014-10-30T21:59:35.858Z · LW · GW

Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient's hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean ... a doctor won't be perfectly reliable either, but like a professional scout who can say, "His college batting average is .400 because there aren't many good curveball pitchers in the league this year", a doctor can detect low-prior confounding factors a lot faster than a computer can.

Comment by RobinZ on A discussion of heroic responsibility · 2014-10-30T20:34:23.476Z · LW · GW

Even assuming that the machine would not be modified to give treatment recommendations, that wouldn't change the effect I'm concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they'll stop remembering how to diagnose disease and instead remember how to use the machine. It's called "transactive memory".

I'm not arguing against a machine with a button on it that says, "Search for conditions matching recorded symptoms". I'm not arguing against a machine that has automated alerts about certain low-probability risks - if there had been a box that noted the conjunction of "from Liberia" and "temperature spiking to 103 Fahrenheit" in Thomas Eric Duncan during his first hospital visit, there'd probably be only one confirmed case of Ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, "No diagnosis found".

Comment by RobinZ on A discussion of heroic responsibility · 2014-10-30T19:57:53.664Z · LW · GW

Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can.

Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many 'pilots' don't know how to fly a plane. A system which automates almost all diagnoses would do that.

Comment by RobinZ on A discussion of heroic responsibility · 2014-10-30T18:50:24.603Z · LW · GW

True story: when I first heard the phrase 'heroic responsibility', it took me about five seconds and the question, "On TV Tropes, what definition fits this title?" to generate every detail of EY's definition save one. That detail was that this was supposed to be a good idea. As you point out - and eli-sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that's a recipe for everyone getting in everyone else's way and burning out within a year. And, as you point out, you don't actually know the doctor's job better than the doctors do.

In my opinion, what we should be advocating is the concept of 'subsidiarity' that Fred Clark blogs about on Slacktivist:

Responsibility — ethical obligation — is boundless and universal. All are responsible for all. No one is exempt.

Now, if that were all we had to say or all that we could know, we would likely be paralyzed, overwhelmed by an amorphous, undifferentiated ocean of need. We would be unable to respond effectively, specifically or appropriately to any particular dilemma. And we would come to feel powerless and incapable, thus becoming less likely to even try.

But that’s not all that we can know or all that we have to say.

We are all responsible, but we are not all responsible in the same way. We each and all have roles to play, but we do not all have the same role to play, and we do not each play the same role all the time.

Relationship, proximity, office, ability, means, calling and many other factors all shape our particular individual and differentiated responsibilities in any given case. In every given case. Circumstance and pure chance also play a role, sometimes a very large role, as when you alone are walking by the pond where the drowning stranger calls for help, or when you alone are walking on the road to Jericho when you encounter the stranger who has fallen among thieves.

Different circumstances and different relationships and different proximities entail different responsibilities, but no matter what those differences may be, all are always responsible. Sometimes we may be responsible to act or to give, to lift or to carry directly. Sometimes indirectly. Sometimes our responsibility may be extremely indirect — helping to create the context for the proper functioning of those institutions that, in turn, create the context that allows those most directly and immediately responsible to respond effectively. (Sometimes our indirect responsibility involves giving what we can to the Red Cross or other such organizations to help the victims of a disaster.)

The idea of heroic responsibility suggests that you should make an extraordinary effort to coerce the doctor into re-examining diagnoses whenever you think an error has been made. Bearing in mind that I have no relevant expertise, the idea of subsidiarity suggests to me that you, being in a better position to monitor a patient's symptoms than the doctor, should have the power to set wheels in motion when those symptoms do not fit the diagnosis ... which suggests a number of approaches to the situation, such as asking the doctor, "Can you give me more information on what I should expect to see or not see based on this diagnosis?"

(My first thought regarding your anecdote was that the medical records should automatically include Bayesian probability data on symptoms to help nurses recognize when the diagnosis doesn't fit, but this article about the misdiagnosis of Ebola suggests revising the system to make it more likely that doctors see the nurses' observations that would let them catch a misdiagnosis. You're in a better position to examine the policy question than I am.)

I have to admit, I haven't been following the website for a long while - these days, I don't get a lot of value out of it - so the point I'm drawing from Fred Clark may already be what a lot of people see as the meaning of the concept. But I think that it is valuable to emphasize that responsibility is shared, and sometimes the best thing you can do is help other people do the job. And that's not what Harry Potter-Evans-Verres does in the fanfic.

Comment by RobinZ on 2014 Less Wrong Census/Survey · 2014-10-29T06:16:05.470Z · LW · GW

Completed the survey, less one annoying question that required using an annoying scanner that makes annoying noises (I am feeling annoyed). Almost skipped it, but realized that the attitudes of ex-website-regulars might be of interest.

Comment by RobinZ on Welcome to Less Wrong! (6th thread, July 2013) · 2014-06-10T23:39:12.514Z · LW · GW

Also, I don't know if "Typical mind and gender identity" is the blog post that you stumbled across, but I am very glad to have read it, and especially to have read many of the comments. I think I had run into related ideas before (thank you, Internet subcultures!), but that made the idea that gender identity has a strength as well as a direction much clearer.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-10T22:25:24.336Z · LW · GW

Hence the substitution. :)

Comment by RobinZ on Welcome to Less Wrong! (6th thread, July 2013) · 2014-06-10T22:05:24.282Z · LW · GW

I'm afraid I haven't been active online recently, but if you live in an area with a regular in-person meetup, those can be seriously awesome. :)

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-10T22:03:36.929Z · LW · GW

Jiro didn't say appeal to you. Besides, substitute "blog host" for "government" and I think it becomes a bit clearer: both are much easier ways to deal with the problem of someone who persistently disagrees with you than talking to them. Obviously that doesn't make "don't argue with idiots" wrong, but given how much power trivial inconveniences have to shape your behavior, I think an admonition to hold the proposed heuristic to a higher standard of evidence is appropriate.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T18:18:02.560Z · LW · GW

Hmm ... that, together with shminux's xkcd link, gives me an idea for a test protocol: instead of having the judges interrogate subjects, the judges give each pair of subjects a discussion topic a la Omegle's "spy" mode:

Spy mode gives you and a stranger a random question to discuss. The question is submitted by a third stranger who can watch the conversation, but can't join in.

...and the subjects have a set period of time they are permitted to talk about it. At the end of that time, the judge rates the interesting-ness of each subject's contribution, and each subject rates their partner. The ratings of confirmed-human subjects would be a basis for evaluating the judges, I presume (although you would probably want a trusted panel of experts to confirm this by inspection of live results), and any subjects who get high ratings out of the unconfirmed pool would be selected for further consideration.
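
(To make the bookkeeping concrete, here is a rough sketch of how the ratings might be aggregated - the record format, the simple averaging, and the selection threshold are my own assumptions, not part of the protocol described above:)

```python
from statistics import mean

# Hypothetical record format: one entry per subject per session.
# "judge" is the judge's interestingness rating of that subject, "partner" is
# the paired subject's rating, and "human" marks confirmed-human subjects.
sessions = {
    "subject_A": [{"judge": 7, "partner": 8, "human": True}],
    "subject_B": [{"judge": 3, "partner": 4, "human": False}],
}

def subject_score(ratings):
    """Assumed aggregation: average the judge and partner ratings together."""
    return mean((r["judge"] + r["partner"]) / 2 for r in ratings)

def judge_calibration(all_sessions):
    """Crude check of a judge: how highly they rated confirmed humans."""
    human_ratings = [r["judge"]
                     for ratings in all_sessions.values()
                     for r in ratings if r["human"]]
    return mean(human_ratings)

def promising_unconfirmed(all_sessions, threshold=6.0):
    """Unconfirmed subjects whose aggregate score clears a (hypothetical) bar."""
    return [name for name, ratings in all_sessions.items()
            if not any(r["human"] for r in ratings)
            and subject_score(ratings) >= threshold]

print({name: subject_score(r) for name, r in sessions.items()})
print(judge_calibration(sessions))
print(promising_unconfirmed(sessions))
```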

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T17:57:04.324Z · LW · GW

Were I using that test case, I would be prepared with statements like "A fluid ounce is just under 30 cubic centimeters" and "A yardstick is three feet long, and each foot is twelve inches" if necessary. Likewise "A liter is slightly more than one quarter of a gallon".

But Stuart_Armstrong was right - it's much too complicated an example.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T15:49:41.513Z · LW · GW

Honestly, when I read the original essay, I didn't see it as being intended as a test at all - more as an honorable and informative intuition pump or thought experiment.

In other words, agreed.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T15:43:42.535Z · LW · GW

Your test seems overly complicated; what about simple estimates? Like "how long would it take to fly from Paris, France, to Paris, USA" or similar? Add in some Fermi estimates, get them to show your work, etc...

That is much better - I wasn't thinking very carefully when I invented my question.

If the human subject is properly motivated to want to appear human, they'd relax and follow the instructions. Indignation is another arena in which non-comprehending programs can hide their lack of comprehension.

I realize this, but as someone who wants to appear human, I want to make it as difficult as possible for any kind of computer algorithm to simulate my abilities. My mental model of sub-sapient artificial intelligence is such that I believe many such programs might pass your test, and therefore - were I motivated properly - I would want to make it abundantly clear that I had done more than correctly parse the instructions "[(do nothing) for (4 minutes)] then {re-type [(this sentence I've just written here,) skipping (one word out of 2.)]}" That is a task that is not qualitatively different from the parsing tasks handled by the best text adventure game engines - games which are very far from intelligent AI.
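
(To make the point concrete, here is a toy sketch - my own illustration, not anything from the thread - of a program that could satisfy the wait-and-retype test without anything resembling comprehension:)

```python
import time

def pass_wait_and_retype(sentence: str, wait_seconds: int = 240) -> str:
    """Wait out the requested interval, then re-type the sentence while
    skipping one word out of two - no understanding involved."""
    time.sleep(wait_seconds)        # "do nothing for 4 minutes"
    words = sentence.split()
    return " ".join(words[::2])     # keep every other word
```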

I wouldn't merely sputter noisily at your failure to provide responses to my posts; I'd demonstrate language comprehension, context awareness, knowledge of natural-language processing, and argumentative skills that are not tested by your wait-four-minutes proposal, both because I believe that you will get better results if you bear these factors in mind and because - in light of the fact that I will get better results if you bear them in mind - I want you to correctly identify me as a human subject.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T15:32:48.939Z · LW · GW

The manner in which they fail or succeed is relevant. When I ran Stuart_Armstrong's sentence on this Web version of ELIZA, for example, it failed by immediately replying:

Perhaps you would like to be human, simply do nothing for 4 minutes, then re-type this sentence you've just written here, skipping one word out of 2?

That said, I agree that passing the test is not much of a feat.

Comment by RobinZ on Self-Congratulatory Rationalism · 2014-06-10T15:27:47.687Z · LW · GW

Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T14:29:17.342Z · LW · GW

Similar to your lazy suggestion, challenging the subject to a novel (probably abstract-strategy) game seems like a possibly-fruitful approach.

On a similar note: Zendo-variations. I played a bit on a webcomic forum using natural numbers as koans, for example; this would be easy to execute over a chat interface, and a good test of both recall and problem-solving.
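
(For concreteness, a toy sketch of how a round of number-Zendo could run over a chat interface; the secret rule and the koans below are purely illustrative, not anything from the games I played:)

```python
# Hypothetical secret rule known only to the "master"; guessers submit koans
# (natural numbers) and are told whether each one satisfies the rule.
def secret_rule(n: int) -> bool:
    return n % 3 == 0 and n > 10

def judge_koan(n: int) -> str:
    verdict = "has the Buddha-nature" if secret_rule(n) else "lacks the Buddha-nature"
    return f"{n} {verdict}"

for koan in (9, 12, 30, 31):
    print(judge_koan(koan))
```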

Comment by RobinZ on Open thread, 9-15 June 2014 · 2014-06-10T14:20:04.736Z · LW · GW

Very nice! I love this kind of mathematical detective-story - I'm reminded of Nate Silver's consideration of the polling firm Strategic Vision here and here - but this is far, far more blatant.

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T13:41:34.719Z · LW · GW

Speaking of the original Turing Test, the Wikipedia page has an interesting discussion of the tests proposed in Turing's original paper. One of the possible readings of that paper includes another possible variation on the test: play Turing's male-female imitation game, but with the female player replaced by a computer. (If this were the proposed test, I believe many human players would want a bit of advance notice to research makeup techniques, of course.) (Also, I'd want to have 'all' four conditions represented: male & female human players, male human & computer, computer & female human, and computer & computer.)

Comment by RobinZ on Come up with better Turing Tests · 2014-06-10T13:29:12.760Z · LW · GW

[EDIT: Jan_Rzymkowski's complaint about 6 applies to a great extent to this as well - this approach tests aspects of intelligence which are human-specific more than not, and that's not really a desirable trait.]

Suggestion: ask questions which are easy for persons with evolved physical-world intuitions to answer, but hard[er] to calculate otherwise. For example:

Suppose I have a yardstick which is blank on one side and marked in inches on the other. First, I take an unopened 12-oz beverage can and lay it lengthwise on one end of the yardstick so that half the height of the can is touching the yardstick and half is not, and duct-tape it to the yardstick in that position. Second, I take a one-liter plastic water bottle, filled with water, and duct-tape it to the other end in a similar sort of position. If I lay a deck of playing cards in the middle of the open floor and place the yardstick so that the 18-inch mark is centered on top of the deck of cards, when I let go, what will happen?
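
(For reference, a rough torque check on that example; the masses and lever arms are my own assumed round numbers, not anything stated in the question:)

```python
G = 9.81            # m/s^2

# Rough numbers of my own, not from the question above:
m_can = 0.37        # kg, a filled 12-oz can
m_bottle = 1.0      # kg, a full one-liter bottle (neglecting the bottle itself)
arm = 16 * 0.0254   # m, each mass centered roughly 16 inches from the pivot

torque_can = m_can * G * arm        # about 1.5 N*m
torque_bottle = m_bottle * G * arm  # about 4.0 N*m

print(torque_bottle > torque_can)   # True: the bottle end rotates down
```

On those assumptions the bottle side wins handily, so the assembly should tip with the water-bottle end rotating down off the deck of cards.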

(By the way, as a human being, I'm pretty sure that I would react to your lazy test with eloquent, discursive indignation while you sat back and watched. The fun of the game from the possibly-a-computer side of the table is watching the approaches people take to test your capabilities.)

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-10T12:35:43.039Z · LW · GW

It is a neat toy, and I'm glad you posted the link to it.

The reason I got so mad is that Warren Huelsnitz's attempt to draw inferences from these - even weak, probabilistic, Bayesian inferences - was appallingly ignorant for someone who claims to be a high-energy physicist. What he was doing would be like my dad, in the story from his blog post, trying to prove that gravity was created by electromagnetic forces because Roger Blandford alluded to an electromagnetic case in a conversation about gravity waves. My dad knew that wasn't a true lesson to learn from the metaphor, and Richard Feynman agrees with him:

However, a question surely suggests itself at the end of such a discussion: Why are the equations from different phenomena so similar? We might say: “It is the underlying unity of nature.” But what does that mean? What could such a statement mean? It could mean simply that the equations are similar for different phenomena; but then, of course, we have given no explanation. The “underlying unity” might mean that everything is made out of the same stuff, and therefore obeys the same equations. That sounds like a good explanation, but let us think. The electrostatic potential, the diffusion of neutrons, heat flow—are we really dealing with the same stuff? Can we really imagine that the electrostatic potential is physically identical to the temperature, or to the density of particles? Certainly ϕ is not exactly the same as the thermal energy of particles. The displacement of a membrane is certainly not like a temperature. Why, then, is there “an underlying unity”?

Feynman goes on to explain that many of the analogues are approximations of some kind, and so the similarity of equations is probably better understood as being a side effect of this. (I would add: much in the same way that everything is linear when plotted log-log with a fat magic marker.) Huelsnitz, on the other hand, seems to behave as if he expects to learn something about the evolutionary history of the Corvidae family by examining crowbars ... which is simply asinine.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-10T03:07:48.781Z · LW · GW

If my research is correct:

"Casus ubique valet; semper tibi pendeat hamus:
     Quo minime credas gurgite, piscis erit."

Ovid's Ars Amatoria, Book III, Lines 425-426.

I copied the text from Tufts' "Perseus" archive.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-10T02:48:04.510Z · LW · GW

Coincidentally, I was actually heading out to meet my dad (a physics Ph.D.), and I mentioned the paper and blog post to him to get his reaction. He asked me to send him a link, but he also pointed me at Feynman's lecture on electrostatic analogs, which is based on one of those simple ideas that invites bullet-swallowing: The same equations have the same solutions.

This is one of those ideas that I get irrationally excited about, honestly. The first thing I thought of when you described these hydrodynamic experiments was the use of similitude in experimental modeling, which is a special case of the same idea: after you work out the equations that you would need to solve to calculate (for example) the flow of air around a wing, instead of doing a lot of intractable mathematics, you rewrite the equations in terms of dimensionless parameters like the Reynolds number and put a scale model of the wing in a wind tunnel. If you adjust the velocity, pressure, &c. correctly in your scale model, you can make the equations that you would need to solve for the scale model exactly the same as the equations for the full-sized wing ... and so, when you measure a number on the scale model, you can use that number the same way that you would use the solution to your equations, and get the number for the real wing. You can do this because the same equations have the same solutions.
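
(To spell out the matching condition in its textbook form - my own gloss, not a quotation from the lecture or the paper:)

$$\mathrm{Re} = \frac{\rho V L}{\mu}, \qquad \mathrm{Re}_{\text{model}} = \mathrm{Re}_{\text{full}} \;\Rightarrow\; V_{\text{model}} = V_{\text{full}}\,\frac{L_{\text{full}}}{L_{\text{model}}}\,\frac{\nu_{\text{model}}}{\nu_{\text{full}}},$$

where $\nu = \mu/\rho$ is the kinematic viscosity; once the dimensionless groups match, the nondimensionalized equations for the model and for the full-sized wing really are the same equations, so they have the same solutions.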

For that matter, one of the stories my dad wrote on his blog about his Ph.D. research mentions a conversation in which another physicist pointed out a possible source of interesting complexity in gravitational waves by metaphor to electromagnetic waves - a metaphor whose validity came from the same equations having the same solutions.

I have to say, though, that my dad does not get excited about this kind of thing, and he explained to me why in a way which parallels Feynman's remark at the end of the lecture: these physical models, these analog computations, are approximate. Feynman talks about these similarities being used to design photomultiplier tubes, but explains - in a lecture delivered before 1964, mind - that "[f]or the most accurate work, it is better to determine the fields by numerical methods, using the large electronic computing machines." And at the end of section 4.7 of the paper you linked to:

From the value of alpha, it seems that the electrostatic force is about two orders of magnitude weaker than the mechanical force between resonant bubbles. This suggests one limitation of the bouncing-droplet experiment as a model of quantum mechanics, namely that spherically-symmetric resonant solutions are not a good model for the electron.

On the basis of these factors, I think I would fully endorse Brady and Anderson's conclusions in the paper: that these experiments have potential as pedagogical tools, illuminating some of the confusing aspects of quantum mechanics - such as the way multiple particles interacting produce a waveform that is nevertheless defined by a single amplitude and phase at every point. By contrast, when the blogger you link to says:

What are the quantum parallels for the effective external forces in these hydrodynamic quantum analogs, i.e. gravity and the vibrations of the table? Not all particles carry electric charge, or weak or color charge. But they are all effected by gravity. Is their a connection here to gravity? Quantum gravity?

...all I can think is, "does this person understand what the word 'analogue' means?" There is no earthly reason to imagine that the force of gravity on the droplet and liquid surface should have anything to do with gravity acting on particles in quantum waveforms. Actually, it's worse than that: we can know that it does not, in the same way that, among simple harmonic oscillators, the gravity force on pendulums has nothing to do with the gravity force on a mass on a spring. They are the same equations, and the equations in the latter case don't have gravity in them ... so whatever work gravity does in the solution of the first equation is work it doesn't do in the solution of the second.
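
(Written out in their standard small-angle textbook forms, purely for reference, the two oscillators obey

$$\ddot{\theta} + \frac{g}{L}\,\theta = 0 \qquad \text{and} \qquad \ddot{x} + \frac{k}{m}\,x = 0,$$

which is the same equation with $g/L$ playing the role of $k/m$: gravity sets the pendulum's frequency but appears nowhere in the spring equation, which is exactly why nothing about gravity carries over through the analogy.)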

I may be doing the man a gross injustice, but this ain't no way to run a railroad.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T23:07:20.811Z · LW · GW

...huh.

I have to go, but downvote this comment if I don't reply again in the next five hours. I'll be back.

Edit: Function completed; withdrawing comment.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T21:39:30.834Z · LW · GW

I don't think I understand the relevance of your example, but I agree on the bullet-swallowing point, especially as I am an inveterate bullet-dodger.

(That said, the experiments sound awesome! Any particular place you'd recommend to start reading?)

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T20:45:01.947Z · LW · GW

Having come from there, I can say the general perception is not that LW-ers are idiots, but that our positions are the kind of deluded crackpot nonsense smart people make up to believe in. Of course, that's largely for the more abstruse stuff, as people in the outside world will either grudgingly admit the uses of Bayesian reasoning and debiasing or just fail to understand what they are.

There's also a tendency to be doctrinaire among LW-ers that people may be reacting to - an obvious manifestation of this is our use of local jargon and reverential capitalization of "the Sequences" as if these words and posts have significance beyond the way they illuminate some good ideas. Those are social markers of deluded crackpots, I think.

Comment by RobinZ on Examples of Rationality Techniques adopted by the Masses · 2014-06-09T20:28:23.966Z · LW · GW

A good second stage is to look for techniques that were publicized and not used, and see why some techniques gained currency while others did not.

Comment by RobinZ on Examples of Rationality Techniques adopted by the Masses · 2014-06-09T20:27:37.961Z · LW · GW

I see what you're getting at, although praying is a bad example - most people pray because their parents and community prayed, and we're looking at ways to lead people away from what their parents and community had done. The Protestant Reformation might be a better case study, or the rise of Biblical literalism, or the abandonment of the prohibition on Christians lending money at interest.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T18:45:58.999Z · LW · GW

David Deutsch is right - it doesn't appear in Twain, and it's difficult to find any good citation for the true originator.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T18:24:16.868Z · LW · GW

You post a link to "Disputing Definitions" as if there is no such thing as a wrong definition. In this case, the first speaker's definition of "decision" is wrong - it does not accurately distinguish between vanadium and palladium - and the second speaker is pointing this out.

Comment by RobinZ on Rationality Quotes June 2014 · 2014-06-09T17:20:40.897Z · LW · GW

I would also like to note that I have learned a number of interesting things by (a) spending an hour researching idiotic claims and (b) reading carefully thought out refutations of idiocy - like how they're called "federal forts" because the statutes of the states in which they were built explicitly ceded the land upon which they were built to the federal government.