
Comments sorted by top scores.

comment by Kaj_Sotala · 2012-10-29T21:44:25.055Z · LW(p) · GW(p)

I'm a little confused by this post, in that it seems to be a little all over the place and comes off as a general "how things weren't so good, but are better now and there's a happy end" story. It says that it's about agency, but several of the problems involved (e.g. there being two factions with contradictory goals in the original group, the fact that the purchasing negotiations were complex) have no obvious connection to people being agenty.

I also have only a rough guess of what's meant by "agenty" in the first place, which might contribute to my confusion. I think this post could benefit if it explicitly gave a definition for the term and more clearly linked the various parts of the story with that definition. There are already some parts where the connection to agency is made quite clearly - e.g. "Today I know that if an agenty person has to write bylaws and they don't have experience, they go off and read about how to write bylaws" - but they're currently the exception rather than the norm.

I'm also not entirely convinced that "agency" is the best possible way of characterizing the events in question. For instance:

I appreciate the very few people who came to all of the meetings, and the people who actually put down their money and committed who didn't come to meetings. Even the people who just did a little, took on a risk that other people didn't, they did a lot more than the people who did nothing.

This sounds to me like "some of the people were more motivated by this goal than others" could be both a more accurate and a more useful description than "some were more agenty than others". More useful in the sense that the first description allows for the possibility of asking questions like "Might these people work harder if they were more motivated? Could they be more motivated? Is this lack of motivation a sign of the fact that they're not so strongly on board with this anyway, suggesting troubles later on?" and then taking different kinds of action based on the answers you get. The second description feels more likely to just make one shrug their shoulders and think "eh, they're just not sufficiently agenty, that's too bad". Or to put it in other words, the "agent" characterization sounds like an error model of people rather than a bug model.

So I think that this post would also benefit from not just defining "agenty", but also saying a few words about why we should expect this to be a useful and meaningful concept.

Replies from: ShannonFriedman, John_Maxwell_IV
comment by ShannonFriedman · 2012-10-30T04:42:13.678Z · LW(p) · GW(p)

Hi Kaj,

Thanks for taking the time to write out this thoughtful feedback and these questions.

I'm a little confused by this post, in that it seems to be a little all over the place and comes off as a general "how things weren't so good, but are better now and there's a happy end" story.

I needed an example to make my point, and the founding of Tortuga was the one I came up with. That particular story was all over the place and a mess, which is kind of the point: real life is messy. The whole thing was a big mess that Patri and I and the group somehow managed to persist through and make work.

The ending I was shooting for was more appreciation of people like Patri, especially those in this community, and both inspiration and caution regarding agency. It's really, really hard, and some people do it. If you try it and you're not used to it, you'll probably fail immediately. This is to be expected, and if you really want to be an agent, you don't give up and let that stop you, as it would stop most people.

but several of the problems involved (e.g. there being two factions with contradictory goals in the original group, the fact that the purchasing negotiations were complex) have no obvious connection to people being agenty.

Yes, that was just an example of the stupid crap that came up in this particular case. How we dealt with it was agenty - we didn't just let it destroy the project. Patri did research, I figured out how our case was like an example in his research, and he figured out a solution to the problem we identified. In most cases, when a group gets stuck on something like two factions, it simply fails, and that is the end of the project.

Sorry about the lack of a definition of agency - it's a term used very frequently by the Less Wrong types I hang out with, so I figured it was common and safe lingo to use. I should have known better, since I also had someone ask in another post. Here's my quick answer:

Taking personal responsibility for making things happen. Observing opportunities and going for them. Taking risks.

And here's something from Lukeprog:

http://lesswrong.com/lw/5i8/the_power_of_agency/

I'm also not entirely convinced that "agency" is the best possible way of characterizing the events in question. For instance:

I appreciate the very few people who came to all of the meetings, and the people who actually put down their money and committed who didn't come to meetings. Even the people who just did a little, took on a risk that other people didn't, they did a lot more than the people who did nothing.

I don't think I said anything about comparing agency, or that every single thing I wrote was specifically and directly about agency - you are arguing with a claim I didn't make. Writing that was an attempt to show appreciation for people doing anything, since most people in my experience do absolutely nothing to make things happen outside of societal norms. It's frustrating that everyone doesn't do more, but I do want to give at least some positive reinforcement for doing anything. If it hadn't been for the people who came to meetings, nothing would have happened, in the same way that if Patri hadn't been there, nothing would have happened. It's just that people who come to meetings are much more common than Patris. Feel free to ask clarifying questions on this - I realize it's not the most elegantly written, and I'm not quite sure how to get at exactly what you're after.

So I think that this post would also benefit from not just defining "agenty", but also saying a few words about why we should expect this to be a useful and meaningful concept.

People who are agenty are people who make shit happen. Amazing things don't just happen by themselves. That's why the world doesn't function the way we can all imagine it could in more ideal circumstances. To really make the world as awesome as it could be, we need more agents. And the agents we do have are almost all struggling with the sort of problems that came up in the founding of Tortuga, as I described. The problems differ from situation to situation, but generally there is a very small number of people on any given project who are really thinking about it, acting with intention, and keeping the big picture in mind, and they have to manage everything and everyone else, which is very challenging.

I feel like I should edit this more since these are such good questions, but unfortunately I don't have the energy for it right now and am unlikely to in the near future. I hope this helps!

Replies from: Mitchell_Porter, ialdabaoth
comment by Mitchell_Porter · 2012-10-30T09:40:36.029Z · LW(p) · GW(p)

I think what Kaj is responding to, is that the post doesn't have the abstract clarity of purpose of a typical post in the Main forum. It's more of a personal history and a passionate exhortation to reward agency when it appears within the LW community. It's a bit out of line for me to play LW front-page style-pundit, when I am mostly a careless creature of Discussion and have no ambition to shape LW's appearance or editorial norms, and I even sort of like the essay as it is; but it probably does deserve a rewrite. (It'll get twice as many upvotes if you do it really well.)

Replies from: ShannonFriedman
comment by ShannonFriedman · 2012-11-01T02:05:39.300Z · LW(p) · GW(p)

Thanks for explaining.

It's true, my writing is not as high quality as most of the top-level posts. I'm not a professional writer at all. I did get someone good to edit this for me, though, so it's much better than it would have been without that.

I don't know of anyone who is a better writer than I am who understands this content and cares about it enough to put it out there, so I did it myself. If you, or anyone you know who is a better writer, would like to do a rewrite, by all means - I would love for them to do it!

Replies from: fezziwig
comment by fezziwig · 2012-11-02T14:40:44.814Z · LW(p) · GW(p)

I don't think it's the general quality of your writing that's causing problems; I think it's a particular, specific flaw in this essay. Compare this comment thread to the one under 'How To Deal With Depression' -- there's agreement and there's disagreement, but unlike in this comment thread there's no deep confusion about what your point is and how your essay supports it.

So what is that flaw? My theory is that 'agentiness' is psychological phlogiston, an imprecise non-explanation which should be purged from our collective vocabulary with great force. Taboo it, decompose it and retry.

If I'm right about the problem but wrong about the solution, my next best guess is that you've chosen too complicated an anecdote. I can see why you wouldn't want to expand on the hospital story specifically, but something about that size might work better.

Hope this helps.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-11-03T03:52:42.915Z · LW(p) · GW(p)

I agree that there isn't a problem with Shannon's prose. I thought agentiness was a clear concept, but I might be kidding myself.

comment by ialdabaoth · 2012-11-02T00:27:13.487Z · LW(p) · GW(p)

The ending I was shooting for was more appreciation of people like Patri, especially those in this community, and both inspiration and caution regarding agency. It's really, really hard, and some people do it. If you try it and you're not used to it, you'll probably fail immediately. This is to be expected, and if you really want to be an agent, you don't give up and let that stop you, as it would stop most people.

In all seriousness, though, why bother? As long as there are colossi striding the world, what possible effect will we mere mortals have?

In general, agency provides its own rewards. I'm more curious what kind of teleological narrative we mere mortals can maintain, in the face of people who are simply, objectively better than us at getting shit done no matter what.

What influence do average people have on anything that actually matters, compared to people like Patri or Eliezer?

Replies from: None, Strange7
comment by [deleted] · 2012-11-02T11:23:32.459Z · LW(p) · GW(p)

As someone who has met Patri and Eliezer (and many other heroes besides), I can tell you this: they are men of flesh and blood, with their own insecurities and fears. And I can tell you that they cannot do it alone - why else would Patri have started the Seasteading Institute, or Eliezer Less Wrong? They have both put significant labor into building communities, support networks, and organizations because they need the help of 'average people'.

They are impressive. Let's strive to emulate their best qualities. But to the extent that we wait in the shadows for them to fix the world for us, we also sabotage their efforts. They need us. They need you.

I'd also recommend you take a look at this diagram.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-02T18:19:45.445Z · LW(p) · GW(p)

That assumes that the individual is in control of their own mindset.

Mindsets arise through an interaction of the individual and their environment. The individual's social environment, in particular, plays a strong role in determining one's view of challenges and opportunities, of flaws and capabilities, and of agency and fate.

In the absence of warmth, sunlight, nutrients and water, a seed will not grow, even if it is (genetically) a perfectly formed and hardy seed. In the absence of resources and adequately-scaled challenges, a mind will not flourish, even if it is (genetically) a perfectly formed and clever mind.

Replies from: None
comment by [deleted] · 2012-11-06T00:40:33.683Z · LW(p) · GW(p)

You sound like you're making excuses for not trying to do things. It seems like you're trying to defend your belief that you're incapable, because admitting that you don't have to be would mean you'd a) have to do something difficult like try things, b) have to face the potential for failure, and c) have to admit that you've been wasting your time working on things that don't matter as much as what you could be working on.

Secondly - Less Wrong isn't the worst environment for nurturing your mindset. For all the inaction we have around here, we at least have some pretty good memes (see the Challenging the Difficult sequence).

Anyway - I think you'll improve your mindset as soon as you want to. I'm going to get back to trying to help.

Replies from: Strange7
comment by Strange7 · 2012-11-13T22:49:32.047Z · LW(p) · GW(p)

I believe ialdabaoth is referring to other environmental factors, not Less Wrong.

Replies from: None
comment by [deleted] · 2012-11-14T05:59:08.162Z · LW(p) · GW(p)

My sense was that he was discussing one's 'environment in general', and I was recommending thinking of LW as part of his environment, since it has some good memes. I wasn't trying to correct a misunderstanding of LW, but rather encourage him to absorb good memes from LW.

comment by Strange7 · 2012-11-13T22:52:09.100Z · LW(p) · GW(p)

Colossi are better at getting shit done when surrounded by a legion of supporters than when alone. Any given member of that legion may be interchangeable or even ultimately dispensable, but each has a marginal contribution to make.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T22:55:03.660Z · LW(p) · GW(p)

True. I guess my own personal narrative has taught me to be extremely distrustful of any role where I am ultimately dispensable and interchangeable - I'm tired of being reassigned to bus-axle greasing duties while the bus is still rolling.

comment by John_Maxwell (John_Maxwell_IV) · 2012-10-30T03:44:03.301Z · LW(p) · GW(p)

I think the idea of near mode and far mode might be useful in formulating a definition. Something like, "a person is like an agent to the extent that they consistently and intelligently work towards accomplishing far-mode goals".

Also, http://www.paulgraham.com/relres.html

comment by Alicorn · 2012-10-29T18:28:54.244Z · LW(p) · GW(p)

I find the following emotions are often associated with me being insufficiently "agenty":

  • Wistfulness. An example.

  • Exasperation. (Especially exasperation that someone else is not being agenty, about something that I could just as easily take over and get done myself.)

  • Ugh-field sensations.

  • Creative "stuckness". Talking to a beta reader almost always clears up this problem for me inside of fifteen to twenty minutes even if the beta doesn't actually have anything to say, and I still don't instantly grab one and start yammering when I feel it.

  • Non-strategic "but I don't know how to do X!" (This is sometimes useful strategically, though.)

Replies from: Swimmer963, Armok_GoB
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-10-29T19:47:56.832Z · LW(p) · GW(p)

I'm insufficiently "agenty" in the following situations:

-When my working memory gets too full, i.e. when I'm doing a clinical at the hospital and my mental "to-do" list gets too long, I stop caring whether I'll get everything done, how I'll get everything done, whether I know how to do everything I'm supposed to do, etc. I then become a little obedient puppy who follows people around waiting until they tell me to do things.

-Whenever my mental response to a situation is "are you serious?", my actual response is likely to be less than enthusiastic.

-Feeling embarrassed because of someone else's behaviour - similar to exasperation. I don't know if this is a conditioned response to not getting along with fellow members of group projects, but whenever I'm watching someone else struggle because they're unprepared, say or do something stupid, etc., my motivation to make an effort drops to zero.

comment by Armok_GoB · 2012-10-29T20:19:57.357Z · LW(p) · GW(p)

By a 5-second estimate, this might interact usefully with some of those phone apps that ask you about stuff at random intervals. Every few minutes, at random, the app asks if you're feeling any of those things and has you click "yes" or "no"; if yes, it prompts you with the standard response you should have.
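A minimal sketch of what that loop might look like (everything here is illustrative: the feelings list borrows from Alicorn's comment above, the responses are hypothetical placeholders, and a real phone app would use its platform's notification API rather than a terminal):

```python
import random
import time

# Hypothetical mapping from warning-sign feelings (per Alicorn's list above)
# to prepared standard responses. Both sides are illustrative placeholders.
PROMPTS = {
    "wistfulness": "Ask: what concrete action would make this possible?",
    "exasperation at a non-agent": "Consider just taking the task over yourself.",
    "ugh-field sensation": "Name the avoided task; break off a five-minute piece.",
    "creative stuckness": "Go grab a beta reader and start talking.",
}

def run_sampler(mean_interval_minutes=20):
    """Prompt at random, exponentially distributed intervals; on a 'yes',
    surface the prepared response for that feeling."""
    while True:
        time.sleep(random.expovariate(1.0 / (mean_interval_minutes * 60)))
        for feeling, response in PROMPTS.items():
            if input(f"Feeling {feeling}? (y/n) ").strip().lower() == "y":
                print(response)

run_sampler()
```

Exponentially distributed gaps keep the prompts unpredictable, which is the point of random sampling: you can't unconsciously time your answers around them.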

comment by JoshuaFox · 2012-10-29T19:40:12.583Z · LW(p) · GW(p)

Often, if people fail to be agenty in the context that you are interested in, they are simply saving their energies to be agenty in another.

As to the 95% failure rate: It might have been for the best that many of those projects did not proceed. Just because you've started something doesn't mean you should finish it.

Replies from: handoflixue
comment by handoflixue · 2012-10-30T20:44:31.375Z · LW(p) · GW(p)

Do you have any particular reason to believe that most people are really living at ideal agency levels and simply "saving" it for the things that matter? I'm pretty positive I'm both distinctly above-average in agency (judged by largely having a successful life and resolving my complaints with it over time), and still have fairly severe failures.

At least for me, "I don't have the energy to accomplish X now, but I'll put it on my list for when I can" and "I didn't realize I could accomplish X" are very different states, and it seems like the average person has only a minimal sense of the former.

Replies from: JoshuaFox
comment by JoshuaFox · 2012-10-30T20:47:29.109Z · LW(p) · GW(p)

Even the few people who show agency on any project whatsoever are non-agenty on most projects whose goals they support.

comment by MixedNuts · 2012-10-29T21:09:33.455Z · LW(p) · GW(p)

I don't think it's just people being stupid. I mean, if I try to become a PC, I will die. (I will abandon my support structures, become unable to sustain effort before building new ones, get horribly sick, and die.) Many people have big losses on failure (internal or external), like responsibility to family.

Still, since you're a PC who knows lots of PCs: how do people, in practice, go about things like "At the age of 12, Badass Example left her village in East Timor to join the French Resistance. After assassinating Hitler, she founded the Donner Party and was elected President of Belgium."? I don't think you can just look up "How to locate and join a secret group" on eHow.

Replies from: ShannonFriedman
comment by ShannonFriedman · 2012-11-09T00:25:17.429Z · LW(p) · GW(p)

I can understand your concern, given how you are seeing PC. Being a PC does not mean you have to do any specific thing, so it by no means requires that you abandon your support structures and all that. To me, a PC is someone who makes conscious choices to honor the values they most care about. They tend to see novel solutions to problems because they are willing to consider anything that will solve the problem. They do tend to be less risk averse, but ideally they are not stupid about it :)

It's a well-known bias that people are naturally much more motivated by fear of pain than by seeking pleasure (I've heard figures of 2 or 3 to 1, pain avoidance vs. pleasure seeking). This is not how I want to live my life, so I have taken steps to correct my psychology for this, to optimize for maximal utility over minimal suffering.

As far as how to do this, there are a lot of personal growth gurus out there happy to teach you things. Landmark is the cheap version that is everywhere in the US, and I can recommend several people in California, New York, and Canada who I know to be especially good, depending on what specialty you are most interested in and what your tolerance for woo is. A lot of LWers are very intolerant of woo, which in my opinion is throwing the baby out with the bathwater, since I think that community does provide a lot of genuine value, so YMMV.

Replies from: zerker2000, MixedNuts
comment by zerker2000 · 2012-12-27T10:03:11.297Z · LW(p) · GW(p)

Woo has been renamed to pitches, noting for posterity. Easy enough to google; then again so is gur onfvyvfx yet everyone treats it as a big secret.

comment by MixedNuts · 2012-11-09T05:38:23.581Z · LW(p) · GW(p)

Something's not getting through. I know you understand how depression works so I'm sort of at a loss here. I don't think I have any options other than "Never do anything out of the ordinary" and "Bite off more than I can chew, then jump ship when everything comes crashing down, abandoning any people I was supposed to work with, and neglect everything while recovering from exhaustion".

Replies from: ShannonFriedman
comment by ShannonFriedman · 2012-11-27T22:32:28.733Z · LW(p) · GW(p)

Your vision of being a PC is different from mine - we have very different basic assumptions. I don't think there is anything in particular required, other than making conscious choices. So there's nothing requiring you to bite off more than you can chew or to abandon anyone. For most people in most situations, I would recommend switching to a more PC mode in small steps. Just try to change one mental habit at a time at first. Pick the lowest-hanging fruit. Talking to some sort of coach is very helpful if you can afford it, for help with deciding what to prioritize and for accountability. Does that help?

Replies from: MixedNuts
comment by MixedNuts · 2012-11-28T09:43:01.061Z · LW(p) · GW(p)

Not in the least. The only way I can interpret your "anything in particular required, other than making conscious choices" is adding "and I consciously choose to do so" after "I'm sick as a dog, there's no way I'm going to class, or doing anything more tiring than collapsing back into bed with Downton Abbey fanfic". Can I have an example, preferably of a very small step?

Replies from: ShannonFriedman
comment by ShannonFriedman · 2012-12-06T13:50:24.096Z · LW(p) · GW(p)

I'm not quite sure I'm parsing what you're saying correctly, but I'll give it a try. I would say that if you are genuinely sick, making a conscious choice would often mean doing what is required to recover quickly, so going to bed is quite reasonable. Other agenty things to do about sickness would be to take vitamin D or other remedies that you've determined will help you throw off the cold faster. I also consider whether or not to take Dayquil or Nyquil: even though my understanding is that they don't actually help you get over the cold faster, they do often help with work and sleep, so I actively weigh whether it is better to be more highly functioning while sick vs. focusing on speed of recovery.

There was a time when I had major surgery, for stage 4 endometriosis, and wanted to go to a wedding precisely a week afterward, when I was told recovery was 2-3 weeks. I was told that I probably shouldn't fly, but that sometimes people recover early enough. So what I did in that case was to focus on recovery as hard as I could for the five days after the surgery. When the nurse asked me if I wanted to go home from the hospital as soon as possible, I said that I wanted to stay right where I was for the full prepaid time I was allowed, short of spending another night. Why would I cause extra trauma to my body in the early stages of recovery just to be in a familiar environment? Then, after the first 24 hours, where I slept as much as possible but did some walking around every hour to be careful about blood clots, for the next four days I hardly got out of bed, and took sleeping pills to encourage myself to stay in bed. I did this until it was time to pack, at which point I got up, packed, got on a plane, and was actually recovered enough by the wedding to be able to dance!

That is an example of agentiness as I see it, because I was working within the constraints I had, and actively thinking about and doing what would cause me to recover most quickly because of my goal of making it to the wedding.

Replies from: DaFranker, MixedNuts, army1987
comment by DaFranker · 2012-12-06T13:54:32.338Z · LW(p) · GW(p)

This sounds like a great example of many small things that one does get better at after some training in instrumental rationality.

comment by MixedNuts · 2012-12-06T17:33:31.128Z · LW(p) · GW(p)

Your surgery recovery example is weird, because (as you describe it) the nurse came to you and asked you to make a choice with well-defined options (any length of time between "as soon as possible" and "as late as is paid for") with consequences that were already salient to you. That's more agenty than "go with whatever the nurse suggests", but I think most of us can make choices when handed a menu.

Let me take a very stupid example. You want some bylaws written. You look up "how to write bylaws", and notice there's a lot to it. You estimate you'll become able to write bylaws after 200 hours of research. The options that immediately occur to you are:

  • Research bylaws as much as you are physically able to. This leads to about six hours of learning about bylaws, followed by a daze where you read the same sentence over and over again for ten hours until you can drag yourself into bed, followed by a few entirely unproductive days.

  • Research bylaws for a couple hours each day, then walk away while you are still fresh. This requires 100 productive days, which, adding days where you have to do something more important and entirely unproductive days, represents about a year. A year later, Patri has written perfectly good bylaws and has started looking for housing and your knowledge is useless.

  • Chuck non-vital projects like "write bylaws" and focus entirely on becoming more productive. Ten years later, wonder why you haven't done anything with your life. Drown your crushing sense of failure in whichever drug you determine costs the fewest QALYs.

  • Think until you find a better option.

Your mental fog is too heavy to decide, so you stretch and stagger into the kitchen for a drink of water. The light bulb needs changing, but your knee and balance are acting up, so you save that for later. After a drink, a light snack, and a few minutes forcing yourself into motion, you manage to get yourself to shower. Afterwards, you feel able to think clearly.

What do you do?

Replies from: ShannonFriedman
comment by ShannonFriedman · 2012-12-16T23:10:13.215Z · LW(p) · GW(p)

I actually find examples like the surgery thing quite frequently in life - the most unusual thing about it may be the way I framed it. I notice options and possibilities and win/win scenarios for making unusual agreements where most people don't.

With the hospital example, I think the nurse just asked me if I wanted to go home, as opposed to giving me a list of options and implications, although I do not have a recording of the conversation.

Regarding more complex examples, it depends on things like opportunity cost. One of the first things I would do would be to discuss it with Patri and the other agents in the group. When you have multiple agents, you can optimize among everyone's good ideas, and if you cooperate, you don't end up with situations like case #2, where Patri and I duplicate work.

comment by A1987dM (army1987) · 2012-12-07T11:34:26.939Z · LW(p) · GW(p)

There was a time when I had major surgery, for stage 4 endometriosis, and wanted to go to a wedding precisely a week afterward, when I was told recovery was 2-3 weeks. I was told that I probably shouldn't fly, but that sometimes people recover early enough. So what I did in that case was to focus on recovery as hard as I could for the five days after the surgery. When the nurse asked me if I wanted to go home from the hospital as soon as possible, I said that I wanted to stay right where I was for the full prepaid time I was allowed, short of spending another night. Why would I cause extra trauma to my body in the early stages of recovery just to be in a familiar environment? Then, after the first 24 hours, where I slept as much as possible but did some walking around every hour to be careful about blood clots, for the next four days I hardly got out of bed, and took sleeping pills to encourage myself to stay in bed. I did this until it was time to pack, at which point I got up, packed, got on a plane, and was actually recovered enough by the wedding to be able to dance!

I've done stuff like that plenty of times. Sometimes I've even done the reverse (stuff myself with medicines and stuff this afternoon so that I'll be fine for the party tonight, even if that means I'd likely be very sick tomorrow and the day after, when I wouldn't have much to do anyway so I wouldn't mind staying in bed that much).

comment by Said Achmiz (SaidAchmiz) · 2012-10-29T18:57:59.750Z · LW(p) · GW(p)

Perhaps I missed some previous required-reading, but... what exactly do you mean by "agenty", "agentiness", etc.?

(Also, what does "PC" refer to in this context?)

Edit: This?

Replies from: Manfred, Vaniver, John_Maxwell_IV
comment by Manfred · 2012-10-29T20:48:14.258Z · LW(p) · GW(p)

She means being proactive. If you want something to happen, you do it, or you make it happen however you can.

For example, there's a local meetup group that we've had a couple meetings of - but we don't have a schedule, we only have meetings when someone is proactive - they say "I've found the meeting place, the activity, I've emailed the people, come by, I will certainly be there, I will make sure we have a good time no matter how many people show up." And then we have a meetup.

I'm reminded of the line from The Caine Mutiny that naval ships were designed by geniuses to be run by idiots. If you want to do something, you can either design the ship, or you can just run an existing ship. If we had schedules and a book full of activities and a large group on a mailing list, someone with all the proactiveness of a bag of rocks could make meetups happen. But we don't have those things, so nothing happens unless someone is feeling "agent-ey."

Replies from: SaidAchmiz, Jolly
comment by Said Achmiz (SaidAchmiz) · 2012-10-29T21:24:06.010Z · LW(p) · GW(p)

If we had schedules and a book full of activities and a large group on a mailing list, someone with all the proactiveness of a bag of rocks could make meetups happen. But we don't have those things, so nothing happens unless someone is feeling "agent-ey."

This suggests that the way to systematically make things happen is not to organize meetings, but to put in place such a system (schedules, book full of etc.) for organizing meetings. Otherwise you need someone to feel "agent-ey" every time; that doesn't seem sustainable.

Edit: That is, if you're a person in such a group and you're feeling "agent-ey", Manfred's comment suggests that your efforts would be better spent putting a system in place that would allow things to happen without any agentness involved, as opposed to putting forth the effort to make a thing happen this one time. I'm not sure if my experience supports this; I'll have to think about it.

Replies from: Manfred
comment by Manfred · 2012-10-29T23:25:08.004Z · LW(p) · GW(p)

The big choke-point then being item #3 - getting a large group :P

comment by Jolly · 2012-10-30T00:12:27.924Z · LW(p) · GW(p)

This mirrors my own experience - the way I've found to have the most influence and get the most done is often not being the one completing the tasks, but rather being the one who creates and documents the processes and procedures, and who teaches and trains other people to do the work.

It's also far more lucrative from a career standpoint! :D

comment by Vaniver · 2012-10-29T19:04:16.057Z · LW(p) · GW(p)

PC refers to "player character." In many games there are two kinds of characters: NPCs, most of whom don't have goals and function primarily as scenery, and PCs, who both have goals and move heaven and earth to achieve them.

As for "agentiness," I think a similar term is executive-nature. They're an entity that can be well modeled by having goals, planning to achieve those goals, and achieving those goals. Many people just react to life; agents act.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2012-10-29T19:15:45.478Z · LW(p) · GW(p)

Ah, thank you. I'm quite familiar with the term, and with the PC/NPC distinction in games; I just didn't make the connection in this context. So the idea here is that most people don't have goals? Or have goals, but don't act to further them?

Replies from: Nisan
comment by katydee · 2012-10-30T06:04:28.573Z · LW(p) · GW(p)

Would you consider removing the last paragraph of this post? It reads like an overt "look at all the high-status people I know! I'm high-status too!" bid and jars with the rest of the post, which is significantly better.

Replies from: pewpewlasergun
comment by pewpewlasergun · 2012-10-30T07:05:00.856Z · LW(p) · GW(p)

I disagree. It also provides several other examples for those (like Kaj_Sotala) who didn't find the post's example of agency sufficient.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-31T05:41:48.902Z · LW(p) · GW(p)

Those examples are not descriptive of how agency is hard. They don't bolster the strength of the post.

comment by NancyLebovitz · 2012-10-30T00:08:25.447Z · LW(p) · GW(p)

Hypothesis: One reason there aren't many agenty people is that a lot of parents find that agenty children are more of a challenge to their authority than they want and/or take more resources of various sorts than they feel they can afford.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-10-30T04:30:21.483Z · LW(p) · GW(p)

My feeling about this is: Look at animals. They pretty much just do random stuff: hang out here, hang out there, eat when they're hungry, etc. It'd be kind of surprising if evolution had taken us from that to industrial strength ass-kicking fast enough for civilization not to have formed in between.

(Additionally, there's the whole near/far idea, that the intelligent parts of our brains aren't supposed to be controlling our behavior, really.)

Replies from: None
comment by [deleted] · 2012-11-03T03:31:46.707Z · LW(p) · GW(p)

They pretty much just do random stuff: hang out here, hang out there, eat when they're hungry, etc.

That sounds like the behavior of a bored domestic cat, or a bear in a mediocre zoo. Wild animals, especially clever, complex-brained ones can get up to some abstract or spontaneous stuff. Elephants hold funeral vigils for their dead (and at least sometimes for dead humans as well); orcas hunting seals will play sadistically with a captured pup for minutes or hours before getting down to the business of feeding; echidnas will go to incredible lengths to explore something novel, even after ascertaining it has no chance of providing them with food. There's one recent funny case of an Antarctic leopard seal attempting to teach a human scuba diver how to kill penguins (in much the same way cats sometimes catch mice and leave them for their house-humans). When you add in complex social interactions, animal behavior (particularly that of any species prone to human-noticeable levels of personality variation) is quite dynamic -- and even a lot of what looks like superficially simple behavior is the product of low-level drives common to many organisms being acted out in unique ways by the individual critter.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-03T04:29:42.102Z · LW(p) · GW(p)

Thanks for the info. My impression was that emotions are like different modes that an animal can operate under, and you switch modes in a kind of haphazard way based on social and environmental cues, energy level, etc. Does that sound more or less accurate?

Has coherent goal-directed behavior spanning multiple days been observed in animals?

Replies from: None, Emily
comment by [deleted] · 2012-11-03T18:06:34.659Z · LW(p) · GW(p)

Well, that's sufficiently vaguely-phrased that just something like a pack of wolves or orcas pursuing their quarry for days, which does happen, would seem to qualify. Or the bird building a nest as described below.

FWIW, pregnant African elephants often find a good time and place to give birth around the end of their term and then consume the leaves of a certain tree to induce labor (humans in the area use it for the same purpose). The pregnancy takes over a year and the labor itself, once begun, can take several days.

comment by Emily · 2012-11-03T15:30:15.179Z · LW(p) · GW(p)

Something as simple as a bird building a nest would seem to meet that criterion.

comment by gwern · 2012-10-31T20:49:49.045Z · LW(p) · GW(p)

An even stronger criticism of AGI, both in agent and tool forms, is that a general intelligence is unlikely to be developed for economic reasons: specialized AIs will always be more competitive.

Economic reasoning cuts many ways. Consider the trivial point known as Amdahl's law: speedups are always limited by the slowest serial component. (I've pointed this out before but less explicitly.)

Humans do not increase their speed even if specialized AIs are increasing their speed arbitrarily. Therefore, a human+specialized-AI system's performance asymptotically approaches the limit where the specialized-AI part takes zero time and the human part takes 100% of the time. The moment an AGI even slightly outperforms a human at using the specialized-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
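To make the asymptote explicit, in the standard statement of Amdahl's law (the notation is mine, and the 10% figure below is purely illustrative):

$$ S(s) = \frac{1}{(1-p) + p/s}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1-p} $$

where $p$ is the fraction of the work the specialized AI handles and $s$ is its speedup. If the human's share $1-p$ is 10% of the work, total system speedup can never exceed 10x however large $s$ grows, and replacing the human with an even slightly faster AGI is the only way past that ceiling.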

Since humans are a known fixed quantity, if an AGI can be improved - even if at all times it is strictly inferior to a specialized AI at the latter's specialization - then eventually an AGI+specialized-AI system will outperform a human+specialized-AI system barring exotic unproven assumptions about asymptotic limits.

(What human is in the loop on high frequency trading? Who was in the loop when Knight Capital's market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop because humans in the loop would not have been economically competitive. That's fine when it's 'just' hundreds of millions of dollars at stake and companies can decide to take the risk for themselves or not - but the stakes can change, externalities can increase.)

comment by John_Maxwell (John_Maxwell_IV) · 2012-10-30T04:24:39.470Z · LW(p) · GW(p)

I think there's a bit of a chicken-and-egg problem when you're not much of an agent yet, and you haven't accomplished anything interesting under your own steam, so it doesn't really even seem worthwhile to plan anything out. (Another failure mode: it does seem worthwhile to plan things out, but only because you haven't yet noticed that you rarely work on any of your plans.) Probably it makes sense to debug what's making you ineffective and build up a track record before tackling anything really big (see success spirals).

If you're one of those people who makes plans but never works on them, it might be a good idea to start being very distrustful of yourself whenever you say you're going to do something. Concrete example: Maybe there's some topic you've been intending to study for a while. If you were distrustful of yourself, instead of just continuing to intend to study it, you might block out some time during your week, then set up reminders on your cell phone, rules for when you can skip your study period, rules for when you're allowed to abandon the project entirely, etc.

The problem with these behavior regulation devices is that building them takes a large activation energy. Here are my solutions for this so far:

  • Jot down ideas for a device whenever you have them, even if you aren't going to implement them immediately.

  • Wait until you're in an especially energetic or inspired mood, then take advantage of it and implement a few devices (or debug ones that failed).

  • Have the devices come into effect a while after you've finished building them (ex: build your device in the evening and have it activate the next morning).

  • Consume stimulants like coffee or kratom.

In my experience, after using such devices for a while I no longer needed them as much.

Of course, there are other components to being an agent, e.g. fearlessness. The modern world is pretty safe, but evolution calibrated us for a world that was much more dangerous, especially with regard to social blunders.

comment by Shmi (shminux) · 2012-10-29T20:25:41.371Z · LW(p) · GW(p)

I'm wondering if what you call agency and what EY calls PC (or, in extreme/fictional cases, heroic responsibility) is what the rest of the (English-speaking) world calls initiative/motivation/perseverance?

Replies from: gwern
comment by gwern · 2012-10-29T22:02:09.724Z · LW(p) · GW(p)

what the rest of the (English-speaking) world calls initiative/motivation/perseverance?

I don't think it's quite the same thing; nor do I think your three choices are synonymous or mean the same things.

EY used to use a term, 'anti-sphexishness', based on Hofstadter's description in GEB of a sphex wasp which executes its nesting program endlessly if someone messes with it. That term seems to be synonymous with 'PC' or 'heroic responsibility', but one would certainly not describe it as 'motivation' or 'perseverance' - after all, the sphex wasp endlessly executing its program displays motivation and perseverance beyond any mere human! (Motivation and perseverance beyond that, in fact, of the biologist who was messing with it in Hofstadter's description.)

Or take this example: an Asian kid is told by his parents to become a doctor, and after endless studying gets into med school, graduates, does his internship etc and becomes a full-fledged doctor. As expected, such things correlate with Conscientiousness, the doctor could fairly be described as having 'motivation' and 'perseverance' - but did he display 'initiative'?

Similarly we can think of examples of people who display 'initiative' and 'motivation' but not perseverance (think ADHD) while also not especially being 'PC' - they follow their whim in choosing topics of interest (initiative, because certainly no one told them to pick said topics), and they prosecute said topic with great energy and intensity (motivation), but this leads to no lasting change and represents no deep thought about their goals, preferences, and the state of the world.

Replies from: shminux
comment by Shmi (shminux) · 2012-10-29T23:09:04.217Z · LW(p) · GW(p)

I don't think it's quite the same thing; nor do I think your three choices are synonymous or mean the same things.

I agree, I meant / as +. Since when is division not the same thing as addition... What else would one add to the mix to get the meaning right?

Replies from: gwern
comment by gwern · 2012-10-30T01:00:26.192Z · LW(p) · GW(p)

I'm not sure. The meaning of PC/anti-sphexishness/heroic-responsibility seems to be a sort of stepping outside of routine, comparing the status quo with the original intrinsically desirable goals the status quo was supposed to achieve, and taking action to remedy the discrepancies.

You could call this 'visionary' or 'philosophical' or 'righteous' or 'wise' but none of them seem right to me - probably why those 3 terms were invented. 'Enlightened' comes close but only if you were living 2 centuries ago, because these days 'enlightened' is sarcastic or religious in undertones. (That is, figures like Voltaire were 'enlightened' but also definitely reminiscent of the 3 terms.)

comment by arundelo · 2012-11-03T14:35:38.577Z · LW(p) · GW(p)

Right after WWII John Holt was finishing out his U.S. Navy tour of duty on the west coast. In Never Too Late he tells this story about his favorite band at the time, the Woody Herman Herd, who he had listened to only on records:

They had been playing on the East Coast, and one of the many reasons I was eager to get out of the Navy was so that I could go hear them. Just as I was getting close to the date of my discharge, I heard terrible news -- the Herman band was going to come to the West Coast to play for a couple of months, and was then going to break up. I was going to miss them! I would never hear them! I was such a timid and conventional young man that it never occurred to me, not for a second, that I might stay out on the West Coast, arrange to get discharged there, see something of California and the Northwest, and hear Woody Herman in the process. But no, my home was in the East, and when the war ended I had to go home.

(Luckily he did manage to schedule his trip back east so that he crossed paths with the Herd in Chicago.)

comment by CronoDAS · 2012-10-29T19:09:22.088Z · LW(p) · GW(p)

As you say, getting things to happen takes a lot of time and effort, which is one of the things I learned when working on group projects in school. I think that most people, when they realize just how much effort it takes to Get Stuff Done, usually end up saying "screw it, there are other things I'd rather be doing" and go on doing whatever it is that they were doing before.

Replies from: drethelin, Swimmer963
comment by drethelin · 2012-10-29T20:02:31.247Z · LW(p) · GW(p)

Thing is, getting things to happen doesn't actually take a LOT of time and effort (depending on what you're trying to make happen). The difference between something happening and not can be as simple as making a Facebook event and inviting a bunch of people. The key is that someone has to take responsibility, however little or however much effort that is, and say "I will be the one to try and make this happen" instead of "Man, this should really be a thing."

Replies from: CronoDAS
comment by CronoDAS · 2012-10-29T20:10:14.075Z · LW(p) · GW(p)

It's the old "herding cats" problem - finding a time and place that you can get enough people to show up at is hard.

Replies from: drethelin
comment by drethelin · 2012-10-29T20:16:06.421Z · LW(p) · GW(p)

Never ask. Asking a bunch of people about that will take forever and produce no conclusive answer. Schedule events some time in advance, with a concrete time and place, at whatever you guess will be convenient for multiple people. People will happily give you input after you do this, and you can change it later, but this is much more reliable.

Replies from: gwern
comment by gwern · 2012-10-29T21:42:12.362Z · LW(p) · GW(p)

The beauty of defaults: the default to an open question is to ignore or not act, the default to an announcement of a date is to think about it.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-10-29T19:40:54.184Z · LW(p) · GW(p)

I think the biggest problem with group projects in school is that a) the people you're working with aren't pre-filtered for motivation, and b) you're working on toy problems that, likely, no one in the group would bother to do if it weren't assigned. Some people want to get 90% because they have a scholarship and need to maintain a certain GPA, but some people are happy just to pass and want to do the minimum of work, and pretty much everyone has the attitude that "it's just school".

I've had group projects break down both because I was the most motivated person in the group (and lacked the leadership/interpersonal skills to deal with this), and because I was the least motivated person in the group (I'm somewhat spoiled and used to getting As without putting in too much time, and I was not prepared to have 3-hour group meetings starting at 8 pm after class.)

In general, I would expect group projects to run more smoothly in the workplace, both because people are more motivated (they're in their chosen field, they're getting paid, they're working on a real-world problem) and because the process of getting hired filters for interpersonal skills, which getting accepted into a college or university doesn't, so you're less likely to end up with people who can't work with other people.

Replies from: Eliezer_Yudkowsky, CronoDAS, Bugmaster
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-30T00:53:38.228Z · LW(p) · GW(p)

Grade school didn't assign me many group projects. In fact, I can only remember one. And on that one I think I tried cooperating for like 15 minutes or something and then told the other two kids, "Go away, I'll handle this" because it was easier to just do all the work myself.

Sometimes our early life experiences really are that metaphorical.

Replies from: SaidAchmiz, wedrifid, William_Quixote
comment by Said Achmiz (SaidAchmiz) · 2012-10-30T03:13:54.209Z · LW(p) · GW(p)

Similar, possibly relevant anecdote from my own life:

In a recent computer science class I took, we (the entire class as a whole) were assigned a group project. We were split up into about 4 sub-groups of about 6 people each; each sub-group was assigned a part of the project. I was on the team that was responsible for drawing up specifications, coordinating the other groups, testing the parts, and assembling them into a whole.

Like Eliezer, I quickly realized that I could just write the whole thing myself (it was a little toy C++ program). And I did (it took maybe a couple of days). However, the professor was (of course) not willing to simply let me submit a complete project which I had written in its entirety; and since the groups were separated, there was no way for me to submit my work in a way that would plausibly let me claim that any sort of cooperation had taken place.

So I had to spend the rest of the semester trying to get the other groups to independently write the code that I had already written, trying to get them to see that the solutions I'd come up with were in fact working ones, and generally having conversations like the following:

Clueless Classmate: How should we do this? Perhaps [some approach that won't work]?

SaidAchmiz: Mmm... perhaps we might instead try [the solution I'd already written].

CC: That doesn't make any sense and will never work!

SA: Sigh.

I'm not quite sure what this could be a metaphor for, but it certainly felt rather metaphorical at the time...

Replies from: wedrifid
comment by wedrifid · 2012-10-30T03:23:22.090Z · LW(p) · GW(p)

I'm not quite sure what this could be a metaphor for, but it certainly felt rather metaphorical at the time...

It sounds like a metaphor for "what you need to learn to handle effectively in order to succeed in a typical workplace". Good luck!

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2012-10-30T03:28:02.849Z · LW(p) · GW(p)

My question is: are there some straightforward heuristics one can apply to find/select a workplace where such things occur as little as possible? At what kinds of places can one expect more of this, and at what kinds less? The effort to find a workplace where you do NOT have to handle such situations seems like it would be more effective in the long run (edit: that is, more effective in achieving happiness/sanity/job satisfaction) than learning to deal with said situations (though of course those things are not mutually exclusive!).

Replies from: wedrifid
comment by wedrifid · 2012-10-30T03:30:04.550Z · LW(p) · GW(p)

My question is: are there some straightforward heuristics one can apply to find/select a workplace where such things occur as little as possible? At what kinds of places can one expect more of this, and at what kinds less?

Yes, and it is an extremely high expected-value decision to actively seek out people who understand which workplaces are likely to be most suitable according to this and other important metrics.

comment by wedrifid · 2012-10-30T03:26:43.108Z · LW(p) · GW(p)

Grade school didn't assign me many group projects. In fact, I can only remember one. And on that one I think I tried cooperating for like 15 minutes or something and then told the other two kids, "Go away, I'll handle this" because it was easier to just do all the work myself.

Sometimes our early life experiences really are that metaphorical.

And sometimes they aren't. As a 'grown up' you outright founded (with assistance) an organisation to help you handle your new project, as well as playing a pivotal role in forming a community around a relevant area of interest. Congratulations are in order for learning to transcend the "just do it myself" instinct---at least for the big things, when it matters.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2012-10-30T03:33:25.924Z · LW(p) · GW(p)

It seems to me that "selecting people with whom I can usefully cooperate" is different from "learning to cooperate with arbitrarily assigned people". Do you think that captures the distinction between Eliezer's grade school anecdote and his later successes, or is that not a meaningful difference?

Replies from: wedrifid
comment by wedrifid · 2012-10-30T03:37:46.738Z · LW(p) · GW(p)

It seems to me that "selecting people with whom I can usefully cooperate" is different from "learning to cooperate with arbitrarily assigned people". Do you think that captures the distinction between Eliezer's grade school anecdote and his later successes

It is certainly a significant factor. (Not the only one. Eliezer is also wiser as an adult than he was as a child.)

comment by William_Quixote · 2012-10-30T04:33:22.881Z · LW(p) · GW(p)

Math over metaphor. This is a common experience. Assume a child at the 99th percentile of their age group by intelligence.

Elementary school is assigned by geography, so the average intelligence there is the 50th percentile (or fairly close).

By middle school there may not be general tracking for all students, but very low performers have been tracked off (say 20% of students), so the average intelligence is the 60th percentile.

In high school there are often a high and a low track; split the students 50/50 and the average intelligence in the high track is the 80th percentile.

If the student then goes to a college that rejects 90% of applicants (or gets work in a similarly selective profession), the average intelligence is the 98th percentile,

and all of a sudden the student is now well socialized and has learned the important skill of cooperating with their peers.

EDIT: the above holds if you track “effectiveness”, which is some combination of conscientiousness and intelligence, instead of intelligence. In practice I expect most tracking systems capture quite a bit of conscientiousness, but the above reads more cleanly with “intelligence” in each line than with “some combination of conscientiousness and intelligence”.
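A quick sketch of the arithmetic behind that cascade (a hypothetical model, assuming each stage keeps a contiguous top slice of the previous pool and that percentiles are uniform within it):

```python
def keep_top(pool, fraction):
    """Keep the top `fraction` of a pool spanning original-population
    percentiles [lo, hi]; return the new span."""
    lo, hi = pool
    return hi - (hi - lo) * fraction, hi

def average(pool):
    lo, hi = pool
    return (lo + hi) / 2  # midpoint = average percentile under uniformity

pool = (0.0, 1.0)            # elementary school: everyone
print(average(pool))         # 0.50

pool = keep_top(pool, 0.8)   # middle school: bottom 20% tracked off
print(average(pool))         # 0.60

pool = keep_top(pool, 0.5)   # high school: the high track of a 50/50 split
print(average(pool))         # 0.80

pool = keep_top(pool, 0.1)   # college admitting 10% of that pool
print(average(pool))         # 0.98
```

Each stage reproduces the percentiles in the comment; note the 98th-percentile final step works out only if the college's applicant pool is roughly the high track rather than the general population.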

Replies from: Bugmaster
comment by Bugmaster · 2012-10-30T04:43:48.490Z · LW(p) · GW(p)

This is a bit off-topic, but I think that the word "competence" effectively conveys the meaning of, "some combination of conscientiousness and intelligence".

comment by CronoDAS · 2012-10-29T19:56:45.322Z · LW(p) · GW(p)

I was generally lucky with group project partners at school (both high school and college), I guess; I didn't have much explicit conflict, and I never had one actually fall apart. There was one class in which I was in a group of three in which I did 95% of the work, one of the other two people did about 60% of the work, and the third guy basically didn't show up, but I was okay with that. I wrote code that worked, the second guy supported me so I could write that code, and the third guy would have only gotten in the way anyway.

Edit: (Yes, that adds up to more than 100%, because of duplication of effort, inefficiency, correcting each other's mistakes, etc. In other words, it's a joke, along the lines of "First you do the first 80%, then you do the second 80%.")

Replies from: Alicorn
comment by Alicorn · 2012-10-29T20:50:20.896Z · LW(p) · GW(p)

I did 95% of the work, one of the other two people did about 60% of the work

...?

Replies from: Swimmer963, SaidAchmiz
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-10-29T21:04:50.589Z · LW(p) · GW(p)

I interpreted that as "I did 95% of the work that I said I would do, and one of the other partners did 60% of the work that he committed to do, and the third partner didn't do anything." But yeah, if you interpret it as straight-up percentages, it doesn't really add up...

Replies from: satt
comment by satt · 2012-10-29T21:47:59.289Z · LW(p) · GW(p)

I read it as CronoDAS implying his group did 55% more work than it needed to, but your interpretation makes as much sense.

comment by Said Achmiz (SaidAchmiz) · 2012-10-29T21:30:05.279Z · LW(p) · GW(p)

I could've sworn there was a bit in the grandparent that acknowledged this apparent contradiction, and pointed out pair programming as the explanation, but it seems to be gone. That would, in any case, account for the percentages.

Replies from: CronoDAS
comment by CronoDAS · 2012-11-02T23:06:53.882Z · LW(p) · GW(p)

Yeah, I don't know what happened to that paragraph.

comment by Bugmaster · 2012-10-30T04:49:39.942Z · LW(p) · GW(p)

Out of curiosity, am I the only one who has experienced at least some instances of productive group work in high school and college? I am not nearly as smart as most people here, so perhaps that fact played a role, since I actually needed the cooperation of other people in order to get the job done.

comment by gwern · 2012-10-31T21:29:35.216Z · LW(p) · GW(p)

You are affirming the consequent and also overgeneralizing.

I argued that 'some economically valuable uses of AGI are replacing humans' (disproving Szabo's core argument that "AGI can always be outperformed at a specific task by a specialized-AI, therefore, there are no economically valuable uses of AGI").

That is not the same thing as 'all replacements of humans are economically valuable uses of AGI' for which 'non-AGI HFTs replacing humans' serves as a disproof (but so would cars or machines, for that matter).
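Schematically, the quantifier structure at issue (notation illustrative, not from the thread):

```latex
% "some economically valuable uses of AGI replace humans"
\exists x\,\bigl[\mathrm{AGIUse}(x) \wedge \mathrm{ReplacesHumans}(x)\bigr]
\;\not\Rightarrow\;
% "everything that replaces humans is an economically valuable use of AGI"
\forall x\,\bigl[\mathrm{ReplacesHumans}(x) \rightarrow \mathrm{AGIUse}(x)\bigr]
```

A counterexample to the universal (non-AGI HFTs, cars, machines) therefore leaves the existential claim untouched.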

comment by ArisKatsaris · 2012-10-31T13:17:25.177Z · LW(p) · GW(p)

Never heard of mayors and judges and ship captains officiating marriages?

ETA: James Randi of the JREF has also officiated at a wedding.

comment by wedrifid · 2012-10-31T04:32:25.320Z · LW(p) · GW(p)

For instance, this Divia lady you mention and her husband even changed their last name to 'Eden', the name of the earthly paradise in the Bible, and married in a ceremony officiated by Yudkowsky.

A marriage ceremony, officiated by an available elder of some sort. Name changes. Wow, what sort of crazy culture or subculture would do that kind of thing? Oh, right. Most of them, in some shape or form. This was actually a hat tip towards normality, not the reverse. Like celebrating Christmas with family and friends without actually believing in a Christ.

Your ranting is nonsense. Get some perspective.

Replies from: ArisKatsaris, drethelin
comment by ArisKatsaris · 2012-10-31T09:51:04.637Z · LW(p) · GW(p)

V_V's comments do serve as datapoints towards what elements can look cultish to outsiders, even though, as I agree with you, such a judgment would be unfair, since pretty much every community does these things.

Replies from: wedrifid
comment by wedrifid · 2012-10-31T10:12:10.116Z · LW(p) · GW(p)

V_V's comments do serve as datapoints towards what elements can look cultish to outsiders

I do not model V_V as someone whose utterances can be considered representative of outsiders.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-10-31T11:04:46.958Z · LW(p) · GW(p)

V_V's utterances about what looks cultish are generally useless when it comes to talking about ideas: trying to shame us into not having certain ideas just because they look bad is a rather circular and useless argument. (And frankly, "transhumanism", "cryonics", and "AI apocalypse" bring to mind the low status of an SF geek, not the low status of a cult, so V_V's words doubly miss the mark in this respect.)

On the other hand, practices like marriage ceremonies and cohabitations pattern-match more closely, so they're something to be careful about from a Public Relations perspective. But it's not as if I'm sure whether they're a net positive or a net negative all things considered; so consider my words to be hesitant and uncertain, not really sharing V_V's criticism...

Replies from: wedrifid
comment by wedrifid · 2012-10-31T11:20:53.473Z · LW(p) · GW(p)

On the other hand practices like marriage ceremonies and cohabitations pattern-match more, and so it's something to be careful about from a Public Relations perspective.

Pattern matching and public relations are both interesting and important, but using V_V as an outsider datapoint while doing so would produce unreliable results.

comment by drethelin · 2012-10-31T21:28:53.100Z · LW(p) · GW(p)

I have to disagree here. Even if, from the outside view, Christian marriage or whatever is as weird as Yudkowskian marriage, it definitely feels cultish to me, and I'm an atheist. The normal way to get married is NOT by a friend of yours whose teachings you follow.

Replies from: DaFranker, Nornagest
comment by DaFranker · 2012-11-01T20:38:17.959Z · LW(p) · GW(p)

Errh. What is the normal way to get married then, from your view? Mail a letter to the nearest municipal or judicial office?

"Getting married", once shed of all religious connotations and other nasty bits, is a social contract before witnesses published so that: 1) The spouses are more motivated to cooperate and remain at a high level of mutual affection. 2) Individuals not part of the marriage (i.e. everyone else) are aware that these spouses are "together" presumably for a long time and that they should not get in their way and they are not "available".

That's the way I see it / was taught, anyway.

Replies from: drethelin
comment by drethelin · 2012-11-01T23:58:19.815Z · LW(p) · GW(p)

In a church, with two families present, by a priest. Just because it's nonsense doesn't make it not normal.

Replies from: orthonormal
comment by orthonormal · 2012-11-09T05:51:12.452Z · LW(p) · GW(p)

I don't think you're using the right reference class for the question. If we're talking about the set of people who might find Less Wrong interesting, I predict that most of them would find it more weird if two atheists from atheist families got married by a priest than if they got married by the head of an Internet community. (Most normal for that reference class is picking a celebrant who's just a friend, or a Unitarian minister, or a comedian, etc.)

comment by Nornagest · 2012-11-01T20:19:56.584Z · LW(p) · GW(p)

I've got a number of friends in non-SingInst/LW circles who've been married in public ceremonies overseen by friends whom they consider wise, or instrumental in their social groups, or simply good speakers. I don't have any actual data, but in the circles I run in it seems like one of the more popular secular options.

comment by gwern · 2012-10-31T22:22:39.853Z · LW(p) · GW(p)

First, I'd like to ask why you didn't reply directly to my previous comment and instead started an entirely separate top-level comment. I hope your motive wasn't something less than honorable, like hoping that I wouldn't notice and that people would infer I was tacitly admitting I'd been refuted. Hopefully I'm just being paranoid and you were careless about posting your comment or something.

But you didn't provide an argument for AGI being more effective at replacing humans at these tasks rather than more and more specialized AIs.

The claim "AGI will be more effective at replacing humans in using specialized-AIs" was assumed in my argument, and also not criticized by Szabo, who thinks his argument works even granting the existence of such AGI:

Even if there was such a thing as a "general intelligence" the specialized machines would soundly beat it in the marketplace. It would be very far from a close contest.

comment by Abd · 2012-11-01T19:04:12.198Z · LW(p) · GW(p)

Great piece, Shannon. Brings to mind a couple of things.

What you call "agency" is, in Landmartian, "being cause in the matter," being "at cause," "taking a stand," and acting "consistently with that stand."

This is distinguished from being caught in a "racket," defined as a persistent complaint combined with a fixed way of being. Someone caught in a racket does not take responsibility for things as they are, but rather sets up stories that express being a victim of circumstances or others. The generic alternative is to accept responsibility, as a stand, not as a "truth."

That's been oft-misunderstood. I am responsible for, say, the WTC attack, as a stand, not as a fact. If I'm responsible, it means that I can look at my life as missing something that might make a difference, as full of possibilities.

In any case, most people, most of the time, are not at cause, we are simply reacting.

Then, if we actually take responsibility, beyond merely saying a few words, we act in accordance with that, which includes making mistakes, picking ourselves up and acting again, varying behavior as necessary to find a path to fulfillment.

A conversation I've had is "How many people does it take to transform society?"

The answer I've generally come up with is two. It's amazingly difficult to find two. Maybe that's just my racket, but your story shows how two can sometimes find more, if more are required to realize a stand. Two is where it starts. At least one of the two must be willing to be at cause, and able to stand there.

Replies from: NancyLebovitz, Nornagest
comment by NancyLebovitz · 2012-11-03T04:07:20.261Z · LW(p) · GW(p)

I think that being "agenty" includes being good at making the sort of changes you want as well as working on making those changes.

Replies from: Abd
comment by Abd · 2012-11-03T04:33:11.327Z · LW(p) · GW(p)

Okay, it starts with a declaration, with an assumption of responsibility, with taking a stand; but "creating structures for fulfillment", as they are called, is something that is strengthened with practice.

comment by Nornagest · 2012-11-01T20:06:31.090Z · LW(p) · GW(p)

Took me a while to sort out the background for this. I take it your "Landmartian" indicates the parlance of Landmark Education?

Replies from: Abd
comment by Abd · 2012-11-02T00:52:11.724Z · LW(p) · GW(p)

Yes. I made that up, but Landmartians immediately recognize it.

comment by ArisKatsaris · 2012-10-31T22:25:07.571Z · LW(p) · GW(p)

You're citing the Turing Test as a criterion and accusing us of anthropomorphizing AI?

I think that an AI might become intelligent enough to destroy the human species, and still not be able to pass the Turing Test. Same way that we don't need to mimic whales or apes before being able to kill them.

It's not us who are anthropomorphizing AI; it's you who're anthropomorphizing "intelligence that rivals and eventually surpasses the human intelligence both in magnitude and scope".

comment by DaFranker · 2012-10-31T21:29:45.294Z · LW(p) · GW(p)

Your argument is that humans will eventually become the limiting factor in economic systems thus AGIs will be needed to replace them.

Good strawmanning. Very subtle.

gwern: The moment an AGI even slightly outperforms a human at using the specialized-AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.

That's the key part. The specialized High-freq-trading software won't be replaced, but the humans who use that specialized software will eventually be replaced if someone figures out how to make an AGI that can think about all the relevant variables and can be scaled to go faster and better than a human.

comment by gwern · 2012-10-31T21:20:18.546Z · LW(p) · GW(p)

Wow, way to miss the point and not respond to the argument - you know, the stuff that is not in parentheses.

(And anyway, how exactly am I supposed to give an example where AGI use is driven by economic pressures to surpass human performance, when AGI doesn't yet exist?)

comment by DaFranker · 2012-10-31T21:16:36.479Z · LW(p) · GW(p)

So, even though you didn't clearly contest any of the premises or the reasoning, let's assume that the second paragraph is a rebuttal to premises (1:) and/or (2:) of the grandparent.

An AGI is not something a bunch of nerds can cook up in their basement in their spare time.

I contest this premise, and I'm really wondering where you got it. As technology progresses, we've noticed that it gets easier and easier to do stuff that was previously only possible for massive organizations.

Examples include, well, anything involving computers (since computers were first something only massive organizations could possess, until a bunch of nerds cooked one up in their basement), creating new software in the first place, creating videogames, publishing original research, running automated data-miners, creating new hardware gadgets, creating software that emulates hardware devices, validating formal mathematical proofs, running computer simulations...

...I could probably go on for a while, but I'm being warned that this should be enough to point at the right empirical cluster. Basically, we have lots of evidence saying that new-stuff-that-can-only-be-done-by-large-organizations can eventually be done by smaller groups, and not much that sets AGI apart as a particular exception other than the current perceived level of difficulty.

comment by gwern · 2012-10-31T21:07:00.522Z · LW(p) · GW(p)

If an AGI will always be less effective than its contemporary specialized AIs, people will be unwilling to put their money, time and effort into it.

I just pointed out how economic reasoning can justify an AGI which is outperformed at any specific task by a specialized-AI. I'm not even an economist and it's a trivial argument, yet there it is.

Even if one had a formal proof that AGIs must always be outperformed, that still would not show that AGIs will not be worth developing. You need a far more impressive argument covering all economic possibilities; since software & AI techniques are so economically valuable these days, with no sign of interest letting up, handwaving arguments look implausible.

(I would be deeply amused to see a libertarian like Nick Szabo try to do such a thing because it runs so contrary to cherished libertarian beliefs about the value of local knowledge or the uselessness of elites or the weakness of theory, though I know he won't.)

comment by nshepperd · 2012-10-31T13:48:22.503Z · LW(p) · GW(p)

anthropomorphization (the AGI), an exceptionally evil superhuman adversary (UFAI as Satan or the Antichrist)

Oddly, it seems to me that anthropomorphization is what makes people think AGI is perfectly safe.

comment by ArisKatsaris · 2012-10-31T13:08:03.401Z · LW(p) · GW(p)

Some future technology we currently have no idea how to develop will do X ("A miracle occurs").

Yeah, you treat the concept of new technologies (even though we experience new technologies every single year) on the same level as 'miracles' (which we've never experienced at all). I get that.

And I've seen lots of religious people argue thusly: "You believe in 'electrons' and 'quarks' that you've never seen with your own eyes, and I believe in angels and demons that I've never seen either. Therefore your 'scientific' ideas are just as faith-inspired as mine."

If we're to throw guilt-by-perceived-association around, then I think that your criticism of LW ideas is typically religious. You're following the typical argument of the religious, where you try to claim all belief in things unseen is equally reasonable, all expectations of the future are equally reasonable, and hence "see, you're also a religion after all".

I think I'll have to revise my position: you are really not saying anything worth hearing.

comment by nshepperd · 2012-11-05T02:44:31.798Z · LW(p) · GW(p)

Maybe I'm attributing malice where a more likely explanation exists, but a policy which seems deliberately designed to incentivize groupthink appears to be more consistent with a cult than with "a community blog devoted to refining the art of human rationality".

It's supposed to prevent people from feeding the trolls.

comment by gwern · 2012-11-02T01:37:58.391Z · LW(p) · GW(p)

The system says: "Replies to downvoted comments are discouraged. You don't have the requisite 5 Karma points to proceed."

I don't follow. My comment http://lesswrong.com/lw/f53/now_i_appreciate_agency/7q56 was not at any point in the negative, much less at the -5 or whatever threshold would cause the new karma penalty thing to kick in.

Still, that doesn't imply that AGI will be economically viable unless you show that humans will still be the limiting factor after every human skill that can be transferred to a specialized AI has been transferred.

If every human skill has been transferred, including that of employing or combining specialized-AIs, then in what sense do the groups of specialized-AIs not then comprise an AGI?

This argument would seem to reduce you to confronting a dilemma: if every human skill has been transferred to specialized-AIs, then a complex of specialized-AIs by definition now forms an AGI which outperforms all humans; if not every human skill has been transferred, such as employing specialized-AIs, then there is the very large economic niche for AGIs which I have identified with my Amdahl's law argument. So either there exist AGI which outperform all humans, or there exists economic pressure for AGI.
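For reference, the Amdahl's law shape of that argument (the formula is standard; reading the serial fraction as "the human in the loop" is a gloss on the comment, not a quote from it):

```latex
% Automate a fraction p of the work with speedup s; the total speedup is
% capped by the un-automated (human) remainder:
S(s) = \frac{1}{(1 - p) + p/s},
\qquad
\lim_{s \to \infty} S(s) = \frac{1}{1 - p}
```

As specialized-AIs drive s up, the residual human fraction (1 - p) comes to dominate throughput, which is the economic pressure toward automating the human's own remaining role.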

Replies from: Alicorn
comment by Alicorn · 2012-11-02T02:03:27.546Z · LW(p) · GW(p)

I don't follow. My comment http://lesswrong.com/lw/f53/now_i_appreciate_agency/7q56 was not at any point in the negative, much less at the -5 or whatever threshold would cause the new karma penalty thing to kick in.

I believe the penalty now applies if any comment upstream has the requisite negative value.
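A minimal sketch of the rule as described (the class, names, and threshold are hypothetical; this is a reading of the observed behavior, not the actual LessWrong source):

```python
# Sketch of the reply-penalty rule as described: the penalty is charged when
# the target comment *or any ancestor* is at or below the threshold, which is
# why it can strike even when replying to a positive-karma comment.

KARMA_THRESHOLD = -5  # assumed; the thread says "-5 or whatever"
REPLY_PENALTY = 5     # karma cost charged to the replier

class Comment:
    def __init__(self, karma, parent=None):
        self.karma = karma
        self.parent = parent

def reply_penalty(target):
    """Walk up the thread; charge the penalty if any comment on the path
    to the root is at or below the threshold."""
    node = target
    while node is not None:
        if node.karma <= KARMA_THRESHOLD:
            return REPLY_PENALTY
        node = node.parent
    return 0

# gwern's case: a positive comment under a heavily downvoted ancestor.
root = Comment(karma=-7)
child = Comment(karma=+3, parent=root)
assert reply_penalty(child) == REPLY_PENALTY  # the penalty applies anyway
```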

Replies from: gwern, chaosmosis
comment by gwern · 2012-11-02T02:12:42.515Z · LW(p) · GW(p)

Oh. I thought it was just for replying to the comment which was negative. I guess this is what Wedrifid or whoever it was meant when they pointed out that the feature could strike in unexpected places...

Replies from: MugaSofer
comment by MugaSofer · 2012-11-09T10:19:09.637Z · LW(p) · GW(p)

Indeed. It can be very annoying to reply to a positive-karma comment and discover you will be charged 5 karma for the privilege.

comment by chaosmosis · 2012-11-02T02:14:00.381Z · LW(p) · GW(p)

I want someone to undo this part, if not the whole thing. Discouraging people from replying to people who are unpopular or wrong is bad. Preventing new users who are perceived as wrong from defending themselves is extremely bad.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-09T10:20:53.552Z · LW(p) · GW(p)

If you don't want to discourage replies to downvoted comments, then you want to undo the whole thing. That's what this feature is for. It shouldn't be doing anything else, and if it is then that's a mistake that should be corrected.

Replies from: chaosmosis
comment by chaosmosis · 2012-11-09T18:38:51.131Z · LW(p) · GW(p)

Regardless of whether or not we should discourage replies to downvoted comments, we should avoid discouraging replies to the replies to downvoted comments. People who are downvoted should not be discouraged from speaking up about their ideas, even if those ideas are bad. That's the way that those people go about improving.

Additionally, if they're discouraged from defending their ideas in more detail or from addressing criticisms, but they actually happened to be correct or at least to make a good point, then discouraging them is an extremely bad idea.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-09T19:23:34.384Z · LW(p) · GW(p)

Regardless of whether or not we should discourage replies to downvoted comments, we should avoid discouraging replies to the replies to downvoted comments.

Oh, agreed.

comment by gwern · 2012-11-04T23:20:43.321Z · LW(p) · GW(p)

A group of specialized AIs doesn't need to have shared goals or shared representations of the world. A group of interacting specialized AIs would certainly be a complex system that will likely exhibit unanticipated behavior, but this doesn't mean that it will be an agent (an anthropic model created in economics to model the behavior of humans).

I don't think this is a meaningful reply, or perhaps it's just question-begging.

If having a coherent goal is the point of the human in the loop, then you are quietly ignoring the given hypothetical that 'every human skill has been transferred', and your points are irrelevant. If having a coherent goal is not what the human is supposed to be doing, well, every agent can be considered to 'exhibit unanticipated behavior' from the point of view of its constituent parts (what human behavior would you anticipate from a single hair cell?), and it doesn't matter what the behavior of the complex system is - just that there is behavior. We can even layer on evolutionary concerns here: these complex systems will be selected upon, and only the ones that act like agents will survive and spread!

Assuming that AI technology will necessarily lead to a super-intelligent but essentially human-like mind is anthropomorphization in the same sense that the gods of traditional religions are anthropomorphizations of complex and poorly understood (at the time) phenomena such as the weather or biological cycles or ecology.

Yeah, whatever.

Arguing against 'necessarily leading to a super-intelligent but essentially human-like mind' is a big part of Eliezer and LW's AI paradigm in general going back to the earliest writings & motivation for SIAI & LW, one of our perennial criticisms of mainstream SF, AI writers, and 'machine ethics' writers in particular, and a key reason for the perpetual interest by LWers in unusual models of intelligence like AIXI or in exotic kinds of decision theories.

If you've failed to realize this so profoundly that you can seriously write the above - accusing LW of naive religious-style anthropomorphizing! - all I can conclude is that you either are very dense or have not read much material.

comment by ArisKatsaris · 2012-10-31T21:20:35.640Z · LW(p) · GW(p)

"but still no AGI or commercial nuclear fusion, despite these having constantly been predicted to be in the next 25 years for the last 60 years.

Please clarify this plainly for me: are you saying these technologies will NEVER be developed? Not in 25 years, nor in 100 years, nor in 500 years, nor in 10,000 years?

Is your whole disagreement a matter of timescales -- whether it is likely to happen within our lifetimes or not?

Because if so, then there are a lot of us here who likewise don't expect to see AGI in our lifetimes.

If you're not saying "It will NEVER happen", then please specify a date by which you'd assign probability > 50% to these technologies having been developed.

But until then, again your whole argument seems to be "it hasn't happened yet, so it will never happen."

comment by DaFranker · 2012-10-31T20:37:10.107Z · LW(p) · GW(p)

Looks like you've completely missed the point of SIAI and massively misunderstand AI theory.

It seems to me that you don't have even remotely the right order of magnitude of an idea of just how immense the laziness of some programmers can get. And the lazier programmers get, the more they try to write programs that do all their own work for them.

The ultimate achievement of the lazy programmer is to write a one-time program that will anticipate future needs to write programs, and write programs that can better anticipate such future needs and thus better write programs that meet the need, ad infinitum, without any further intervention from said programmer.

SIAI actually agrees that the above is probably not the most economically sensible thing to do and that it is not what most AIs, or even AGIs, developed in the near future will look like. However, SIAI is also aware that some people will, despite this, still want to be the ultimate lazy programmer and write the ultimate recursively self-modifying AI. No reasonable amount of reasonable arguments will change this fact.

Therefore, something must be done to prevent the AIs they will create from exterminating us. SIAI, in no small part through the work of Yudkowsky, has concluded that the best method of achieving this is currently through FAI research, and that eventually the only solution might be to make a Friendly self-modifying AGI before anyone else makes a non-Friendly one, so that the FAI has an unfair advantage and can outsmart any future non-Friendly AGIs.

If you want to avoid being logically rude, you will contest either the premises (1: Some people will attempt to make the ultimate AGI. 2: One of them will eventually succeed. 3: Ultimate AGIs are Accidentally Unfriendly by default.) or some element in the chain of reasoning above. If you fail to do so, then the grandparent comment is understating how much you're missing the point and sidetracking the discussion.