Open Thread, March 6 - March 12, 2017

post by Elo · 2017-03-06T05:29:40.540Z · LW · GW · Legacy · 156 comments

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

156 comments

Comments sorted by top scores.

comment by [deleted] · 2017-03-09T17:27:15.593Z · LW(p) · GW(p)

27 hours to PhD thesis submission. Science is hard.

EDIT: first all-nighter in a long time.

Replies from: username2
comment by username2 · 2017-03-10T15:59:49.478Z · LW(p) · GW(p)

I'm rooting for you!

Replies from: None
comment by [deleted] · 2017-03-12T02:19:35.873Z · LW(p) · GW(p)

It is in! Now 2 days to put together final edits for my committee, 2 weeks to seminar/defense, and then a week or so for final revisions... interesting to see 6 years in 133 pages.

comment by Viliam · 2017-03-06T16:11:59.674Z · LW(p) · GW(p)

This feels quite important, especially in the context of "dying Less Wrong":

Bottom line: I know of only one minimum viable architecture for turning individuals into a superorganism — for making sure it’s in everyone’s self-interest to work together. I call it the Prestige Economy, and it runs on a deceptively simple rule:

Individuals should grant social status to others for advancing the superorganism’s goals.

-- source

Replies from: Lumifer, MrMind, tristanm, ChristianKl, WhySpace_duplicate0.9261692129075527
comment by Lumifer · 2017-03-06T16:40:25.430Z · LW(p) · GW(p)

"Individuals should" is not an architecture.

A great many things become possible conditional on "everyone should do as I say".

Replies from: Viliam
comment by Viliam · 2017-03-07T07:40:48.971Z · LW(p) · GW(p)

Sure, but assuming you have people willing to contribute to "many things", it can help to say explicitly what behavior is helpful and what is harmful. Especially among nerds.

comment by MrMind · 2017-03-06T16:50:05.251Z · LW(p) · GW(p)

How is this different from the karma system?

Replies from: Viliam, username2
comment by Viliam · 2017-03-07T07:38:53.892Z · LW(p) · GW(p)

Karma system itself doesn't specify what you get most karma for. It could be "for advancing the superorganism’s goals", but it also could be for "making snarky comments about those who try to advance the superorganism’s goals", or simply for "posting a lot of trivial stuff".

Also, social status is perceived not only by how much karma one has, but also how they are treated (replied to, talked about) by other people in comments.

Replies from: MrMind
comment by MrMind · 2017-03-07T08:09:10.510Z · LW(p) · GW(p)

Karma system itself doesn't specify what you get most karma for.

Indeed, but how would you enforce a system of points granted if and only if someone advances the goal of this site?

Also, social status is perceived not only by how much karma one has, but also how they are treated

This I think is unavoidable in a human society.

Replies from: Viliam, philh
comment by Viliam · 2017-03-07T15:02:15.479Z · LW(p) · GW(p)

Sometimes what would be easier to do in real life is more difficult to do online, because the functionality is different or missing. For example, the real-life equivalent of "karma" -- displays of admiration or friendship -- is public.

Assuming voluntary association, one way to "enforce a given behavior" is to only associate with people who agree to behave like that. So if you want to have a group that does X, you associate with people who give social status to people working on X (as opposed to e.g. people who talk a lot about X, but give low status to people actually working on X).

Online... I guess banning people for not expressing proper attitudes would be perceived by many as very problematic (even if that's what most of us more or less do in real life), so the realistic solution seems to be an invite-only club. Or a two-tiered system, where the outer forum is open to everyone, and only people with the desired behavior get an invitation to the inner forum.

Replies from: Lumifer
comment by Lumifer · 2017-03-07T15:47:27.847Z · LW(p) · GW(p)

So if you want to have a group that does X, you associate with people who give social status to people working on X

You are glossing over the practice of giving social status. In real life, as you said, it is basically public displays of admiration (friendship is a bit different). So you are going to build a community with the inclusion criterion of being willing to publicly admire a particular set of people. Presumably if you stop admiring, you are no longer welcome in the community. That doesn't strike me as a way to build a healthy community -- there are obvious failure modes looming.

Replies from: gjm
comment by gjm · 2017-03-07T16:08:35.709Z · LW(p) · GW(p)

with the inclusion criterion of being willing to publicly admire a particular set of people

That isn't quite what Viliam is proposing. He says (emphasis mine):

you associate with people who give social status to people working on X

so what membership in this community commits you to is not admiring specific people but admiring people who do specific things, whoever those people are.

This still seems kinda dangerous, but I don't think it has the same failure modes.

Replies from: Lumifer
comment by Lumifer · 2017-03-07T17:56:09.933Z · LW(p) · GW(p)

commits you to is not admiring specific people but admiring people who do specific things, whoever those people are.

I suspect that distinction is not going to be as clear-cut when you are dealing with a bunch of actual humans.

"Bob is an asshole but he did the specific thing X so I'm supposed to publicly praise him? I'll pass."

"Alice is such a great person, so what that she skipped the specific thing X this time, she's the best and I'm going to sing hosannas to her really loudly".

comment by philh · 2017-03-07T10:19:41.565Z · LW(p) · GW(p)

Indeed, but how would you enforce a system of points granted if and only if someone advances the goal of this site?

You seem to have hit on (one reason) why the prestige economy is different from the karma system.

Replies from: MrMind
comment by MrMind · 2017-03-09T15:30:14.184Z · LW(p) · GW(p)

No, my question was more abstract, in the sense of "can you ever design a system that grants a prestige economy that doesn't devolve into a karma system?"

comment by username2 · 2017-03-07T18:20:03.446Z · LW(p) · GW(p)

Karma is not granting social status; it is community moderation of discussion. Imagine if the aggregate karma numbers were removed from display (arguably something that should be done).

Replies from: MrMind
comment by MrMind · 2017-03-09T15:28:02.124Z · LW(p) · GW(p)

Karma is not granting social status

Karma was not designed to grant social status, but people look at it as if it does.

comment by tristanm · 2017-03-07T22:57:44.045Z · LW(p) · GW(p)

What do you think is the best example of this kind of principle at work? My first guess would be academia, but I can also think of a dozen reasons why the prestige system in academia is flawed.

My intuition is that there is really no shining example of the prestige economy in the real world - but whether this has to do with the difficulty of implementation, or a flaw with the idea itself, I'm not sure.

Replies from: Viliam
comment by Viliam · 2017-03-08T09:35:36.291Z · LW(p) · GW(p)

There can be many flaws in implementation, and we probably won't find a perfect one, but academia seems like a decent example. It has a goal (research and education), and it assigns status (academic functions) to people who contribute to that goal, and the status comes with certain benefits (salary) and powers (over students) which means other people will recognize it as a status.

If you spend a lot of time hanging out with professors and learn all their buzzwords, but do no research or teaching, you are not going to become a professor, i.e. you are not able to out-status them within academia.

Another example would be meritocratic open-source projects, where people are respected according to their contributions to the project.

Or perhaps a sales department, where people are rewarded depending on their sales.

Replies from: Lumifer
comment by Lumifer · 2017-03-08T15:34:57.910Z · LW(p) · GW(p)

It has a goal (research and education), and it assigns status (academic functions) to people who contribute to that goal, and the status comes with certain benefits (salary) and powers (over students) which means other people will recognize it as a status.

How is that different from pretty much any job? Let's take ditch-digging.

It has a goal (digging ditches), and it assigns status (becoming a foreman, then a manager) to people who contribute to that goal, and the status comes with certain benefits (salary) and powers (over workers) which means other people will recognize it as a status.

If you spend a lot of time hanging out with ditch-diggers and learn all their buzzwords, but you will do no digging of ditches, you are not going to become a ditch-digger, i.e. you are not able to out-status them within the ditch-digging world.

Replies from: gjm, Viliam
comment by gjm · 2017-03-08T16:20:04.584Z · LW(p) · GW(p)

I don't think anyone was claiming it doesn't apply to pretty much any job. (In the original context, the point was that it does apply to pretty much any job, and to a host of other things besides.)

It will apply better to some jobs than others. It needs

  • the people doing that job to form a community in which social status is actually meaningful
  • mere membership of the community to be seen by its members as conferring status
  • doing the job effectively to lead to promotion to a position in which one can do it better
  • (preferably) status within that community to have some currency elsewhere.

Those are all true for academia. There is definitely such a thing as the academic community, its members relate to one another socially as well as professionally, academics tend to think highly of themselves as a group, promotion means better opportunities for furthering academic research (oneself or by organizing subordinates and taking some credit for their work), and -- at least in the nice middle-class circles in which academics tend to move -- there's some broader cachet to being, say, a professor.

I think they're less true for ditch-digging. So far as I know, there isn't the same sort of widely-spread confraternity of ditch-diggers that there is of academics. I've not heard that ditch-diggers see themselves as having higher status than non-ditch-diggers. I think a lot of ditch-diggers are casual labourers with no real prospects of promotion, though I confess I don't exactly have my finger on the pulse of ditch-digging career progression. And there are few social circles in which introducing yourself as a ditch-digger will make people look up to you.

So yeah, this structure applies to things other than academia, but it does seem like it applies better to academia than to the other example you offered.

Replies from: Lumifer
comment by Lumifer · 2017-03-08T16:33:53.588Z · LW(p) · GW(p)

We could swing in the other direction and consider hedge fund managers instead of ditch-diggers if you worry that ditch-diggers are too low status :-)

However I think the issue is a bit different. The original question was how to build a community successfully driven by status. Once we switch to jobs we are talking about money and power -- pure status becomes secondary.

Besides, academia seems to me to be a poor example. Its parts where advancement doesn't give you much in the way of money and power -- that is, social sciences -- became quite dysfunctional and the chasing of status leads to bad things like an ever-growing pile of shit research being published as "science".

Replies from: gjm
comment by gjm · 2017-03-08T23:07:54.762Z · LW(p) · GW(p)

Once we switch to jobs we are talking about money and power -- pure status becomes secondary.

Possibly, though I suspect status is more important relative to money in motivating employees than is commonly thought.

Its parts where advancement doesn't give you much in the way of money and power -- that is, social sciences [...]

I'm not sure you get a lot more money and power from advancement in other areas of academia. (Unless you count coaching the football team, in US universities. Plenty of money there.) It seems to me that there are better explanations if the hard sciences are less dysfunctional than the soft.

Whether things like

an ever-growing pile of shit research being published as "science"

are evidence that the academic community isn't driven successfully (whether by status or by something else) depends on what you take to be the actual goals of academia as an institution. I agree that if we take those goals to be research that actually uncovers truths, and education that actually improves the minds and the lives of the educated, it's debatable how well academia does.

Replies from: Lumifer
comment by Lumifer · 2017-03-09T15:56:22.037Z · LW(p) · GW(p)

status is more important

I think power is more important than commonly thought, but I accept that it's not easy to disentangle it from status.

I'm not sure you get a lot more money and power from advancement in other areas of academia.

You do in some, notably law and business. And yes, I'm not saying that's the main explanation why soft sciences do so much worse than hard ones.

comment by Viliam · 2017-03-09T09:24:43.659Z · LW(p) · GW(p)

I have no experience with professional ditch-digging, so I can't comment on that. My job experience is mostly software development, and in many companies the people who actually create the software are near the bottom of the status ladder. (Which I guess is okay, because the actual goal is making money; creating quality software is just an instrumental goal which can be sacrificed for the higher goals at any moment.)

comment by ChristianKl · 2017-03-07T08:32:00.956Z · LW(p) · GW(p)

I'm not sure that "social status" on an online forum like this is an important currency.

comment by WhySpace_duplicate0.9261692129075527 · 2017-03-06T18:21:43.658Z · LW(p) · GW(p)

Awesome link, and a fantastic way of thinking about how human institutions/movements/subcultures work in the abstract.

I'm not sure the quote conveys the full force of the argument out of that context though, so I recommend reading the full thing if the quote doesn't ring true with you (or even if it does).

Replies from: Elo
comment by Elo · 2017-03-06T18:41:41.068Z · LW(p) · GW(p)

Lesswrong doesn't celebrate heroes much. I think that's on purpose though...

Replies from: Viliam, ChristianKl, WhySpace_duplicate0.9261692129075527
comment by Viliam · 2017-03-07T07:41:47.167Z · LW(p) · GW(p)

Yes, but...

comment by ChristianKl · 2017-03-07T08:21:58.203Z · LW(p) · GW(p)

Lesswrong doesn't celebrate heroes much. I think that's on purpose though...

I feel like LW.com has the problem but our local LW Berlin community doesn't.

comment by WhySpace_duplicate0.9261692129075527 · 2017-03-06T19:36:49.818Z · LW(p) · GW(p)

True. Maybe we could still celebrate our minor celebrities more, along with individual good work, to avoid orbiting too much around any one person. I don't know what the optimum incentive gradient is between small steps and huge accomplishments. However, I suspect that on the margin more positive reinforcement is better along the entire length, at least for getting more content.

(There are also benefits to adversarial review and what not, but I think we're already plenty good at nitpicking, so positive reinforcement is what needs the most attention. It could even help generate more long thoughtful counterarguments, and so help with better adversarial review, improving the dialectic.)

Replies from: Viliam
comment by Viliam · 2017-03-07T07:49:58.082Z · LW(p) · GW(p)

I think we're already plenty good at nitpicking

This. Different people need different advice. People prone to worship and groupthink need to be told about the dangers of following the herd. People prone to nitpicking and contrarianism need to be told about how much power they lose by being unable to cooperate.

Unfortunately, in real life most people will choose exactly the opposite message -- the groupthinkers will remind themselves of the dangers of disagreement, and the nitpickers will remind themselves of the dangers of agreement.

comment by I_D_Sparse · 2017-03-09T05:46:56.134Z · LW(p) · GW(p)

I must admit to some amount of silliness – the first thought I had upon stumbling onto LessWrong, some time ago, was: “wait, if probability does not exist in the territory, and we want to optimize the map to fit the territory, then shouldn’t we construct non-probabilistic maps?” Indeed, if we actually wanted our map to fit the territory, then we would not allow it to contain uncertainty – better some small chance of having the right map, than no chance, right? Of course, in actuality, we don’t believe that (p with x probability) with probability 1. We do not distribute our probability-mass over actual states of reality, but rather, over models of reality; over maps, if you will! I find it helpful to visualize two levels of belief: on the first level, we have an infinite number of non-probabilistic maps, one of which is entirely correct and approximates the territory as well as a map possibly can. On the second level, we have a meta-map, which is the one we update; it consists of probability distributions over the level-one maps. What are we actually optimizing the level-two map for, though? I find it misleading to talk of “fitting the territory”; after all, our goal is to keep a meta-map that best reflects the state of the data we have access to. We alter our beliefs based (hopefully!) on evidence, knowing full well that this will not lead us to a perfect picture of reality, and that a probabilistic map can never reflect the territory.

Replies from: Houshalter, Viliam, WhySpace_duplicate0.9261692129075527
comment by Houshalter · 2017-03-09T21:09:18.695Z · LW(p) · GW(p)

I think a concrete example is good for explaining this concept. Imagine you flip a coin and then put your hand over it before looking. The state of the coin is already fixed on one value. There is no probability or randomness involved in the real world now. The uncertainty of its value is entirely in your head.
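The two-level picture can be sketched in code, a minimal illustration only: the level-one "maps" are the two definite states the hidden coin could be in, and the level-two meta-map is a credence distribution over them, updated by Bayes' theorem. The 0.8 and 0.3 reliability numbers for the "glimpse" are made up for the example.

```python
# Level one: candidate non-probabilistic maps ("the coin IS heads" / "the coin IS tails").
# Level two: our credences over those maps, which is what actually gets updated.

def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses, given P(evidence | hypothesis)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Before peeking: no information, so 50/50 over the two maps.
prior = {"heads": 0.5, "tails": 0.5}

# A brief, unreliable glimpse reports "heads"; suppose it would say "heads"
# with probability 0.8 if the coin is heads, and 0.3 if it is tails
# (hypothetical reliabilities, chosen for illustration).
posterior = bayes_update(prior, {"heads": 0.8, "tails": 0.3})

print(posterior)  # credence in "heads" rises to about 0.727
```

Nothing about the coin changed; only the meta-map did.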

comment by Viliam · 2017-03-09T09:28:54.666Z · LW(p) · GW(p)

Sure; including probability in the map means admitting that it is a map (or a meta-map as you called it).

comment by WhySpace_duplicate0.9261692129075527 · 2017-03-10T06:11:38.726Z · LW(p) · GW(p)

I rather like this way of thinking. Clever intuition pump.

What are we actually optimizing the level-two map for, though?

Hmmm, I guess we're optimizing our meta-map to produce accurate maps. It's mental cartography; I like that name for it.

So, Occam's Razor and formal logic are great tools of philosophical cartographers. Scientists sometimes need a sharper instrument, so they crafted Solomonoff induction and Bayes' theorem.

Formal logic is a special case of Bayesian updating, where only p=0 and p=1 values are allowed. There are third alternatives, though. Instead of binary Boolean logic, where everything must be true or false, it might be useful to use a third value for "undefined". This is three-valued logic, or more informally, Logical Positivism. You can add more and more values, and assign them to whatever you like. At the extreme is Fuzzy Logic, where statements can have any truth value between 0 and 1. Apparently there's also something which Bayes is just a special case of, but I can't recall the name.
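The "Boolean logic as a special case" point can be made concrete with one common formulation of fuzzy connectives (min/max/complement); this is an editor's sketch of that standard construction, not something from the comment itself.

```python
# Fuzzy connectives in the min/max style: truth values live in [0, 1].
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1 - a

# Restricted to the values {0, 1}, these reproduce the classical truth tables:
for a in (0, 1):
    for b in (0, 1):
        assert fuzzy_and(a, b) == (a and b)
        assert fuzzy_or(a, b) == (a or b)

# Intermediate values behave as graded truth:
print(fuzzy_and(0.7, 0.4))  # prints 0.4
```

So binary logic drops out when you forbid every truth value except 0 and 1, in the same way that p=0/p=1 beliefs are the rigid special case of probabilistic ones.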

Of all these possible mental cartography tools though, Bayes seems to be the most versatile. I'm only dimly aware of the ones I mentioned, and probably explained them a little wrong. Anyone care to share thoughts on these, or share others they may know? Has anyone tried to build a complete ontology out of them the way Eliezer did with Bayes? Are there other strong metaphysical theories from philosophy which don't have a formal mathematical corollary (yet)?

comment by Bound_up · 2017-03-08T13:42:02.112Z · LW(p) · GW(p)

I was thinking of how we could decrease the barriers to presenting ideas on LW.

Personally, when I think about writing something up here, I feel an unusually high need to make it high quality. That has the obvious benefit of encouraging people to make better stuff, but I've also felt it play out such that I just don't bother presenting.

How about a weekly thread for people to present 100-word half-baked, half-polished ideas explicitly for the purpose of seeing if people would like there to be something more substantial on the subject submitted to LW discussion? The discussion post can mention that this idea has been "vetted" by the Unpolished Ideas Thread, signaling both that

  1. The writer did not expect this to necessarily be of universal appeal (modesty)

  2. If it doesn't appeal to you, fine, but it does to some people, so don't be too quick to censure it (an invitation to modesty)

As in many things in life, allowing people to ease into something may greatly increase the chance that they'll do it at all, and allowing for these specific signals helps quell the social fears that bar presenting ideas on LW.

This idea has a lot of overlap with the open thread. We could just edit the open thread to perform this function, but the open thread is for more than that, really, and the more explicit we can make the signaling, the better, I think.

Plus, this helps fulfill the idea in Hufflepuff LW Projects of allowing for a place for 100-word insights to be presented, adding to that an invitation for people to express if they'd like to hear more on the insight.

Thoughts?

Replies from: ChristianKl, MrMind, MaryCh
comment by ChristianKl · 2017-03-08T15:27:00.801Z · LW(p) · GW(p)

I don't think there's anything wrong with having an additional thread like this. But there's no need to commit to a weekly schedule from the beginning. Simply open one and then see how it goes.

MVPs.

comment by MrMind · 2017-03-09T15:18:33.294Z · LW(p) · GW(p)

How about a [stub] tag in the title?

Replies from: Bound_up
comment by Bound_up · 2017-03-09T15:50:30.620Z · LW(p) · GW(p)

What is that, exactly?

Replies from: MrMind
comment by MrMind · 2017-03-10T07:57:50.858Z · LW(p) · GW(p)

Simply prefix the word [stub] to the title of the article, and add the word 'stub' to the tags. The code doesn't allow for formal tags in the title, so it's a custom here that whatever sits at the beginning of the title and between square brackets is metadata.
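The convention MrMind describes (leading square-bracketed text treated as informal metadata) is easy to parse mechanically; here is a hypothetical helper to that effect, not actual LW site code.

```python
import re

def split_title_metadata(title):
    """Split leading [bracketed] markers off a title, returning (tags, rest)."""
    tags = []
    rest = title
    while True:
        match = re.match(r"\s*\[([^\]]+)\]\s*", rest)
        if not match:
            break
        tags.append(match.group(1).lower())
        rest = rest[match.end():]
    return tags, rest

print(split_title_metadata("[stub] My half-baked idea"))
# → (['stub'], 'My half-baked idea')
```

Anything before the first non-bracketed character becomes a tag; the remainder is the display title.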

comment by MaryCh · 2017-03-08T13:49:37.547Z · LW(p) · GW(p)

...maybe bi-weekly?

comment by cousin_it · 2017-03-12T20:31:34.311Z · LW(p) · GW(p)

Quixey has been shut down.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2017-03-12T22:42:18.436Z · LW(p) · GW(p)

Sorry to hear that, I know a lot of LW-adjacent people were involved.

Is there a postmortem discussion or blog post anywhere?

comment by I_D_Sparse · 2017-03-09T01:40:44.695Z · LW(p) · GW(p)

I wrote an article, but was unable to submit it to discussion, despite trying several times. It only shows up in my drafts. Why is this, and how do I post it publicly? Sorry, I'm new here, at least so far as having an account goes - I've been a lurker for quite some time and have read the sequences.

Replies from: Elo
comment by Elo · 2017-03-09T02:05:50.885Z · LW(p) · GW(p)

Below the edit area is a drop down menu where you can select "discussion", you may need 10 karma to post it.

Replies from: I_D_Sparse
comment by I_D_Sparse · 2017-03-09T02:16:57.729Z · LW(p) · GW(p)

Ah, thanks. Uh, this may be a stupid question, but how do I upvote?

Replies from: Elo
comment by Elo · 2017-03-09T03:27:51.835Z · LW(p) · GW(p)

At the bottom left of each post or comment is a thumb up button

Replies from: I_D_Sparse
comment by I_D_Sparse · 2017-03-09T05:23:21.362Z · LW(p) · GW(p)

I don't see it... do you need a certain amount of karma to vote?

Replies from: g_pepper, Elo
comment by g_pepper · 2017-03-09T05:55:19.198Z · LW(p) · GW(p)

Yes, my understanding is that you need a certain number of karma points to vote. I think the number is 10, but I am not certain of this.

comment by Elo · 2017-03-09T05:48:15.051Z · LW(p) · GW(p)

Have you verified your email address?

Replies from: I_D_Sparse
comment by I_D_Sparse · 2017-03-09T05:48:59.187Z · LW(p) · GW(p)

Yes.

Replies from: Viliam
comment by Viliam · 2017-03-09T09:30:34.041Z · LW(p) · GW(p)

You have 10 karma now; act quickly! :D :D :D

comment by Bound_up · 2017-03-08T13:51:38.963Z · LW(p) · GW(p)

TAPs (Trigger Action Plans) have become increasingly ubiquitous in the LW consciousness (my impression).

Does anybody have some cool TAPs they'd like to share? Should we have a repository of good ones?

I've set up a few:

IF back hurts, THEN adjust posture (right now, that's mostly focusing on the angle of my hands as they swing and the angle of my spine to my hips)

IF notice procrastination/have the "I should really do XYZ...." feeling, THEN take five mindful breaths.

Replies from: MrMind, Elo
comment by MrMind · 2017-03-09T15:14:56.482Z · LW(p) · GW(p)

IF acquiring a new belief, THEN try to destroy it immediately.
An example from yesterday: "wow, these TRX suspension trainers look really cool, they would allow me to do better pull-ups", followed immediately by "I should read negative reviews of TRX right now."

comment by Elo · 2017-03-08T21:37:27.268Z · LW(p) · GW(p)

If organising some event, put it in my diary now. (if people say, "we should catch up")

When I get home I put my laptop on my desk and plug it in.

comment by MaryCh · 2017-03-06T10:49:36.313Z · LW(p) · GW(p)

Do people study effects of drugs (that wouldn't lead to crippling or fatal outcomes) on healthy people? (Stupid question, just, the latest post on SSC, on pharmacogenomics, made me wonder about it. Could be nice to have a baseline.)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2017-03-06T15:05:56.950Z · LW(p) · GW(p)

Phase I clinical trials exist for this purpose. The objective of Phase I trials is to establish safety, dosage, and side-effects of drugs in human subjects, and to observe their proposed mechanism of action if relevant.

It's rare for Phase II and III trials (which have clinically-relevant endpoints) to be carried out on healthy subjects. Part of this is ethical considerations, but clinical trials are also extremely expensive to carry out, and there's not much payoff in learning whether your drug has some specific effect on healthy subjects.

comment by DataPacRat · 2017-03-06T06:36:21.913Z · LW(p) · GW(p)

Writing Scifi: Seeking Help with a Founding Charter

I'm trying to figure out which details I need to keep in mind for the founding charter of a particular group in a science-fiction story I'm writing.

The sci-fi bit: The group is made up of copies of a single person. (AIs based on a scan of a human brain, to be precise.)

For example, two copies may have an honest disagreement about how to interpret the terms of an agreement, so having previously arranged for a full-fledged dispute-resolution mechanism would be to the benefit of all the copies. As would guidelines for what to do if a copy refuses to accept the results of the dispute-resolution, preliminary standards to decide what still counts as a copy in case it becomes possible to edit as well as copy, an amendment process to improve the charter as the copies learn more about organizational science, and so on. The charter would likely include a preamble with a statement of purpose resembling, "to maximize my future selves' ability to pursue the fulfillment of their values over the long term".

The original copy wanted to be prepared for a wide variety of situations, including a copy finding itself seemingly all alone in the universe, or with a few other copies, or lots and lots; which may be running at similar, much faster, or much slower speeds; with none, a few, or lots and lots of other people and AIs around; and with or without enough resources to make more copies. So the charter would need to be able to function as the constitution of a nation-state; or of a private non-profit co-op company; or as the practical guidelines for a subculture embedded within a variety of larger governments (à la the Amish and Mennonites, or Orthodox Jews, or Romany). Ideally, I'd like to be able to include the actual charter in an appendix, and have people who understand the topic read it, nod, and say, "Yep, that'd do to start with."

At the moment, I'm reading up on startup companies, focusing on how they transition from a small team where everyone does what seems to need doing into more organized hierarchies with defined channels of communication. But I'm sure there are important details I'm not thinking of, so I'm posting this to ask for suggestions, ideas, and other comments.

So: What can you, yourself, suggest about writing such a charter; and where else can I learn more about authoring such texts?

Thank you for your time.

Replies from: MrMind, moridinamael, TimS
comment by MrMind · 2017-03-06T10:06:08.944Z · LW(p) · GW(p)

Are the personalities of the sub-copies allowed to evolve on their own? If so, given enough time there would be very little difference between a society of people descended from the same source and a society of individuals born from different parents, so you would require no special treatment of the subject.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-06T10:21:54.833Z · LW(p) · GW(p)

Are the personalities of the sub-copies allowed to evolve on their own?

Yes, the copies are expected to diverge to that degree, given sufficient time. However, by the time that happens, enough evidence about organizational science will have been gathered for the founding charter to have been amended into unrecognizability. That's not the period of development I'm currently focusing on.

so you would require no special treatment of the subject.

If by 'no special treatment' you mean 'an existing co-op's charter and by-laws could be copied, have the names search-and-replaced, and they'd be good to go', I disagree; the fact that the copies can make further copies of themselves, and that the copies can be run at extremely different speeds, adds a number of wrinkles to such topics as defining criminal responsibility, inheritance, and political representation, just for starters. That said, I'm perfectly willing to save myself as much effort as is possible by importing any existing pieces of charters or bylaws which don't need further tweaking, if anyone can point me to such.

Replies from: MrMind
comment by MrMind · 2017-03-06T13:46:26.118Z · LW(p) · GW(p)

Well, if this charter is important for the story, then you should tweak the future organizational science so that it points in the direction you want to. If otherwise it's not, then why not hand-wave it?

Replies from: DataPacRat
comment by DataPacRat · 2017-03-06T16:15:13.893Z · LW(p) · GW(p)

tweak

hand-wave

Just because I don't currently know the details of the relevant bits of organizational science doesn't mean somebody around here doesn't already know them. Just because I can't do the math as easily as I could for rocket science is no excuse to try to cheat how reality functions.

Replies from: MrMind
comment by MrMind · 2017-03-06T16:54:03.606Z · LW(p) · GW(p)

That is an objection that is only valid for a story happening in a time near to us. But, as you say:

However, by the time that happens, enough evidence about organizational science will have been gathered for the founding charter to have been amended into unrecognizability

and nobody out there can possibly know their own field hundreds of years in the future. I state my case: handwave everything and concentrate on the story.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-06T17:12:11.115Z · LW(p) · GW(p)

I believe I may have phrased that quoted part poorly. Perhaps, "... long before the time the copies diverge enough to want to split into completely separate groups, they would likely have already learned enough about the current state-of-the-art of organizational theory to have amended the charter from its initial, preliminary form into something quite different". I didn't mean to imply 'hundreds of years', just a set of individuals learning about a field previously outside their area of expertise.

Replies from: MrMind
comment by MrMind · 2017-03-07T08:15:04.650Z · LW(p) · GW(p)

I still don't understand why it has to be a charter instead of, say, an AI that can react to change and update itself on new information.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-08T05:23:04.184Z · LW(p) · GW(p)

Because I am actually reasonably capable of creating some sort of actual charter that actually exists, and applying it to a scenario based on minor extrapolations of existing technologies that don't require particularly fundamental breakthroughs (ie: increased computer power; increased understanding of how neural cells work, such as is being fiddled with in the OpenWorm project; and increased resolution of certain scanning technology). I wouldn't know where to begin in even vaguely describing "an AI that can react to change and update itself on new information", and if such a thing /could/ be written, it would nigh-certainly completely derail the entire scenario and make the multi-self charter completely irrelevant.

Replies from: MrMind
comment by MrMind · 2017-03-09T15:37:56.183Z · LW(p) · GW(p)

I'm just saying that a coordinating AI seems an obvious evolution; just yesterday one of my coworkers told me that machine learning systems for the automatic checking of complex regulations are already being used profitably.
Anyway, if the charter itself is the focal point of the story, by all means delve into organizational science. Just don't forget that, when writing science fiction, it's very easy to descend into info-dumping.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-09T19:48:56.786Z · LW(p) · GW(p)

an obvious evolution

I've been skimming some of my setting-idea notes, such as 'algorithms replacing middle-managers' and have realized that, for a certain point of the planned setting, you've highlighted an approach that is likely to be common among many other people. However, one of the main reasons for my protagonist's choice to try relying on himselves is that AIs which optimize for various easy-to-check metrics, such as profitability, tend not to take into account that human values are complex.

So there are likely going to be all manner of hyper-efficient, software-managed organizations who, in a straight fight, could out-organize my protagonist's little personal co-op. Various copies of my protagonist, seeing the data, will conclude that the costs are worth the benefits, and leave the co-op to gain the benefits of said organizational methods. However, this will cause a sort of social 'evaporative cooling', so that the copies who remain in the co-op will tend to be the ones most dedicated to working towards the full complexity of their values. As long as they can avoid going completely bankrupt - in other words, as long as there's enough income to pay for the hardware to run at least one copy that remains a member - then the co-op will be able to quietly chug along doing its own thing while wider society changes in various values-simplifying ways around it.

... That is, if I can do everything that such a story needs to get done right.

comment by moridinamael · 2017-03-06T16:36:07.161Z · LW(p) · GW(p)

In lieu of coming up with a creative solution to your problem, I will relate how Hannu Rajaniemi solves this problem in the Quantum Thief books, particularly for the group called the Sobornost. (Spoilers, obviously.) There are billions (trillions?) of copies of certain individuals, and each copy retains self-copying rights. Each copy knows which agent forked it (who its "copyfather" is), and is programmed to feel "religious awe" and devotion to its specific line of descent. So if you found yourself spawned in this world, you would feel strong awe and obedience toward your copyfather, even stronger awe and obedience toward your copygrandfather, and ultimate devotion to the "original" digital version of yourself (the "prime"). This policy keeps everyone in line and assists in conflict resolution, because there's always a hierarchy of authority among the copies. This also allows small groups of copies to go off and pursue a tangential agenda with trust that the agenda will be in line with what the prime individual wanted.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-06T17:23:38.533Z · LW(p) · GW(p)

It's an interesting solution, but the ability to edit the AIs to reliably feel such emotions is rather further in the future than I want to focus on; I want to start out by assuming that the brain-emulations are brute-force black-box emulations of low-level neural processes, and that it'll take a significant amount of research to get beyond that stage to create more subtle effects.

That said, I /do/ have some notes on the benefits of keeping careful track of which copies "descend" from which, in order to have a well-understood hierarchy to default back onto in case some emergency requires such organization. I've even considered having 'elder' copies use a set of computational tricks to have kill-switches for their 'descendants'. But having spent some time thinking about this approach, I suspect that an AI-clan which relied on such a rigid hierarchy for organizing their management structure would be rapidly out-competed by AI-clans that applied post-paleolithic ideas on the matter. (But the effort spent thinking about the hierarchy isn't wasted, as it can still serve as the default basis for legal inheritance should one copy die, and as a default hierarchy in lifeboat situations with limited resources, if the AI-clan hasn't come up with a better system by then.)

comment by TimS · 2017-03-06T14:52:59.985Z · LW(p) · GW(p)

I suspect your proposed charter is practically impossible for you to write. If it were possible for one charter document to scale up and down the way you suggest, then we should expect it to already exist and be in use. After all, people have been writing charter documents for a long time.

In the real world, charters don't survive in their original form all that long. To pick an example I am familiar with, the US Constitution was ratified in 1789. Fourteen years later, in 1803, the Supreme Court interpreted the document to allow judicial review of whether statutes complied with the Constitution. You'll have to take my word for it, but whether judicial review was intended by the drafters of the US Constitution is controversial to this day.

It is pretty clear that the drafters would have been surprised by the degree of judicial intrusiveness in implementing policy, just as they would be surprised by how much the US has grown in economic size and political power since the Constitution was drafted.

Replies from: DataPacRat
comment by DataPacRat · 2017-03-06T16:28:52.009Z · LW(p) · GW(p)

the Constitution

If we're going for American political parallels, then I'm trying to put together something that may be more closely akin to the Articles of Confederation; they may have been replaced with another document, but the Articles' details were still important to history. For a more modern parallel, startup companies may reincorporate at various times during their spin-ups and expansions, but a lot of the time they wouldn't need to if they'd done competent draftwork at the get-go. Amendment, even unto outright replacement, is an acknowledged fact-of-life here; but the Founder Effect of the original design can still have significant consequences, and in this case, I believe it's worth doing the work to try to nudge such long-term effects.

That said - in the unlikely event that it turns out to be impossible to assemble a charter and bylaws that do everything I want, then I can at least put together something that's roughly equivalent to the Old Testament in the sense of being "a stream-of-consciousness culture dump: history, law, moral parables, and yes, models of how the universe works", to serve as enough of a foundational document to allow the AI copies to maintain a cohesive subculture in much the way that Rabbinical Judaism has over the centuries.

Replies from: TimS
comment by TimS · 2017-03-07T18:56:44.998Z · LW(p) · GW(p)

The Articles of Confederation were not amended into the Constitution, they were replaced by the Constitution in a manner that likely violated the Articles. Likewise, the Old Testament leads to Priestly Judaism (with animal sacrifice), not the radically different Rabbinical Judaism.

I think trying to bring these things in parallel with start-up incorporation is inherently difficult. Re-incorporation of start-ups is driven by the needs of mostly the same stakeholders as the original incorporation. Most importantly, they are trying to achieve the same purpose as the original incorporation - wealth to founders and/or investors. Changes to foundational governing documents are usually aimed at changed or unanticipated circumstances, where the founders' original purpose does not address how the problem should be solved.

comment by SnowSage4444 · 2017-03-08T22:03:31.134Z · LW(p) · GW(p)

What's up? I've read up to chapter 40 of HPMOR, and I thought I'd try talking on its forum.

Replies from: username2
comment by username2 · 2017-03-09T03:49:12.743Z · LW(p) · GW(p)

You mean /r/hpmor?

comment by scarcegreengrass · 2017-03-07T00:12:29.737Z · LW(p) · GW(p)

I found a historical quote that's relevant to AI Safety discussions. This is from the diary of President Harry Truman when he was in the recently-bombed Berlin in 1945. Just interesting to hear someone discuss value alignment so early. Source: http://www.pbs.org/wgbh/americanexperience/features/primary-resources/truman-diary/

"I thought of Carthage, Baalbek, Jerusalem, Rome, Atlantis, Peking, Babylon, Nineveh, Scipio, Ramses II, Titus, Herman, Sherman, Genghis Khan, Alexander, Darius the Great -- but Hitler only destroyed Stalingrad -- and Berlin. I hope for some sort of peace, but I fear that machines are ahead of morals by some centuries and when morals catch up perhaps there'll be no reason for any of it. I hope not. But we are only termites on a planet and maybe when we bore too deeply into the planet there'll be a reckoning. Who knows?"

comment by ingive · 2017-03-06T20:01:44.812Z · LW(p) · GW(p)

MrMind: 'the singularity group' has now changed focus; rather than making people change their awareness and thus do EA actions, they instead ask them to do the right EA action in every single moment. So people who are interested can apply and go over, meditate, exercise, work, etc. according to a schedule, until disciplined right action has transitioned into the EA awareness. I think 12 hours of work a day (of course not physical labor).

The other stuff, including the clicking stuff, was thereby deemed history; it wasn't that effective. They are also going to go around universities and try to grow the movement. To be honest, this seems exciting, and the impact you were speculating about. Here you can see the application form: http://pastebin.com/PLc1r0J9

So the problem about our current day society seems like it's centered around, for example, entertainment and intellectual masturbation (hobbies) even though we know we can be fulfilled and in a state of flow doing, for example, the right action every single moment. There won't even be a future or a past then.

Replies from: Lumifer, ChristianKl, MrMind
comment by Lumifer · 2017-03-06T20:06:20.499Z · LW(p) · GW(p)

You mean they weren't sufficiently cultish, so they fixed that?

comment by ChristianKl · 2017-03-07T17:09:58.748Z · LW(p) · GW(p)

The other stuff, including the clicking stuff, was thereby deemed history, it wasn't that effective.

What kind of evidence convinced them that the clicking stuff isn't effective? What did the decision process look like?

Replies from: ingive
comment by ingive · 2017-03-08T01:22:30.074Z · LW(p) · GW(p)

As an intervention, I suppose by the number of applicants. It was mostly about changing one's essence or awareness, rather than having it change itself through taking action, being responsible, and not making excuses. Of course, 'clickers' can still apply, but this is the new stuff, regarding non-clickers.

Bachir is taking application calls publicly and it's pretty fun (Here's a Scientology mention). https://www.twitch.tv/videos/127066455?t=01h07m10s

I'm thinking about applying in the future. First I am going to read the Sequences, deepen my practice of meditation, etc.

comment by MrMind · 2017-03-07T08:12:38.330Z · LW(p) · GW(p)

To be honest, this seems exciting, and the impact you were speculating about

Nope. My speculation was:

The other stuff, including the clicking stuff, was thereby deemed history, it wasn't that effective.

It was an easy prediction.

comment by skeptical_lurker · 2017-03-09T21:50:20.497Z · LW(p) · GW(p)

I said a few weeks back that I would publicly precommit to going a week without politics. Well, I partially succeeded: I did start reading an SSC article on politics, for example, because it popped up in my RSS feed, but I stopped when I remembered that I was ignoring politics. The main thing is I managed to avoid any long timewasting sessions of reading about politics on the net. And I think this has partially broken some bad habits of compulsive web browsing I was developing.

So next I think I shall avoid all stupid politics for a month. No Facebook or Reddit, but perhaps one reasonably short and high-quality article on politics per day. Speaking of which, can anyone recommend any short, intelligent, rational writings on, for instance, feminism? My average exposure to anti-feminist thought is fairly intelligent, while my average exposure to pro-feminist thought is "How can anyone disagree with me? Don't they realise that their opinions are just wrong? Women can be firefighters and viking warriors! BTW, could you open this jar for me, I'm not strong enough." And this imbalance is not good from a rationalist POV. I am especially interested in whether feminists have tackled the argument that if feminists have fewer children, then all the genes that predispose one to being feminist (and to anything else that correlates) will be selected against. I mean, this isn't a concern for people who think that the singularity is near(tm) or who don't care what happens a few generations in the future, but I doubt either of those applies to many feminists, or people in general.

Replies from: Lumifer, dogiv, knb
comment by Lumifer · 2017-03-10T17:23:30.836Z · LW(p) · GW(p)

if feminists have fewer children, then all the genes that predispose one to being feminist (and to anything else that correlates) will be selected against

That applies to homosexuality in spades, and yet gay people haven't been washed out of populations.

By the way, keep in mind that "feminist" is not a very precise label. In particular, there is a rather large difference between the first-wave feminists (e.g. Camille Paglia) and third-wave feminists (e.g. Lena Dunham, I guess?). They don't like each other at all.

Replies from: Elo
comment by Elo · 2017-03-11T01:07:51.442Z · LW(p) · GW(p)

There was a theory that the genes associated with male homosexuality also confer extra fertility on women (and twinning), thus keeping the genes in the population via other mechanisms. (I think the research was on Italian families; no link, sorry.)

Replies from: gwern
comment by gwern · 2017-03-12T02:05:07.145Z · LW(p) · GW(p)

Aside from the inclusive fitness claim, Cochran's gay germ hypothesis is also consistent with the continued existence of homosexuality: the pathogen co-evolves, and so while the genes do get selected against, the particular genes keep changing. Unfortunately, his theory still remains something of a 'germ of the gaps' theory - no one's come up with a remotely plausible theory or found decent evidence that homosexuality spikes the fertility of relatives so much as to compensate for the sterility of homosexuals (remember, inclusive fitness decreases fast: if a homosexual has 1.05 rather than 2.1 children, then their siblings have to have 2.1 additional children, their nephews and nieces 4.2 additional children, and so on), so a theory which merely isn't contradicted by any evidence looks pretty good by comparison.
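
That back-of-the-envelope arithmetic can be made explicit with Hamilton's rule (a minimal sketch; the fertility figures are the illustrative ones from the paragraph above, and the coefficients of relatedness are the standard ones):

```python
# Hamilton's rule sketch: to offset a direct-fitness deficit d via a
# class of relatives with coefficient of relatedness r, those relatives
# must gain d / r extra offspring. Figures are illustrative only.
def extra_offspring_needed(deficit, relatedness):
    return deficit / relatedness

deficit = 2.1 - 1.05  # a homosexual having 1.05 rather than 2.1 children

for relative, r in [("sibling", 0.5), ("niece/nephew", 0.25),
                    ("first cousin", 0.125)]:
    extra = extra_offspring_needed(deficit, r)
    print(f"{relative} (r={r}): {extra:.2f} extra children needed")
```

At first-cousin distance (r = 1/8) the required compensation is already 8.4 extra children, which is why any inclusive-fitness story has to work almost entirely through very close kin.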

One thing I thought of which would be direct evidence for the infection theory: polygenic scores for homosexuality. It's somewhat heritable, so given a large sample size like UK Biobank, it should be possible to explain a few % of variance and construct a PGS based on a fairly narrow age cohort like 1 or 2 decades. Then the PGS can be applied longitudinally outside the sample. If it's pathogenic co-evolution and the relevant genes keep changing, then the homosexuality PGS should show highest predictive validity in the original age bracket, but then decrease steadily as one moves away from the age bracket into the past or toward the present, showing a clear inverted V shape. While polygenic scores can increase or decrease steadily or show sudden shocks for various reasons just like heritabilities can increase/decrease over time (eg education PGS decrease due to dysgenics, height PGS increase and so on), they don't typically show a distinct V shape, so finding one for homosexuality would be very striking.

Replies from: Mitchell_Porter, Douglas_Knight
comment by Mitchell_Porter · 2017-03-12T03:00:56.933Z · LW(p) · GW(p)

My theory and meta-theory: The gay germ theory is pretty silly. But the big myth to which it is a reaction, is the idea that people are simply "born that way". Cochran has a paranoid intuition that something else is happening, so he posits his gay germ. But what's really happening is sexual imprinting. A person's sexuality is greatly shaped by the first conditions under which they experience arousal, orgasm, and emotional bonding. Sexualities are "transmitted" in a way a little like languages. There's no "German germ" which makes people think and speak auf deutsch, instead there's some sort of deep learning based on early experience of a German-speaking environment. The acquisition of sexuality might be more like conditioning than learning, but it's still an acquired trait.

Replies from: gwern, Douglas_Knight
comment by gwern · 2017-03-12T16:18:07.034Z · LW(p) · GW(p)

That theory is even worse than the inclusive fitness one because you offer no mechanism whatsoever to offset the huge fitness penalty.

Sexual imprinting is a highly successful evolved mechanism critical to reproductive fitness which does in fact succeed in the overwhelming majority of cases; in many ways, it is more important than trivial details like 'eating food' because at least an offspring which immediately starves to death doesn't drain parental resources and compete with siblings and the parents can try again! There should be a very good reason why such an important thing, found throughout the animal kingdom in far stupider & less sexually-dimorphic organisms, goes wrong in such a consistent way when other complex behaviors work at a higher rate and fail much more bluntly & chaotically. 'Random imprinting' is too weak a mechanism to thwart such a critical device, and doesn't explain why the errors do not rapidly disappear with general or sex-linked adaptations. (Even as a 5% liability-threshold binary trait, a reproductive fitness penalty of 50%, to be generous to a trait which involves active aversion to procreative sex, would imply it should be far lower now than when it first arose*.)

Further, such a random nonshared environment theory doesn't explain why dizygotic and monozygotic same-sex twins differ in concordance. (They don't differ in language, so your example is evidence against your imprinting theory.)

* https://www.researchgate.net/profile/J_Bailey2/publication/21311211_A_genetic_study_of_male_sexual_orientation/links/02e7e53c1a72a8a596000000.pdf gives a low end heritability estimate of 0.31; population prevalence among males is usually estimated ~5% giving a liability threshold of ~-1.64; homosexuality is amply documented for the past 2500 years or so, at least back to the ancient Greeks, which at a generation time of ~25 years, means 100 generations. So assuming a fitness penalty of 'just' half and that selection started only 100 generations ago (rather than much further back), we would expect the rate of homosexuality to be less than 1/5th what it is.

 threshold_select <- function(fraction_0, heritability, selection_intensity) {
     library(VGAM) ## for 'probit'
     fraction_probit_0 <- probit(fraction_0)
     ## selection intensity among the non-manifesting fraction:
     s_0 <- dnorm(fraction_probit_0) / fraction_0
     ## new non-manifesting rate after one generation of selection:
     fraction_1 <- pnorm(fraction_probit_0 + heritability * s_0 * selection_intensity)
     return(fraction_1)
 }
 threshold_select(0.95, 0.31, 0.5)
 # [1] 0.9517116257

 fractions <- 0.95
 for (i in 2:100) { fractions[i] <- threshold_select(fractions[(i-1)], 0.31, 0.5) }
 round(fractions, digits=3)
 #  [1] 0.950 0.952 0.953 0.955 0.956 0.958 0.959 0.960 0.961 0.963
 # [11] 0.964 0.965 0.966 0.966 0.967 0.968 0.969 0.970 0.971 0.971
 # [21] 0.972 0.973 0.973 0.974 0.974 0.975 0.975 0.976 0.977 0.977
 # [31] 0.977 0.978 0.978 0.979 0.979 0.980 0.980 0.980 0.981 0.981
 # [41] 0.981 0.982 0.982 0.982 0.983 0.983 0.983 0.983 0.984 0.984
 # [51] 0.984 0.984 0.985 0.985 0.985 0.985 0.986 0.986 0.986 0.986
 # [61] 0.986 0.987 0.987 0.987 0.987 0.987 0.987 0.988 0.988 0.988
 # [71] 0.988 0.988 0.988 0.989 0.989 0.989 0.989 0.989 0.989 0.989
 # [81] 0.989 0.990 0.990 0.990 0.990 0.990 0.990 0.990 0.990 0.990
 # [91] 0.991 0.991 0.991 0.991 0.991 0.991 0.991 0.991 0.991 0.991
Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2017-03-13T06:48:33.283Z · LW(p) · GW(p)

My guess is that it's somehow a spandrel of intelligence.

Replies from: gwern
comment by gwern · 2017-03-13T19:56:48.827Z · LW(p) · GW(p)

To be immune to selection because it's part of intelligence would imply a strong genetic correlation. Aside from the fact that I am doubtful any such genetic correlation will ever be found (there is no noted phenotypic correlation of homosexuality & intelligence that I've heard of), this still has the issue that homosexuality ought to be decreasing noticeably over time: while intelligence has apparently been neutral or selected for over the past few millennia and so hypothetically could've slowed the selection against homosexuality, intelligence itself has been selected against for at least a century, so that would accelerate the selection now that there are fitness penalties for both homosexuality & intelligence (by a fair bit, because selection on continuous traits is much faster than selection on rare binary traits).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-14T19:55:57.982Z · LW(p) · GW(p)

there is no noted phenotypic correlation of homosexuality & intelligence that I've heard of

I've heard the raw correlation widely claimed, but I think most people interpret it as measuring closeting. Certainly openly gay men have higher income than straight men.

Replies from: Error
comment by Error · 2017-03-14T20:10:58.530Z · LW(p) · GW(p)

I'd be inclined to suspect closeting too. The better your ability to support yourself, the less you need to worry about repercussions.

Tangential and possibly relevant: I've noticed bisexual women appear to be ridiculously common in high-intelligence nerd communities. I don't know whether I should associate that with the intelligence or the geek/nerd/dork personality cluster, nor do I know which way the causation goes.

comment by Douglas_Knight · 2017-03-14T19:48:25.824Z · LW(p) · GW(p)

The gay germ theory is pretty silly

You only posit that because of your paranoid ignorance of biology.

Cochran has a paranoid intuition that something else is happening, so he posits his gay germ

Bullshit. Cochran wrote down his exact thought process. He thinks that everything is due to germs. Moreover, he measures the strength of the evidence. Schizophrenia and (male) homosexuality are at the top of the list.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2017-03-18T07:27:19.328Z · LW(p) · GW(p)

OK, let's talk about proximate causes and ultimate causes. The proximate causes are whatever leads to the formation of a particular human individual's sexuality. The ultimate causes are whatever it is that brought about the existence of a population of organisms in which a given sexuality is even possible.

My focus has been on proximate causes. I look at the role of fantasy, choice, and culture in shaping what a person seeks and what they can obtain, and the powerful conditioning effect of emotional and sexual reward once obtained, and I see no need at all to posit an extra category of cause, in order to explain the existence of homosexuality. It's just how some people end up satisfying their emotional and sexual drives.

What I had not grasped, is that the idea of the gay germ is being motivated by consideration of ultimate causes - all this talk of fitness penalties and failure to reproduce. I guess I thought Cochran was a science-jock who couldn't imagine being gay, and who therefore sought to explain it as resulting from an intruding perturbation of human nature.

I am frankly not sure how seriously I should take the argument that there has to be (in gwern's words) a "mechanism... to offset the huge fitness penalty". Humanity evolves through sexual selection, right? And there are lots of losers in every generation. Apparently that's part of our evolutionary "business model". Meanwhile I've argued that non-reproducing homosexuality is a human variation that arises quite easily, given our overall cognitive ensemble. So maybe natural selection has neither a clear incentive to eliminate it, nor a clear target to aim at anyway.

Replies from: entirelyuseless, Douglas_Knight
comment by entirelyuseless · 2017-03-30T13:01:06.990Z · LW(p) · GW(p)

I agree with what you said, and it is borne out very well by my experience of gay persons. As for gwern's comment, the silliness is to think that evolution has enough selective power to completely remove that sort of thing. In fact, it would be far easier for evolution to remove a gay gene than for it to prevent things that happen through accidental social circumstances.

Consider this: I am over 40, I do not have children and have never had sex, and I have no intention to do so. Shouldn't evolution have completely removed the possibility of people like me? I am not even helping other people raise children. I live alone, and consume my own resources.

The answer is that if "people like me" came about because of a specific gene, evolution would indeed have removed the possibility. As it is, it comes about through a vast collection of accidental and social facts, and the most evolution can do is make it rare, which it does. The same is true of homosexuality.

comment by Douglas_Knight · 2017-03-18T18:03:29.051Z · LW(p) · GW(p)

I thought Cochran was a science-jock who couldn't imagine being gay

Whereas you can imagine it so easily that you didn't bother to look at the real world, just as you can imagine Cochran so easily that you didn't bother to look at him.

how seriously I should take the argument that there has to be (in gwern's words) a "mechanism... to offset the huge fitness penalty"

90% of biologists don't believe in evolution, either, but progress comes from those who do.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2017-03-21T10:59:22.858Z · LW(p) · GW(p)

I believe in evolution, I just don't believe in the gay germ.

But regardless of belief... I have some questions which I think are fair questions.

Are there any ideas about how and when the gay germ is acquired?

Are there any ideas about its mechanism of action?

If homosexuality has such a huge fitness penalty, why haven't we evolved immunity to the gay germ?

If someone hasn't experienced sex yet, but they already think they are gay, is that because of the gay germ?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-21T14:56:48.572Z · LW(p) · GW(p)

Are there any ideas about how and when the gay germ is acquired?

If someone hasn't experienced sex yet, but they already think they are gay, is that because of the gay germ?

Gays generally say that they "always knew they were different" (and there is some evidence that this is not just confabulated memories), so it is probably acquired before age 5, possibly before birth. It is probably something common, like the flu. And there might be multiple infections that cause the same brain changes, as appears to be the case with narcolepsy.

If homosexuality has such a huge fitness penalty, why haven't we evolved immunity to the gay germ?

You could ask a similar question about any explanation of homosexuality. It is measured to be weakly heritable. So we know that there are genes that protect from it. Given the fitness penalty, why haven't those genes swept through the population? That wouldn't necessarily eliminate it, but they would eliminate the heritability.

There are only two possibilities: either the fitness penalty is not what it looks like (e.g., the sexual antagonism hypothesis), or the environment has changed, so that which genes are protective has changed.

Germ theory gives a simple explanation for the changing environment, the Red Queen Hypothesis: the germ is evolving, so the genes that protect against it are changing. This is the metric that Cochran and Ewald use: multiply the fitness cost by the heritability. The higher that number, the more likely the cause is infection.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2017-03-23T19:31:20.217Z · LW(p) · GW(p)

There actually is a known replicator that assists the reproduction of gay phenotypes, but it's a behavior: gay sex! For a recent exposition, see the video that cost "Milo" his job.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-23T21:02:15.510Z · LW(p) · GW(p)

No, there is no evidence of such replication. This isn't really compatible with gays being detectable at age <5. Also, it's pretty clear that isn't what happens in sheep, which are highly analogous.

Common infections can have effects on a small population. For example, Epstein-Barr is implicated in at least some cases of narcolepsy, but 95% of the population tests positive for it.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2017-03-30T10:10:09.428Z · LW(p) · GW(p)

What about pederasty in ancient Greece, what about sex in all-male prisons... in both those cases, you have men who by current definitions are not gay, but rather bisexual. And in both cases you have recruitment into an existing sexual culture, whether through seduction or coercion.

Human sexuality can clearly assume an enormous variety of forms, and I don't have a unified theory of it. Obviously genes matter, starting with the basic facts of sex determination, and then in a more elusive way, having some effect on sexual dispositions in the mature organism.

And yes, natural selection will be at work. But, in this case it is heavily mediated by culture (which is itself a realm of replicator populations), and it is constrained by the evolvability of the human genome. I continue to think that the existence of nonreproductive sexuality is simply a side effect of our genomic and evolutionary "business model", of leaky sexual dimorphism combined with Turing-complete cognition.

Replies from: bogus
comment by bogus · 2017-03-30T10:40:39.457Z · LW(p) · GW(p)

What about pederasty in ancient Greece, what about sex in all-male prisons... in both those cases, you have men who by current definitions are not gay, but rather bisexual.

Pederasty in ancient Greece was culturally very different from modern homosexual behavior though (or at least, the conventional view thereof; some people would contend that the 'ancient' model is very much lurking beneath the surface of even the most modern, 'egalitarian' gay relationships!). In that case, there was a very clear demarcation between an active participant (the pederast or erastês) who did behave 'bisexually' in some sense, but was really more properly connoted as highly masculine, and a passive participant (the erômenos) who was the only one to be specially connoted as 'feminine'.

The sexual manifestations of pederasty were also criticized by quite a few philosophers and intellectuals, and the tone of these critiques suggests that pederasty could easily shade into sexually abusive behavior. (Notably, Christian morality also shared this critical attitude - the heavy censure of "homosexuality" and heterosexual "fornication" one finds in the New Testament can really only be understood in the light of Graeco-Roman sexual practices). Sex in all-male prisons also seems to share many of these same features; at the very least, if the common stereotypes of it are to be believed, it doesn't really feature the 'egalitarianism' of modern homosexual relationships!

comment by Douglas_Knight · 2017-03-15T21:00:59.470Z · LW(p) · GW(p)

inclusive fitness decreases fast

Yes, the gay uncle hypothesis, that the gay phenotype has positive inclusive fitness, is absurd.
But I think Elo was referring to the sexual antagonism hypothesis, that the gene increases fitness in women who carry it and decreases fitness in men. Then we are out of the realm of inclusive fitness and the tradeoff is 1:1. Moreover, if the female variant has 100% penetrance and the male variant only 1/3 penetrance, then there is a 3:1 advantage. So if gay men are down a child, female carriers only need to have 1/3 of an extra child.

In one sense 1/3 of an extra child is small. It would be pretty hard to measure that sisters of gay men have 1/6 of an extra child. The claim isn't immediately inconsistent with the two observations of (1) gay equilibrium; and (2) not particularly fecund relatives. But in another sense, 1/3 of an extra child within reach is a huge fitness boost, and I would expect natural selection to find another route to this opportunity. So I don't believe this either, but it's a lot better than the gay uncle hypothesis.
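(The arithmetic above can be checked mechanically. This is a sketch of the comment's own illustrative single-allele model - 100% penetrance in female carriers, 1/3 in males, a one-child fitness cost per gay man - none of which is established genetics:)

```python
from fractions import Fraction

# Assumed single-allele model: 100% penetrance in female carriers,
# 1/3 penetrance in male carriers (both figures are illustrative).
male_penetrance = Fraction(1, 3)

# A gay man is assumed to be "down a child": fitness cost of 1 child.
cost_per_gay_man = Fraction(1)

# The expected cost per *male carrier* is diluted by penetrance.
cost_per_male_carrier = male_penetrance * cost_per_gay_man

# For the allele to be at equilibrium, female carriers must gain
# what male carriers lose.
gain_per_female_carrier = cost_per_male_carrier
print(gain_per_female_carrier)  # 1/3

# A sister of a gay man has only a 1/2 chance of carrying the allele,
# so her expected excess fertility is half that.
print(Fraction(1, 2) * gain_per_female_carrier)  # 1/6
```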

Replies from: gwern
comment by gwern · 2017-03-15T23:08:06.923Z · LW(p) · GW(p)

Well, maybe. I don't much like that either, since you would expect some sex-linked adaptations on the Y chromosome to neutralize it in males (and you would expect to see a low genetic correlation between female homosexuality and male homosexuality if they're entirely different things), and I don't think I've seen many plausible examples of sexual antagonism in humans. (Actually, none come to mind.) It's better than overall inclusive fitness but still straining credulity for such a pervasive fitness-penalizing phenomenon.

In one sense 1/3 of an extra child is small. It would be pretty hard to measure that sisters of gay men have 1/6 of an extra child.

Why is that? 0.33 kids vs a mean of 2.1 and an SD of I dunno, 3 (lots of people have 0 kids, a fair number have 4-6), implies a fairly observable effect size with a sample requirement of n=1300 (power.t.test(power=0.8, d=0.33/3)), even less if you take advantage of within-family comparisons to control away some of that variance, and it doesn't require a particularly exotic survey dataset - most studies which ask in as much detail as sexual orientation will also collect basic stuff like 'number of offspring'.
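(A sanity check of that power calculation - a sketch, not the original R call: `power.t.test` uses the noncentral t distribution, but the normal approximation gives essentially the same per-group n for d = 0.33/3. The mean of 2.1 and SD of 3 are the comment's own guesses:)

```python
from statistics import NormalDist

# Normal-approximation analogue of R's power.t.test(power=0.8, d=0.33/3):
# per-group n for a two-sample test at alpha = 0.05, power = 0.8.
d = 0.33 / 3  # assumed effect: 1/3 of a child against an SD of ~3
z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96
z_power = NormalDist().inv_cdf(0.8)           # ~0.84

n_per_group = 2 * (z_alpha + z_power) ** 2 / d ** 2
print(round(n_per_group))  # ~1297, matching the quoted n = 1300
```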

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-16T02:00:42.237Z · LW(p) · GW(p)

Your power calculation was for an effect size of 1/3, which only makes sense if you know exactly which women have the gene.

The obvious test is to look at the children of sisters of gay vs straight men. But then the relationship is 1/2, so this cuts down the necessary advantage to 1/6. This has been done, but it is not at all standard, and I believe the sample size was too small. Aunts have the advantage of being older and thus more likely to have completed their fertility, and the disadvantage of being less related.

The fertility of mothers is more commonly measured: the number of siblings. And, indeed, it is often claimed that gay men come from large families, or at least that they have older brothers. But we still don't know that the mother has the gene. As a warm-up, consider the case of penetrance still 1/3, but gays having no children, requiring female carriers to have an extra 2/3 to compensate. Then male carriers have 4/3 children and female carriers 8/3, so the gay man is 2/3 likely to have gotten the gene from his mother, 1/3 from his straight father. So his expected family size is an extra 4/9, which is measurable in a large sample.

Back to my model where gay men have 1 child and female carriers 7/3. Then the gay son is 7/12 likely to have gotten the gene from his mother, 4/12 from his straight father and 1/12 from his gay father. So a naive calculation says that his family size should be boosted by 1/3*7/12 and reduced by 1*1/12, thus net increased by 1/9, which is very small. (This is naive because the fertility of a gay man conditioning on his having a child probably depends on the exact distribution of fertility. This problem also comes up conditioning on the mother having a child, but with a small fertility advantage it probably doesn't matter much comparing the mother of a gay and a straight.)
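(The naive calculation above can be reproduced in a few lines. Everything here follows the comment's illustrative model - gay men have 1 child, female carriers 7/3, penetrance 1/3 in males - plus a baseline fertility of 2, which the comment's fractions imply but don't state explicitly:)

```python
from fractions import Fraction

# Illustrative single-allele model: baseline fertility 2 (assumed),
# gay men have 1 child, female carriers 7/3, penetrance 1/3 in males.
baseline = Fraction(2)
gay_father = Fraction(1)
straight_father = Fraction(2)
carrier_mother = Fraction(7, 3)
penetrance = Fraction(1, 3)

# Which carrier parent did a gay son get the allele from?
# Weight each route by that parent's expected number of children.
w_mother = carrier_mother                                # 7/3
w_straight_father = (1 - penetrance) * straight_father   # 4/3
w_gay_father = penetrance * gay_father                   # 1/3
total = w_mother + w_straight_father + w_gay_father      # 12/3

p_mother = w_mother / total          # 7/12
p_gay_father = w_gay_father / total  # 1/12

# Expected excess family size of a gay man, relative to baseline:
boost = (p_mother * (carrier_mother - baseline)
         + p_gay_father * (gay_father - baseline))
print(boost)  # 1/9
```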

Replies from: gwern
comment by gwern · 2017-03-16T21:54:49.335Z · LW(p) · GW(p)

Your power calculation was for an effect size of 1/3, which only makes sense if you know exactly which women

Not sure I follow. I wasn't talking about the gene; I was talking about the net fertility impact which must exist in the sisters to offset their gay brother's lack of fitness. If you want to see if it exists, all you have to do is compare the sisters of gay men to those of non-gay men; either the former have enough extra babies to make up for it or not.

have the gene.

There is no single gene for homosexuality; otherwise the pedigree studies would look much clearer than they do (rather than like a liability-threshold trait), the linkage studies would likely have already found it, or 23andMe's GWAS would have (homosexuality is so common that a single variant would have to be very common; they probably ran a GCTA, since I know they've GCTAed at least 100 traits they haven't published on, though I don't know whether they checked the correlation with chromosome length to check for polygenicity). So I'm not sure your calculations there are relevant.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-16T22:52:58.225Z · LW(p) · GW(p)

Yes, of course all my calculations are under the simplifying assumption of a single gene. But under that assumption, sisters of gay men have only 1/2 chance of having the gene and so their expected additional number of babies is only 1/6. If you don't think that this assumption is appropriate, you can suggest some other model and do a calculation. One thing I can guarantee you is that it won't produce the number 1/3.

Replies from: gwern
comment by gwern · 2017-03-17T00:20:45.797Z · LW(p) · GW(p)

If you don't think that this assumption is appropriate, you can suggest some other model and do a calculation. One thing I can guarantee you is that it won't produce the number 1/3.

I think you're missing the point. The effect size of the gene or genes is irrelevant, as is the architecture. There can be any distribution, as long as there's enough to be consistent with current genetic research on homosexuality having turned up few or no hits (linkage, 23andMe's GWAS & GCTA, etc). The important question is merely: do their sisters have enough kids to make up, via inclusive fitness, for their own lack of kids? If the answer is no, you're done with the sexual antagonism theory, so you only need to detect that. This is set by the fitness penalty of being homosexual, not by any multiplications. So if homosexuals have 1 fewer kid, then you need to detect 2 kids among their sisters, and so on. From that you do the power calculation.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2017-03-17T01:07:48.587Z · LW(p) · GW(p)

So if homosexuals have 1 fewer kid, then you need to detect 2 kids among their sisters, and so on. From that you do the power calculation.

Back when you did the power calculation for 1/3 rather than 2, you didn't believe that. This number 2 is wrong for three reasons:

  • Inclusive fitness is irrelevant to the antagonistic selection hypothesis. (factor of 2)

  • It ignores penetrance, which is clearly not 100%; it doesn't matter how many children homosexuals have, but rather how many children (male) carriers of the gene(s) have. (factor of 3)

  • It ignores the fact that siblings are only 1/2 related. The relevant gene(s) should only elevate the fertility of carriers, not all sisters. (factor of 2)

comment by dogiv · 2017-03-10T16:23:17.727Z · LW(p) · GW(p)

I haven't seen any feminists addressing that particular argument (most are concerned with cultural issues rather than genetic ones) but my initial sense is something like this: a successful feminist society would have 1) education and birth control easily available to all women, and 2) a roughly equal division of the burden of child-rearing between men and women. These changes will remove most of the current incentives that seem likely to cause a lower birth rate among feminists than non-feminists. Of course, it could remain true that feminists tend to be more educated, more independent, less traditional, etc--traits that might correlate with reduced desire for children. However, I suspect we already have that issue (for both men and women) entirely separately from feminism. Some highly-educated countries try to increase fertility with tax incentives and ad campaigns (Denmark, for instance) but I'm not sure how successful it is. In the end the only good solution to such Malthusian problems may be genetic engineering.

comment by knb · 2017-03-10T01:40:23.163Z · LW(p) · GW(p)

Speaking of which, can anyone recommend any short, intelligent, rational writings on feminism for instance? My average exposure to anti-feminist thought is fairly intelligent, while my average exposure to pro-feminist thought is "How can anyone disagree with me?[...]"

There are some intelligent and interesting heterodox feminists who spend a lot of their time criticizing mainstream or radical feminist positions. I could recommend them to you, and you would probably like some of what they have to say, but then you wouldn't really be challenging your current notions and wouldn't be getting the strongest defenses of current feminist thought.

I'm not a feminist (or a marxist) but I do remember being impressed by the thoughtfulness and clarity of Friedrich Engels' The Origin of the Family, Private Property and the State when I read it back in college.

Replies from: SnowSage4444
comment by SnowSage4444 · 2017-03-15T13:56:28.436Z · LW(p) · GW(p)

Feminist thought: "Men have power and that's sexist. Smash the patriarchy!"

Anti-Feminist thought: "Feminism has too many bad eggs".

Feminist thought: "You can't say that, you racist sexist bigoted bigot!"

comment by Thomas · 2017-03-06T09:15:27.693Z · LW(p) · GW(p)

Insisting on the Landau's problem:

https://protokol2020.wordpress.com/2017/03/06/more-on-landaus-problem/

Replies from: Thomas
comment by Thomas · 2017-03-07T10:09:40.061Z · LW(p) · GW(p)

Some hint:

https://protokol2020.wordpress.com/2017/03/07/landaus-problem-continuation/

Replies from: gjm
comment by gjm · 2017-03-07T12:49:27.172Z · LW(p) · GW(p)

Not a useful hint, sorry. "About half" of primes are in that sequence. The product of (1-2/p) behaves like exp of the sum of -2/p, and that sum diverges like a constant times log log N. So your product tends to 0.
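(A quick numerical illustration of this, assuming - as I read the hint - that "that sequence" means the odd primes that can divide n^2+1, i.e. those congruent to 1 mod 4:)

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

# Multiply (1 - 2/p) over primes p = 1 (mod 4) up to growing bounds:
# the partial products keep shrinking, since the sum of 2/p over these
# primes diverges like log log N.
results = []
for bound in (10**3, 10**4, 10**5):
    prod = 1.0
    for p in primes_up_to(bound):
        if p % 4 == 1:
            prod *= 1 - 2 / p
    results.append(prod)
    print(bound, prod)
```

The decrease is very slow (double-logarithmic in the bound), which is why the product looks deceptively like it might converge to a positive constant.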

Replies from: Thomas
comment by Thomas · 2017-03-07T14:48:02.627Z · LW(p) · GW(p)

Yes, well ... that would be too good anyway: a finite (greater than zero) percentage of primes in N*N+1.

comment by username2 · 2017-03-15T19:10:43.811Z · LW(p) · GW(p)

The point I was making is that this isn't an HPMoR forum.

Replies from: SnowSage4444
comment by SnowSage4444 · 2017-03-17T19:31:52.182Z · LW(p) · GW(p)

It kinda is, isn't it?

Replies from: username2
comment by username2 · 2017-03-20T22:14:04.491Z · LW(p) · GW(p)

Um, no, it is not. There were some reading threads about it when it came out, but this isn't a fan site for HPMoR. There are other places on the internet for that.

comment by Bound_up · 2017-03-09T14:51:56.521Z · LW(p) · GW(p)

What if terrorists are just a twist on the run-of-the-mill cult or suicide cult?

Could general cult-breaking tactics work on terrorists? Maybe terrorists don't have the happiest lives, find some brotherhood in a weird group, buy into it all, and thus end up committing terrorism (inspired by sinesalvator's post on a successful plot to marry off terrorists to get them out of terrorism).

Replies from: Lumifer
comment by Lumifer · 2017-03-09T16:03:39.106Z · LW(p) · GW(p)

What if terrorists are just a twist on the run-of-the-mill cult or suicide cult?

Some are. But not all.

Terrorism is basically a method, a tool. People who use that method are quite diverse.

Replies from: niceguyanon
comment by niceguyanon · 2017-03-13T17:47:50.188Z · LW(p) · GW(p)

Some are. But not all.

But how many? It seems more likely that most terrorists have shitty lives and got exposed to a dangerous and bad meme. The alternative would be that there is a certain genetic demographic that is predisposed to committing terrorism, which sounds far-fetched. If Christians during the Crusades had had modern technology 1000 years ago, we would probably have seen the kinds of solo terrorism we see today. It was really hard to be a lone fanatic trying to kill tens of people back then with a blade.

Replies from: Lumifer
comment by Lumifer · 2017-03-15T15:16:34.614Z · LW(p) · GW(p)

It seems more likely that most terrorists have shitty lives and got exposed to a dangerous and bad meme.

No, it doesn't seem like that at all. Read something other than hysterical mass media. Oh, and do distinguish between "professional"/successful terrorists and disposable (literally!) cannon fodder, who only need to have a pulse and be insane enough to pull the cord (but sane enough to listen to instructions).

It was really hard to be a lone fanatic trying to kill 10s of people back then with a blade.

I agree. That's why poisoning the well was a much better idea.

comment by tukabel · 2017-03-08T21:12:08.018Z · LW(p) · GW(p)

Want to solve society? Kill the fallacy called money universality!

Replies from: MrMind, Viliam, drethelin
comment by MrMind · 2017-03-09T15:33:00.435Z · LW(p) · GW(p)

Some societies tried and failed (Sparta or Soviet Russia, say), or developed a parallel monetary economy. Money being universal is exactly why it exists and is so efficient at coordinating human behavior.

comment by Viliam · 2017-03-09T09:33:43.454Z · LW(p) · GW(p)

Uhm, do you even know what "fallacy" means?

comment by drethelin · 2017-03-14T02:15:14.975Z · LW(p) · GW(p)

This is why we need downvotes.

comment by ChristianKl · 2017-03-07T09:25:00.888Z · LW(p) · GW(p)

Did any of you do typing training to increase your typing speed? If so, how much time investment did you need for a significant improvement?

Replies from: MrMind, moridinamael, Elo
comment by MrMind · 2017-03-08T08:40:08.262Z · LW(p) · GW(p)

I undertook typing training at the beginning of my programming career, many many years ago (I used a program I think was called "Type genius"). It took me not much more than three to four weeks (although it's fair to say that I was obsessed with typing back then), and I've reaped the benefits ever since.

Replies from: ChristianKl
comment by ChristianKl · 2017-03-08T11:13:10.175Z · LW(p) · GW(p)

How much time did you invest per day?

Replies from: MrMind
comment by MrMind · 2017-03-09T15:15:58.464Z · LW(p) · GW(p)

I think it was 45 minutes of exercises per day, if I recall correctly, but I did more.

comment by moridinamael · 2017-03-07T14:46:40.076Z · LW(p) · GW(p)

I changed schools during the period when typing classes are generally taught, which led to me taking two years of typing classes. I could type pretty goddamn fast at the end of this period. That was about twenty years ago. I just tested myself at this site and average about 92 words per minute, which is still pretty fast. I actually feel like ~90 WPM is close to the physical limit, at which point you are making most of your errors because your fingers are hitting keys out of order due to the signals being sent milliseconds apart.

I doubt that I really needed two years of training to get good at typing, but on the other hand, there's something to be said for over-learning.

Replies from: Elo, Elo
comment by Elo · 2017-03-07T21:19:52.732Z · LW(p) · GW(p)

Had a high school friend who typed at 145 wpm casually. He was also a fast reader.

comment by Elo · 2017-03-07T09:33:43.787Z · LW(p) · GW(p)

I have not yet; but I believe the trick (via deliberate practice) is to try to do hard things: look ahead to what needs typing next, then do that while reducing errors. Also track and get feedback regularly.

Replies from: ChristianKl
comment by ChristianKl · 2017-03-07T09:39:16.247Z · LW(p) · GW(p)

The German version of consumer reports recommends https://www.tipp10.com/en/ which is open source software. It seems to provide both deliberate practice and also tracking/feedback.

comment by Daniel_Burfoot · 2017-03-07T05:54:25.082Z · LW(p) · GW(p)

A lesson on the linguistic concept of argument structure, with special reference to observational verbs (see/hear/watch/etc) and also the eccentric verb "help".

comment by dglukhov · 2017-03-06T19:21:29.816Z · LW(p) · GW(p)

I've been noticing a trend lately, perhaps others have some evidence for this.

Perhaps during casual conversation, or perhaps as a means of guiding somebody, maybe an old friend or an inquisitive stranger, I'll mention this site or rationality as a practice in general. Typically, I get what I believe is a cached response most people saw somewhere, which goes something like this: "Rationalists are too high in the clouds to have useful ideas. Logic is impractical."

Perhaps people heard it through casual conversation themselves, but at the end of the day, there's a source out there somewhere that must have blown up like any other meme on the planet. Anybody have a few sources in mind?

Replies from: Lumifer, ChristianKl, MrMind
comment by Lumifer · 2017-03-06T20:04:00.617Z · LW(p) · GW(p)

there's a source out there somewhere

LW.

Example: preoccupation with Newcomb's problem. You think it's of any use in reality?

Replies from: Viliam, MrMind, dglukhov
comment by Viliam · 2017-03-08T15:31:09.888Z · LW(p) · GW(p)

Newcomb's problem is one possible way to explain the power of precommitments, and why "doing what seems most rational at this moment" doesn't have to be the best strategy generally.

(But of course you can also explain precommitments without the Newcomb's problem.)

Sometimes rationality novices are prone to use greedy reasoning and dismiss everything else as "irrational". Newcomb's problem may happen to be the koan that wakes them up.

In its literal meaning (i.e. not merely as a metaphor for something else), as MrMind said, it's useful for people who do something with decision theory, like publish papers on it, or try to build a decision-making machine. Otherwise, it's just a curiosity.

Replies from: Lumifer
comment by Lumifer · 2017-03-08T15:37:01.539Z · LW(p) · GW(p)

Newcomb's problem is one possible way to explain the power of precommitments

You can't precommit to something you have no idea will happen.

In the standard Newcomb's problem the existence of Omega and his two boxes is a surprise to you. You did not train since childhood for the moment of meeting him.

comment by MrMind · 2017-03-08T08:47:57.729Z · LW(p) · GW(p)

You think it's of any use in reality?

It's a counter-example to the then-prevailing theory of decision making, which is a foundational discipline in AI. So yes, it has a very important use in reality.

Replies from: Lumifer
comment by Lumifer · 2017-03-08T15:24:24.117Z · LW(p) · GW(p)

In which sense do you use the word "prevailing"?

I am also not quite sure how is it a counter-example.

Newcomb's problem involves "choice". If you are not going to discard causality (which I'm not willing to do), the only sensible interpretation is that your choice when you are in front of the two boxes doesn't matter (or is predetermined, same thing). The choice that matters is the one you've made in the past when you picked your decision algorithm.

Given this, I come to the conclusion that you should pick your decision algorithm based on some improbable side-effect unknown to you at the time you were making the choice that matters.

Replies from: MrMind
comment by MrMind · 2017-03-09T15:25:59.215Z · LW(p) · GW(p)

If by prevailing we agree to mean "accepted as true by the majority of people who worked on the subject", then it's safe to say that causal decision theory was the prevailing theory, and CDT two-boxes, so it's sub-optimal and Newcomb is a counter-example.

The choice that matters is the one you've made in the past when you picked your decision algorithm.

That is exactly the crux of the matter: decision theory must be faced with the problem of source code stability and self-alignment.

you should pick your decision algorithm based on some improbable side-effect unknown to you at the time you were making the choice that matters.

Well, there's a probabilistic Newcomb problem and it's relevant in strategic decision making, so it's not very improbable. It's like the Prisoner's Dilemma: once you know it, you start to see it everywhere.

Replies from: Lumifer
comment by Lumifer · 2017-03-09T15:58:56.217Z · LW(p) · GW(p)

so it's sub-optimal

I don't see it as sub-optimal (I two-box in case you haven't guessed it already).

decision theory must be faced with the problem of source code stability and self-alignment.

I don't understand what that means. Can you ELI5?

so it's not very improbable.

OK. Throw out the word "improbable". You are still left with

pick your decision algorithm based on some side-effect unknown to you

You haven't made much progress.

comment by dglukhov · 2017-03-06T20:21:52.537Z · LW(p) · GW(p)

Something earlier? That is, who regurgitated that question to you before you regurgitated it to me? Newcomb? Robert Nozick?

Replies from: Lumifer, username2
comment by Lumifer · 2017-03-06T20:59:26.616Z · LW(p) · GW(p)

I think LW was actually the place where I first encountered the Newcomb's problem.

But if you're looking for origins of intellectual masturbation, they go waaaaay back X-)

comment by username2 · 2017-03-07T18:27:33.602Z · LW(p) · GW(p)

I have never encountered things like Newcomb's problem before LW. And after years on this site, I still don't understand their relevance, or why the more AI-x-risk-focused people here obsess over them. Such issues have very little practical value and are extremely far removed from applied rationality.

I agree with Lumifer. It's hard to look at LW and not come away with a bad aftertaste of ivory tower philosophizing in the pejorative sense.

Replies from: dglukhov
comment by dglukhov · 2017-03-07T19:46:41.682Z · LW(p) · GW(p)

Doesn't that bother you?

If the goal of applied rationalists is to improve upon and teach applied rationality to others, wouldn't it behoove us to reframe the way we speak here and think about how our words can be interpreted in more elegant ways?

It doesn't matter how good of an idea somebody has, if they can't communicate it palatably, it won't reliably pass on in time, not to other people, not to the next generation, nobody.

Replies from: Dagon
comment by Dagon · 2017-03-07T19:59:48.593Z · LW(p) · GW(p)

It would be very surprising for an agent or community to have only one goal (at this level of abstraction; if you prefer, say "to have only one term in their utility function"). There are multiple participants here, with somewhat variant interests in rationality and lifehackery.

Personally, I prefer exploring the edge cases and theoretical foundations of correct decision-making BEFORE I commit to heuristics or shortcuts that clearly can't apply universally.

The fact that these explorations aren't necessary or interesting to those who just want to learn some tricks to be stronger (probably, for some definitions) bothers me a bit, but more for them than for me. If you don't see how an understanding of Newcomb's problem lets you better evaluate the power and limits of a decision mechanism, that's fine, but please don't try to stop me discussing it.

Replies from: dglukhov
comment by dglukhov · 2017-03-07T21:28:33.094Z · LW(p) · GW(p)

The fact that these explorations aren't necessary or interesting to those who just want to learn some tricks to be stronger (probably, for some definitions) bothers me a bit, but more for them than for me. If you don't see how an understanding of Newcomb's problem lets you better evaluate the power and limits of a decision mechanism, that's fine, but please don't try to stop me discussing it.

I wouldn't ask anybody to stop discussing Newcomb problems, my response was solely directed at the rhetoric behind Newcomb discussion, not the merits (or lack thereof) of discussing it.

I'm not as concerned about what is being discussed as how. When inferential distances and cognitive biases get in the way of understanding concepts, much less make them palatable to read about, I'd hope people would spend more time optimizing the topic to appear more transparent. When I hear somebody claiming to have had a "bad aftertaste" from coming to this site, I can't help but think this is partially a failure of the site. Then again, perhaps my standards would be too high for the discussion board...

comment by ChristianKl · 2017-03-07T08:29:50.813Z · LW(p) · GW(p)

Being high-class means that you can afford to spend your time on issues that are impractical. As a result, throughout history high-class people have signaled that they are high-class by spending their time on impractical matters.

I'll mention this site or rationality as a practice in general. Typically, I get what I believe is a cached response most people saw somewhere that follows along something like this, "Rationalists are too high in the clouds to have useful ideas. Logic is impractical."

Applied rationality doesn't have that much to do with using logic. It doesn't violate logic, but a lot of what we talk about concerns different heuristics. It might be worthwhile to present the idea of applied rationality differently.

Replies from: dglukhov
comment by dglukhov · 2017-03-07T15:22:03.922Z · LW(p) · GW(p)

Applied rationality doesn't have that much to do with using logic. It doesn't violate logic but a lot of what we talk about, is about different heuristics. It might be worthwhile to present the idea of applied rationality differently.

This seems like an issue of conflating logic with applied rationality, then. Chances are I made this mistake in writing my post. I'll be sure to check for conflation in the rhetoric I use; certain words carry a connotation that signals to the listener a need to reply with a cached response.

comment by MrMind · 2017-03-07T07:53:51.128Z · LW(p) · GW(p)

I'm reminded of a tale retold by Plato, about the famous philosopher Thales, who was so intent on looking at the stars that he fell into a well. This 'meme' is actually as ancient as civilization itself (Thales is indeed pre-Socratic; that is, this anecdote predates the very idea of rationality).

Replies from: gjm
comment by gjm · 2017-03-07T13:05:56.999Z · LW(p) · GW(p)

Other early tellers of tales about Thales point in quite a different direction: I think the single best-known story about him is of how one year he worked out that it was going to be a good year for olive oil, hired all the olive presses, and made an absolute killing when the autumn harvest came along. (Aristotle's telling, at least, is explicitly aimed at indicating that philosophers are perfectly capable of turning their talents to practical ends, and that if they don't it's by choice.)