Open thread, September 8-14, 2014

post by polymathwannabe · 2014-09-08T12:31:36.779Z · LW · GW · Legacy · 296 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


comment by Metus · 2014-09-09T20:05:25.788Z · LW(p) · GW(p)

I can't count the number of times I didn't do something that would have been beneficial because my social circle thought it would be weird or stupid. Just shows how important it is to choose the people around you carefully.

Replies from: None, James_Miller, gwern, Benito, Stefan_Schubert
comment by [deleted] · 2014-09-10T13:57:50.529Z · LW(p) · GW(p)

Someone -- maybe on LW? -- said that their strategy was to choose their friends carefully enough that they didn't have to resist peer pressure.

Replies from: Lumifer, michaelkeenan
comment by Lumifer · 2014-09-10T17:16:55.099Z · LW(p) · GW(p)

That has other dangers -- e.g. living in an echo chamber or facing the peer pressure to not change.

Replies from: None
comment by [deleted] · 2014-09-10T19:35:28.406Z · LW(p) · GW(p)

Yes, you have to be very careful. (And live in a place where the number of such people is large enough that it's even viable as a strategy, and ignore/isolate yourself from the wider culture or still maintain resistance to it, and so on, which makes it inaccessible to a large number of people, but it seems close to ideal in the rare circumstances where it's possible.)

Replies from: Lumifer
comment by Lumifer · 2014-09-10T20:36:04.321Z · LW(p) · GW(p)

I don't know if "careful" is the right word -- it's more an issue of finding a good balance, and the optimal point isn't necessarily obvious. On the one hand, you should like your friends and not have them annoy you or push you in directions you don't want to go. On the other hand, being surrounded by the best clones of yourself that you could find doesn't sound too appealing.

It's a bit like an ecosystem -- you want a healthy amount of diversity and not monoculture, but at the same time want to avoid what will poison you or maybe just eat you X-)

comment by michaelkeenan · 2014-09-11T23:35:58.978Z · LW(p) · GW(p)

Paul Graham wrote about that in A Student's Guide To Startups:

For nearly everyone, the opinion of one's peers is the most powerful motivator of all—more powerful even than the nominal goal of most startup founders, getting rich...So the best you can do is consider this force like a wind, and set up your boat accordingly. If you know your peers are going to push you in some direction, choose good peers, and position yourself so they push you in a direction you like.

comment by James_Miller · 2014-09-11T16:58:10.652Z · LW(p) · GW(p)

I've always been a huge non-conformist, caring relatively little what others think. I now believe that I went too far and my advice to my younger self would be to try and fit in more.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T17:18:17.107Z · LW(p) · GW(p)

I've always been a huge non-conformist

You have a couple of graduate degrees and are a professor at a liberal arts college in the Northeast... People I would describe as "huge non-conformists" would probably be tailed by campus security if they ever showed up in the area X-D

Replies from: James_Miller
comment by James_Miller · 2014-09-11T17:55:52.399Z · LW(p) · GW(p)

See this.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T18:07:16.031Z · LW(p) · GW(p)

Oh, I know you're a conservative in academia and had tenure troubles because of that. But that makes you a conservative in a very liberal environment, not a non-conformist.

Of course you can call yourself anything you want; the label is sufficiently fuzzy and could be defined in many ways. Still, from my perspective you're now a part of the establishment -- Smith did grant you tenure, even if kicking and screaming.

I am not passing judgement on you; it just surprised me that what you mean by a "huge non-conformist" is clearly very different from what I mean by a "huge non-conformist".

Replies from: James_Miller
comment by James_Miller · 2014-09-11T18:17:15.096Z · LW(p) · GW(p)

It's also stuff such as I don't like sports, music, fashion, or small talk, and in high school and college I made zero effort to pay attention to them, and it cost me socially. I realize now I should have at least pretended to like them to have had a better social life.

Replies from: Lumifer, cameroncowan
comment by Lumifer · 2014-09-11T18:26:04.975Z · LW(p) · GW(p)

It's also stuff such as I don't like sports, music, fashion, or small talk

That makes you a fully-conforming geek, as you undoubtedly know. Welcome to the club :-)

comment by cameroncowan · 2014-09-11T19:31:00.681Z · LW(p) · GW(p)

I figured out when I was about 15 years old that I had to keep up on things I didn't care about to earn points socially. It helped me a great deal and powers what I do as a writer and talk show presenter.

comment by gwern · 2014-09-09T20:57:50.726Z · LW(p) · GW(p)

Such as?

Replies from: Metus
comment by Metus · 2014-09-09T21:11:09.123Z · LW(p) · GW(p)

In a great example of serendipity, talking to myself is a case in point. I was observed doing it and people thought it was weird, so I stopped.

When I was younger, some adults told me that "you only understand something when you can teach it to someone", which people in my circle disputed, as they were the kind of people that like to think of themselves as smart.

I didn't go to a couple of parties to socialise because people there were drinking copious amounts of alcohol, and my circle had a stigma against getting drunk and stupid. While the not drinking certainly was a good idea, the not socialising was not.

As a child I was extremely interested in everything scientific. Then in school none of the cooler kids were and neither were the friends I actually had, so I started playing video games. Thankfully I later found people interested in scholarship so I started doing that again.

(I am starting to realise most of these are from when I was in school. Might be because I matured, or because I have more perspective through the distance.)

Not that peer pressure can't have good effects; it is a tool like any other.

comment by Ben Pace (Benito) · 2014-09-12T06:09:02.040Z · LW(p) · GW(p)

My go-to catchphrase when I notice this sort of situation is (spoken sarcastically):

"Why be happy when you can be normal?"

comment by Stefan_Schubert · 2014-09-11T21:04:10.329Z · LW(p) · GW(p)

Though that certainly has happened to me as well, it strikes me that the opposite has happened more often: I've done things which turned out to be beneficial, and avoided doing things that would have been bad, because of the opinions of my social circles.

Lots of the time, things that are seen as weird and stupid by the majority actually are weird and stupid.

comment by NancyLebovitz · 2014-09-08T16:25:37.453Z · LW(p) · GW(p)

If people were a great deal better at coordination, would they refuse to use news sources which are primarily supported by advertising?

Replies from: Vulture, ChristianKl, lmm
comment by Vulture · 2014-09-08T16:29:38.848Z · LW(p) · GW(p)

That sounds like a good way to end up with more paywalls.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-08T16:58:04.050Z · LW(p) · GW(p)

There would definitely be more paywalls. The question is whether it would be a net loss.

Would the quality of information be better? Advertising gets paid for one way or another -- would no-advertising news (possibly even no-advertising media in general) be a net financial loss for consumers?

Replies from: Lumifer, cameroncowan
comment by Lumifer · 2014-09-08T17:16:52.080Z · LW(p) · GW(p)

Look at the history of cable TV. When it appeared it was also promoted as "no advertising, better shows".

Replies from: None, Eugene
comment by [deleted] · 2014-09-08T21:06:04.391Z · LW(p) · GW(p)

I would argue for the existence of a treadmill effect on these things.

comment by Eugene · 2014-09-08T22:12:29.440Z · LW(p) · GW(p)

Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall.

Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and it's even weaker and more inaccurate for pay cable). This leads to generally conservative decisions regarding programming. The internet, on the other hand, is filled with as much data as you wish to pull out regarding the people who use your site, on both a broad and granular level. This allows freedom to take more extreme changes of direction, because there's a feeling that the risk is lower. So the two groups really aren't on the same playing field, and their motivations for improving/shifting content potentially come from different directions.

comment by cameroncowan · 2014-09-11T19:35:03.516Z · LW(p) · GW(p)

Yes, I think so, because journalism is time-consuming and expensive. You also have to have the right people on the right stories so that you get the best expression of what happened. Then you can back that up with commentary and opinion, which in this day and age tends to end up all at once. I think the better option is to believe in people and their unique perspective. If you follow a writer or a journalist and you like their work, that is a better system than believing in an institution, which is more faceless. If I am covering a story on an oil spill in the gulf on The Cameron Cowan Show and then I take an ad from BP, my viewers are going to wonder if I am going to continue to cover the spill with such tenacity, and they will flee from me if I don't. People can vote with their feet and dollars with individuals far more than companies. Ergo, I would not take that ad, to keep my loyal watchers, and would seek ads somewhere else. This logic is not used at any news outlet right now because bills have to be paid and there is far less backlash.

comment by ChristianKl · 2014-09-08T23:18:29.301Z · LW(p) · GW(p)

I don't think "refusing" news sources is helpful. Even a bad newspaper gives some perspective on some topics that you won't find elsewhere.

The whole idea of "news sources" is problematic. It assumes a certain 20th century model of learning about the world. If you want to get really informed about a topic, it is often necessary to read primary sources. I don't get scientific news from mainstream media. I either read the papers, discussions on LW, or blogs by scientists.

When I see a claim that I find interesting and where I don't know whether it's true, I head over to skeptic.stackexchange and open a question. The website is no newspaper, but it also serves the purpose of staying in contact with world events.

Advertising is just one bias among many. If I watch a news video on German public television that's paid for by taxpayer money, the German public television network pays a production company for that video. Some of those production companies also produce PR for paying customers.

A lot of newspaper articles these days are written by freelance journalists who aren't paid very well and can be hired for other tasks. So even if the newspaper doesn't make its money by serving corporate interests, the individual journalist might still serve them.

Wikipedia illustrates that we are actually quite good at coordination. Much better than anyone would have expected 20 years ago. It just doesn't look like what we would have expected. Cultural development isn't just more of the same.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-09T16:18:20.042Z · LW(p) · GW(p)

I don't think "refusing" news sources is helpful. Even a bad newspaper gives some perspective on some topics that you won't find elsewhere.

But reading it takes time that one could spend on something else.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-09T16:49:44.298Z · LW(p) · GW(p)

If you make a utility calculation, then the prime concern is whether it makes sense to learn about a topic in the first place. If you do decide to inform yourself about a topic, then you have to choose among the sources that are available. If you really care about an issue, it often makes sense to read multiple perspectives.

It is quite easy to read government-funded Al Jazeera, a commercial newspaper from a publicly traded company that makes money via advertising, and network-driven community websites like Stackexchange or Wikipedia.

In a pluralistic society all those sources of information can exist alongside each other. If you don't like corporatist news sources, there are a lot of alternatives these days.

comment by lmm · 2014-09-15T12:12:33.274Z · LW(p) · GW(p)

If people were a great deal better at coordination I suspect advertising wouldn't exist at all.

comment by sediment · 2014-09-12T19:53:01.191Z · LW(p) · GW(p)

I was recently heartened to hear a very good discussion of effective altruism on BBC Radio 4's statistics programme, More or Less, in response to the "Ice Bucket Challenge". They speak to Neil Bowerman of the Centre for Effective Altruism and Elie Hassenfeld from GiveWell.

They even briefly raise the possibility that large drives of charitable donations to ineffective causes could be net negative as it's possible that people have a roughly fixed charity budget, which such drives would deplete. They admit there's not much hard evidence for such a claim, but to even hear such an unsentimental, rational view raised in the mainstream media is very bracing.

Available here: http://www.bbc.co.uk/podcasts/series/moreorless (click the link to "WS To Ice Or Not To Ice"), or directly here: http://downloads.bbc.co.uk/podcasts/radio4/moreorless/moreorless_20140908-1200a.mp3

comment by NancyLebovitz · 2014-09-10T18:36:57.239Z · LW(p) · GW(p)

By Brad Hicks

A thought that I've been carrying around in my head for a while, that I have no idea what to do with:

It seems to me that almost everybody, in relationships, wants the "I Win" button. For those of you who didn't play City of Heroes, it was a developer-team joke that they shared with the public: push one button, and get your way. It became player and developer jargon for times when people wanted to argue that their preferred way of winning wasn't unfair to others. So what's the "I Win" button for relationships?

People who are really good at non-verbal communication want all relationship boundaries, rules, and expressions of wants and needs to be based on non-verbal communication; they want their partner to "just know." People who are really good at written communication want those things to be handled via written rules and relationship contracts and user manuals. People who are really good at verbal, conversational communication want those things handled by talking them out. And all three of those groups think that the secret to happy relationships is for other people to learn to communicate their way.

I have no idea what to do with this insight other than to say, "Well, of course they do." Because if I'm really good at written communication and my partners aren't, and I can "persuade" my partners to agree that all communication about needs, wants, boundaries, and rules need to be in writing, I'm going to win every debate and argument. Who wouldn't want that?

Replies from: Nornagest, cameroncowan, blacktrance
comment by Nornagest · 2014-09-10T19:47:46.385Z · LW(p) · GW(p)

I think I'd be more inclined to frame this sort of thing as typical mind fallacy. Modeling it in terms of an I Win button seems to violate Hanlon's Razor: we don't need an adversarial model when plain old ignorance will suffice, and I don't think preferred interaction style is a matter of conscious choice for most people.

Replies from: polymathwannabe, NancyLebovitz
comment by polymathwannabe · 2014-09-10T20:01:35.050Z · LW(p) · GW(p)

Alternatively, the situation can be described in terms of tell vs. guess culture.

comment by NancyLebovitz · 2014-09-10T21:58:31.884Z · LW(p) · GW(p)

I'd split the difference -- I believe the typical mind fallacy can shade into believing that other sorts of minds aren't worth respecting.

comment by cameroncowan · 2014-09-11T19:52:20.047Z · LW(p) · GW(p)

I think people want that because they don't know how to communicate effectively in any other way. You also have to consider why people choose to communicate in the way that they do. People who prefer written communication (as I do) may be passive-aggressive or afraid of verbal communication. Those who want their partner to "just know" will, I think, have the least amount of success, because of their inability to use a method of agreeable communication to express their needs and desires. I am somewhat aware of this because I do expect people to have certain ideas and execute them, and I have learned that I have to speak up about what I think should be done, because they aren't "just going to figure it out" -- most people don't think like I do.

As for the "I win" button, I don't think that's what people want. People want their needs met in a pleasurable and dynamic way. Is that "winning"?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-12T14:10:46.969Z · LW(p) · GW(p)

It seems reasonable to me that people are afraid of being forced into whatever modes of communication they think they're bad at-- it's not a specific flaw of people who prefer verbal/written communication.

I wonder if the people who expect their partners to "just know" are confusing successful non-verbal communication with telepathy.

Replies from: Viliam_Bur, cameroncowan
comment by Viliam_Bur · 2014-09-13T19:54:55.811Z · LW(p) · GW(p)

I wonder if the people who expect their partners to "just know" are confusing successful non-verbal communication with telepathy.

I would guess typical mind fallacy, or illusion of transparency. Either they believe their signals are obvious, or they believe that any (sane) person would make the same guess in that specific situation. Or a combination thereof, i.e. that any (sane) person would only see two or three possible choices in that specific situation, and the signals are sufficient to differentiate among them.

Another interesting question would be whether these people are able to see the situation from both sides. Like, they can be angry at their partner for not reading their mind successfully, but do they believe they read the partner's mind successfully? Maybe they don't even realize that there is the other side, too. Or maybe they blame the partner for communication failures in both directions. ("They should know what I think about." "They shouldn't think such crazy things.") On the other hand, maybe the partner really is predictable. Or the partner communicates their thoughts explicitly, so in one direction the communication is clear, and the person simply does not realize that the clearness of communication is caused by the explicitness. (Or maybe they don't believe in symmetry. Maybe they believe that being explicit is e.g. gender-specific, so it's okay that the partner is explicit, and it's okay that they aren't. Or perhaps that you should be explicit about some things, but not about other things.)

comment by cameroncowan · 2014-09-12T16:58:18.990Z · LW(p) · GW(p)

They may be, I think successful non-verbal communication takes time and learning. There can be many difficulties along the way to success.

comment by blacktrance · 2014-09-11T09:14:07.444Z · LW(p) · GW(p)

This model assumes that relationships are adversarial, which need not be the case, and isn't the case in a good relationship.

Replies from: VAuroch
comment by VAuroch · 2014-09-11T21:40:05.205Z · LW(p) · GW(p)

No, the model applies even if the relationship isn't adversarial. As long as you have different priorities and are not perfect at communicating, it applies.

comment by James_Miller · 2014-09-08T15:22:34.641Z · LW(p) · GW(p)

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path? Assuming that the speed of light really is the maximum, our interstellar radio messages would outpace any paperclip maximizer. Obviously any such message would complicate future alien contact events as the aliens would worry that our ambassador was just an agent for a paperclipper. The act of warning others would be a good way to self-signal the dangers of AI.

Replies from: gjm, Lumifer, solipsist
comment by gjm · 2014-09-08T16:15:38.118Z · LW(p) · GW(p)

I'd have thought any extraterrestrial civilization capable of doing something useful with the information wouldn't need the explicit warning.

Replies from: James_Miller
comment by James_Miller · 2014-09-08T17:13:49.363Z · LW(p) · GW(p)

This depends on the solution to the Fermi paradox. An advanced civilization might have decided to not build defenses against a paperclip maximizer because it figured no other civilization would be stupid/evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its AI program. If a paperclip maximizer beats everything else an advanced civilization might respond to the warning by moving away from us as fast as possible taking advantage of the expansion of the universe to hopefully get in a different Hubble volume from us.

comment by Lumifer · 2014-09-08T15:43:21.349Z · LW(p) · GW(p)

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path?

One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first.

And when the two (or more) collide, it would make a nice SF story :-)

Replies from: solipsist
comment by solipsist · 2014-09-08T22:39:35.168Z · LW(p) · GW(p)

This wouldn't be a horrible outcome, because the two civilizations' light-cones would never fully intersect. Neither civilization would fully destroy the other.

Replies from: James_Miller, Lumifer
comment by James_Miller · 2014-09-08T22:57:51.512Z · LW(p) · GW(p)

Are you crazy! Think of all the potential paperclips that wouldn't come into being!!

comment by Lumifer · 2014-09-09T18:40:22.002Z · LW(p) · GW(p)

The light cones might not fully intersect, but humans do not expand at close to the speed of light. It's enough to be able to destroy the populated planets.

comment by solipsist · 2014-09-08T23:16:32.084Z · LW(p) · GW(p)

I love this idea! A few thoughts:

  1. What could the alien civilizations do? Suppose SETI decoded "Hi from the Andromeda Galaxy! BTW, nanobots might consume your planet in 23 years, so consider fleeing for your lives." Is there anything humans could do?

  2. The costs might be high. Suppose our message saves an alien civilization one thousand light-years away, but delays a positive singularity by three days. By the time our colonizers reach the alien planet, the opportunity cost would be a three-light-day deep shell of a thousand light-year sphere. Most of the volume of a sphere is close to the surface, so this cost is enormous. Giving the aliens an escape ark when we colonize their planet would be quintillions of times less expensive. Of course, a paperclipper would do no such thing.

  3. It may be presumptuous to warn about AI. Perhaps the correct message to say is something like "If you think of a clever experiment to measure dark energy density, don't do it."

Replies from: James_Miller, Luke_A_Somers
comment by James_Miller · 2014-09-09T00:48:07.943Z · LW(p) · GW(p)
  1. It depends on your stage of development. You might build a defense, flee at close to the speed of light and take advantage of the universe's expansion to get into a separate Hubble volume from mankind, accelerate your AI program, or prepare for the possibility of annihilation.
  2. Good point, and the resources we put into signaling could instead be used to research friendly AI.
  3. The warning should be honest and give our best estimates.
comment by Luke_A_Somers · 2014-09-09T18:13:31.281Z · LW(p) · GW(p)
  1. Quite.

  2. The outer three days of a 1000 Ly sphere account for 0.0025% of its volume.
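
A minimal Python sketch of that arithmetic, using the 1000-light-year radius and three-light-day shell thickness stipulated above:

```python
# Fraction of a sphere's volume contained in a thin outer shell.
R = 1000.0          # sphere radius, in light-years
t = 3.0 / 365.25    # shell thickness: three light-days, in light-years

shell_fraction = 1 - ((R - t) / R) ** 3
print(f"{shell_fraction:.6%}")  # ~0.002464%, i.e. roughly 0.0025%
```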

comment by JoshuaFox · 2014-09-08T14:45:38.940Z · LW(p) · GW(p)

Can someone point me to estimates given by Luke Muehlhauser and others as to MIRI's chances for success in its quest to ensure FAI? I recall some values (of course these were subjective probability estimates with large error bars) in some lesswrong.com post.

Replies from: peter_hurford
comment by pshc · 2014-09-12T04:57:26.728Z · LW(p) · GW(p)

Would there be any interest in an iPhone app for LessWrong? I was thinking it might be a fun side project for learning Swift, and I didn't see any search results on the App Store.

Replies from: Ixiel, ChristianKl, Richard_Kennaway, lmm
comment by Ixiel · 2014-09-12T11:44:04.515Z · LW(p) · GW(p)

I bet some folks would love you forever if you gave them reply notification

comment by ChristianKl · 2014-09-12T13:23:56.893Z · LW(p) · GW(p)

I think a predictionbook app or an app version of the credence game would be more useful than an app for LessWrong.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-12T14:14:59.112Z · LW(p) · GW(p)

an app version of the credence game

There already is one for Android.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-12T18:27:22.163Z · LW(p) · GW(p)

I wasn't aware of the Android app.

On the other hand, its existence doesn't mean that a new attempt at the same problem is worthless. I think it's very valuable to have multiple people try to solve the problem.

To me it seems like a much more interesting project than having another go at writing an app to parse an online forum. There are few people thinking in depth about designing apps to teach people to be calibrated.

The fact that you have a smartphone also allows additional questions:

You can ask calibration questions such as:

  • Did John or Joe send you more emails in the last year?

  • Is the air pressure more or less than X?

  • Is the temperature of the smart phone battery more or less than X?

  • Does this arrow point more North or more South?

  • Is the distance between your work location and where you are at the moment more or less than X?

  • Is the distance between your home location and where you are at the moment more or less than X?

  • Is the distance between where John lives and where you are at the moment more or less than X?

  • What was the average speed at which you were traveling in the last minute (if you sit in public transportation)?

  • Is the average pitch of the background noise over the last minute more or less than X?

  • Is the longest email that you received in the past week more or less than X characters long?

  • What's the chance that you will get a call today?

  • Is the average of the beeminder value that you tracked over the last week (month) more or less than X?

All those questions are more interesting than whether postmaster general X served before or after postmaster general Y, or the boiling temperatures of various metals. Building an app around the issue might be more complicated than simply providing a new interface for LessWrong, but the payoff for getting credence training right is also much higher.

Even if you simply focus on building a beeminder-history credence game, that might not be too complicated but would be really useful. To me it feels like a waste to spend valuable development resources on building a LessWrong app when there are much more valuable projects.

Replies from: pshc, Viliam_Bur
comment by pshc · 2014-09-12T19:46:31.894Z · LW(p) · GW(p)

Just wanted to say: thanks for the ideas!

comment by Viliam_Bur · 2014-09-13T19:38:46.146Z · LW(p) · GW(p)

A personal prediction book?

Simple version: You provide your own predictions, and state your credence. Later you say whether you were right or wrong. The app displays statistics of your calibration.

This is simple in essence, but there will be many design decisions, and many little details that can make the UI better. For example, I guess you should choose the credence from, say, 50%, 60%, 70%, 80%, 90%, 95%, and 99%, instead of typing your own value, because this way it will be easier to make statistics. Also, choosing one option is easier than typing two digits, although most of the work will be typing the questions. It should be possible to edit the text later (noticing a typo too late would drive me crazy). The app should also remember the date each question was entered, so it can give you statistics like: how well calibrated you are in the last 30 days (compared with the previous 30 days).
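
A minimal sketch of the statistics step, assuming the fixed credence buckets suggested above (the record format and function name are made up for illustration):

```python
from collections import defaultdict

BUCKETS = (50, 60, 70, 80, 90, 95, 99)  # allowed credence levels, in percent

def calibration_report(records):
    """records: iterable of (credence_percent, was_correct) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for credence, correct in records:
        totals[credence] += 1
        hits[credence] += bool(correct)
    for b in BUCKETS:
        if totals[b]:
            observed = 100 * hits[b] / totals[b]
            print(f"said {b}%: right {observed:.0f}% of the time ({totals[b]} predictions)")

# Example: three 70% predictions (two right), two 90% predictions (both right).
calibration_report([(70, True), (70, False), (70, True), (90, True), (90, True)])
```

Filtering the records by date before calling this would give the last-30-days comparison.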

Maybe the data should be stored online, so you can edit them both from the mobile and from the PC. Although, I would prefer if the application works offline, too. These are two contradictory demands, so you have to find a solution. Perhaps each user should choose in settings whether their data should be kept in the mobile or on the web? And perhaps allow changing this setting later, and the data will be copied? Or maybe even keep only the recent data in the mobile, and the full archive online? There are many decisions here.

A nice function would be to save some work typing repeated questions. For example, if I want to make a bet every morning ("will I exercise today?"), there should be an option to repeat one of the recent questions with the current date. (By the way, if you always display the date along with the question, you can write things like "today" or "this month" without having to always write the specific date.)

A more advanced version (don't do this as the first version; remember the planning fallacy!) would allow some kind of "multiplayer". You could add friends, and offer to share some bets with your friends. Anyone can create a question and offer it to other people; they can accept (by writing their credence) or reject it. Then there would be a summary comparing the members of the group.

Again, here are many design choices and UI improvements. How specifically will you add friends? Will you also have groups of friends, so you share some questions only with some groups? Who can answer the multiplayer question: the person who wrote it, anyone, or the person who wrote it chooses one of the former options?

Integrate the whole thing with Facebook, especially the multiplayer version? That could make the app wildly popular! (But I heard that the Facebook API is less than friendly.)

comment by Richard_Kennaway · 2014-09-12T05:45:16.970Z · LW(p) · GW(p)

What do you see it doing that the web site doesn't?

Replies from: pshc
comment by pshc · 2014-09-12T08:50:23.840Z · LW(p) · GW(p)

Imagining:

  • Easy to read layout (I find myself doing a lot of zooming and panning in Mobile Safari)
  • Download articles and sequences for offline reading
  • Comments that are easy to read, vote on, and reply to on mobile, similar to e.g. popular reddit apps
  • Free as in beer and speech

Welcoming other features that would draw users, too. I have to wonder if there are open source Reddit clients I could adapt, given the forked codebase...

Replies from: philh
comment by philh · 2014-09-12T12:39:38.716Z · LW(p) · GW(p)

I expect that forking a reddit client is the way to go for UI (if you don't have any in mind, I think AlienBlue and Reddit is Fun are probably worth looking into for this).

For the backend, reddit exposes itself through json, which LW doesn't seem to; e.g. http://www.reddit.com/user/philh/.json works, but http://lesswrong.com/user/philh/.json (and http://lesswrong.com/user/philh/overview/.json ) don't. I expect clients to mostly use this, so you'll need to rewrite those portions of the code.
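
For what it's worth, a minimal sketch of how a client consumes that endpoint (assuming the Python `requests` library; reddit rejects requests without a descriptive User-Agent, so the one below is a placeholder):

```python
import requests

# Fetch a user's recent activity as a reddit "Listing" JSON document.
resp = requests.get(
    "http://www.reddit.com/user/philh/.json",
    headers={"User-Agent": "lw-mobile-experiment/0.1"},  # placeholder UA string
)
resp.raise_for_status()
listing = resp.json()

# Each child is one comment or post, with its type tag in "kind".
for child in listing["data"]["children"]:
    print(child["kind"], child["data"].get("id"))
```

An LW client would need a scraping layer that produces the same shape of data.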

Replies from: pshc
comment by pshc · 2014-09-12T19:31:35.457Z · LW(p) · GW(p)

Turns out AlienBlue did release their original version as open source, but the code is four years out of date! Hmmm.

Yeah, I would probably end up scraping the HTML. I filed a bug about .json being broken two years ago, but even if it were fixed, it seems that LW has quite a few customizations that the JSON output likely has not caught up to...

comment by lmm · 2014-09-15T12:15:32.698Z · LW(p) · GW(p)

I would expect most LWers to prefer Android. Certainly I do.

Replies from: gjm
comment by gjm · 2014-09-15T12:29:14.154Z · LW(p) · GW(p)

Interesting. I have no particular expectation about LWers' preference. I'm an Android guy too. Let's have a poll. What do you use for your mobile devices, if any?

For smartphones:

[pollid:768]

For tablets:

[pollid:769]

[EDITED to add: If the answer to either question is "Multiple different OSes", please select whichever you think is better. Or flip a coin or something.]

comment by Adam Zerner (adamzerner) · 2014-09-11T16:55:20.092Z · LW(p) · GW(p)

Does anyone have any good ideas about how to be productive while commuting? I'll be starting a program soon where I'll be spending about 2 hours a day commuting, and don't want these hours to go to waste. Note: I have interests similar to a typical LessWrong reader, and am particularly interested in startups.

My brainstorming:

  • Audio books and podcasts. This sounds like the most promising thing. However, the things I want to learn about are the hard sciences and those require pictures and diagrams to explain (you can't learn biology or math with an audiobook). I'm also in the process of learning web development and design, but these things also seem too visual to work as an audiobook.

  • Economics audiobooks might work, idk. I could also listen to books about startups/business, but I'm at the point where I know enough about these things that diminishing returns have kicked in.

  • I've read a good amount about psychology already, and feel like diminishing returns have kicked in. Although psychology seems like it'd work well with an audiobook.

  • Perhaps sci-fi audiobooks would be good? Would I learn from these or would it just be entertaining? Any suggestions? (I read 1984, Ender's Game, and Brave New World. I liked them, but didn't learn too much from them.)

  • I read HPMOR and loved it. Anything similar to that?

  • Other than audiobooks, I could spend the time brainstorming. Startup ideas, thought experiments, stuff like that.

Replies from: Torello, Lumifer, CWG
comment by Torello · 2014-09-11T22:06:30.523Z · LW(p) · GW(p)

Not really what you're looking for, but I feel obligated:

Move or get a different job. Reduce your commute by 1 or 1.5 hours. This is the best way to increase the productivity of your commute.

I read (can't remember the source) that commuting was the worst part of people's day (they were unhappy, or experienced the lowest levels of their self-assessed subjective well-being).

Replies from: adamzerner, Douglas_Knight
comment by Adam Zerner (adamzerner) · 2014-09-11T22:47:10.408Z · LW(p) · GW(p)

I'm doing a coding bootcamp (Fullstack Academy). It's in NYC and I live with my parents in Long Island now. It's only 13 weeks so it's not that bad, especially if I could make it productive. If it was long term I'd probably agree with you though.

comment by Douglas_Knight · 2014-09-12T06:23:14.567Z · LW(p) · GW(p)

Commuting by car is terrible. Commuting by foot is great. There is not a lot of data on commuting by subway, but it does not look good.

Replies from: eeuuah
comment by eeuuah · 2014-09-21T22:27:30.424Z · LW(p) · GW(p)

Long distance foot commuting is still pretty bad. In my experience I don't hate the world as much, but burning two plus hours a day commuting sucks no matter what. The subway is definitely much better than car commuting, but not as nice as biking or walking. I think subway commuting is vastly improved by good distractions available through a smartphone, though.

comment by Lumifer · 2014-09-11T17:39:38.823Z · LW(p) · GW(p)

Does anyone have any good ideas about how to be productive while commuting?

Driving or public transportation?

If driving, don't forget that you have a limited amount of attention available and being "productive" as a driver involves some trade-offs X-)

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-11T17:47:10.121Z · LW(p) · GW(p)

I should have mentioned that, it's all public transportation (train + subway). If I get a seat on the train and it's not too crowded I could use my laptop to code or to read, but it's difficult to get a seat.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T17:51:58.059Z · LW(p) · GW(p)

You can read easily enough if you have a tablet or an e-reader.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-11T18:11:17.192Z · LW(p) · GW(p)

It'd be really tough on a NYC subway. On the train, I could read if I get a seat (because I could use my laptop). A tablet would help for the train when I don't have a seat, but I don't really think it's worth it for that one case.

Replies from: palladias
comment by palladias · 2014-09-12T13:46:27.536Z · LW(p) · GW(p)

I read my kindle pretty easily on the NYC subway by keeping it near my face/within my personal bubble. I've also read paperbacks there, turning pages one handed in an awk way, but I recommend kindle.

It's also very easy to read while walking with a kindle!

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T14:20:22.462Z · LW(p) · GW(p)

Hm, I think you're right.

  1. I came to my original conclusion too quickly and without thinking about it enough.
  2. That sounds doable.
  3. I've seen people read on the subway before (although it seems rare enough that it allowed me to draw my initial conclusion).

Replies from: satt
comment by satt · 2014-09-13T16:42:47.956Z · LW(p) · GW(p)

A potentially cheap, easy way to get more information about the ease of using an e-reader: get someone you know to lend or give you an old one.

comment by CWG · 2014-09-12T04:36:17.932Z · LW(p) · GW(p)

Given the limitations (that you describe in other replies) I think you've got a good list.

Regarding podcasts, this could be a great time to experiment with new ones & decide which you want to listen to longer term.

Perhaps there are some short activities of value to you, such as Anki (assuming you have a smartphone), mentally reviewing your memory palace, or mindfulness exercises. Mindfulness exercises on public transport may seem a little odd, but the distractions may make it more effective as exercise - just be patient with yourself.

comment by DataPacRat · 2014-09-10T20:01:48.086Z · LW(p) · GW(p)

Can Bayesian inference be applied to quantum immortality?

I'm writing an odd science fiction story in which I'd like to express an idea, but I'd like to get the details correct. Another redditor suggested that I might find someone here with enough of an understanding of Bayesian theory, the Multiple Worlds interpretation of quantum mechanics, and quantum suicide that I might be able to get some feedback in time:

Assuming the Multiple Worlds Interpretation of quantum theory is true, then buying lottery tickets can be looked at in an interesting way: it can be viewed as an individual funneling money from the timelines where the buyer loses to the timelines where the buyer wins. While there is a great degree of 'friction' in this funneling (if a lottery has an average 45% payout, then 55% of the money is lost to the "friction"), it is the method that has, perhaps, the lowest barrier to entry: it only costs as much as a lottery ticket, and doesn't require significant education into abstruse financial instruments.

While, on the whole, buying a lottery ticket may have a negative expected utility (due to that "friction"), there is at least one set of circumstances where making the purchase is warranted: if a disaster is forthcoming, which requires a certain minimal amount of wealth to survive. As a simplification, if the only future timelines in which you continue to live are ones in which you've won the lottery, then buying tickets increases the portion of timelines in which you live. (Another redditor phrased it thusly: Hypothetically, let's say you have special knowledge that at 5pm next Wednesday the evil future government is going to deactivate the cortical implants of the poorest 80% of the population, killing them all swiftly and painlessly. In that circumstance, there would be positive expected utility, because you wouldn't be alive if you lost.)

Which brings us to the final bit: If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive. That is, according to the idea of quantum immortality, if you never experience a timeline in which you've permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.

However, I'm not /quite/ sure that I've got all my inferential ducks lined up in a row there. So if anyone reading this could point out whether anything like the idea I'm trying to describe could be considered reasonably accurate, then I'd appreciate the heads-up. (I'm reasonably confident that it would be trivial to point out some error in the above paragraphs; you could say that I'm trying to figure out the details of the steelmanned version.)

(My original formulation of the question was posted to https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ .)

Replies from: gjm, NancyLebovitz
comment by gjm · 2014-09-10T23:53:47.632Z · LW(p) · GW(p)

Just out of curiosity: How (if at all) is this related to your LW post about a year ago?

I think surely the following has to be wrong:

if you never experience a timeline in which you've permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.

because you can't get that kind of information about the future ("are going to be sufficient") just from the fact that you haven't died in the past.

As for the more central issue:

If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive.

this also seems terribly wrong to me, at least if the situation I'm supposed to imagine is that I bought a lottery ticket just for fun, or out of habit, or something like that. Because surely the possible worlds that get more likely according to your quantum-immortality argument are ones in which I bought a lottery ticket in the expectation of a disaster. Further, I don't see how winning makes this situation any more likely, at least until the disaster has actually occurred and been surmounted with the help of your winnings.

Imagine 10^12 equal-probability versions of you. 10^6 of them anticipate situations that desperately require wealth and buy lottery tickets. Another 10^9 versions of you buy lottery tickets just for fun. Then one of the 10^6, and 10^3 of the 10^9, win the lottery. OK, so now your odds (conditional on having just bought a lottery ticket) of being about to face wealth-requiring danger are only 10^3:1 instead of 10^6:1 as they were before -- but you need to conditionalize on all the relevant evidence. Let's suppose that you can predict those terrible dangers half the time when they occur; so there are another 10^6 of you facing that situation without knowing it; 10^3 of them bought lottery tickets, and 10^-3 of them won. So conditional on having just bought a lottery ticket for fun, your odds of being in danger are still 10^6:1 (10^9 out of danger, 10^3 in); conditional on having just bought a lottery ticket for fun and won, they're still 10^6:1 (10^3 out of danger, 10^-3 in).
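
A minimal sketch reproducing those expected counts (the population sizes and the implied 10^-6 win rate are the stipulated numbers from the paragraph above, not real-world figures):

```python
TOTAL = 10**12            # equal-probability versions of you
PREDICT_AND_BUY = 10**6   # foresee the danger and buy a ticket
FUN_BUYERS = 10**9        # buy a ticket just for fun
WIN_RATE = 1e-6           # implied by "10^3 of the 10^9 win"

# Danger is predicted only half the time, so an equal number face it unawares;
# the fraction of them who happen to be fun buyers matches the base rate.
unaware_in_danger = PREDICT_AND_BUY
fun_in_danger = unaware_in_danger * FUN_BUYERS / TOTAL   # 10^3
fun_safe = FUN_BUYERS - fun_in_danger                    # ~10^9

# Winning is independent of danger, so conditioning on a win cancels out.
odds_bought = fun_safe / fun_in_danger
odds_bought_and_won = (fun_safe * WIN_RATE) / (fun_in_danger * WIN_RATE)
print(odds_bought, odds_bought_and_won)  # both ~10^6 : 1 against danger
```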

Perhaps I'm missing something important; I've never found the idea of "quantum immortality" compelling, and I think the modes of thought that make it compelling involve wrongheadedness about probability and QM, but maybe I'm the one who's wrongheaded...

Replies from: DataPacRat
comment by DataPacRat · 2014-09-11T05:37:35.719Z · LW(p) · GW(p)

How (if at all) is this related to your LW post about a year ago?

Same general assumptions, taken in a somewhat different direction.

(I'm just browsing messages in the middle of the night, so will have to wait to respond to the rest of your post for some hours. In the meantime, the response to my question at https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ckex8ul seems worth reading.)

Replies from: gjm
comment by gjm · 2014-09-11T18:45:11.735Z · LW(p) · GW(p)

So, suppose I rig up a machine with the following behaviour. It "flips a coin" (actually, in case it matters, exploiting some source of quantum randomness so that heads and tails have more or less exactly equal quantum measure). If it comes up heads, it arranges that in ten years' time you will be very decisively killed.

If we take "Pr(L)=1" (in that comment's notation) seriously then it follows that Pr(tails)=1 too. But if there are 100 of you using these machines, then about 50 are going to see heads; and if you are confident of getting tails -- in fact, if your estimate of Pr(tails) is substantially bigger than 1/2 -- you're liable to get money-pumped.
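
To make the money-pump concrete, a small sketch with made-up stakes: someone whose Pr(tails) is near 1 will accept bets that lose steadily at the machine's true 50/50 measure.

```python
# Bet offered each round: win $1 on tails, lose $10 on heads.
# At Pr(tails) = 0.99 it looks profitable; at the true 0.5 it is a steady loss.
subjective_ev = 0.99 * 1 - 0.01 * 10   # +0.89 per round, by the victim's lights
true_ev = 0.50 * 1 - 0.50 * 10         # -4.50 per round in expectation
print(subjective_ev, true_ev)
```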

One possible conclusion: Pr(L)=1 is the wrong way to think about quantum immortality if you believe in it.

Another: the situation I described isn't really possible, because the machine can't make it certain that you will die in 10 years, and the correct conclusion is simply that if it comes up heads then the universe will find some way to keep you alive despite whatever it does.

But note that that objection applies just as well to the original scenario. Any disaster that you can survive with the help of an extra $10M, you can probably survive without the $10M but with a lot of luck. Or without the $10M from the lottery but with $10M that unexpectedly reaches you by other means.

Replies from: DataPacRat
comment by DataPacRat · 2014-09-11T19:47:32.599Z · LW(p) · GW(p)

Your last paragraph is leading me to consider an alternative scenario: There are two ways to survive the disaster, either pleasantly, by having enough money (via winning the lottery), or unpleasantly (such as by having to amputate most of your limbs to reduce your body mass to have enough delta-vee). I'm currently trying to use Venn-like overlapping categories to see if I can figure out any "If X then Y" conclusions. The basic parameters of the setting seem to rule out all but five combinations (using ! to mean 'not'):

  • WinLotto, !Disaster, !Amputee, Live: All Good
  • WinLotto, Disaster, !Amputee, Live: Buy survival
  • !WinLotto, !Disaster, !Amputee, Live: Nothing happens
  • !WinLotto, Disaster, Amputee, Live: Unpleasant survival
  • !WinLotto, Disaster, !Amputee, !Live: Dead.

At this very moment, I'm trying to figure out what happens if quantum immortality means the 'dead' line doesn't exist...

... But I'm as likely as not to miss some consequence of this. Anyone care to take a shot at how to set things up so that any Bayesian calculations on the matter have at least a shot at reflecting reality?
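
One way to set that up (a minimal sketch; the branch weights below are placeholders, not claims about real lotteries or disasters): give each branch a prior weight, delete the "Dead" branch to model quantum immortality as conditioning on survival, renormalise, and read off whatever conditional you care about.

```python
# Branches as (win_lotto, disaster, amputee, live) -> prior weight (made up).
branches = {
    ("Win",  "NoDisaster", "NoAmp", "Live"): 1e-7,   # all good
    ("Win",  "Disaster",   "NoAmp", "Live"): 1e-9,   # buy survival
    ("Lose", "NoDisaster", "NoAmp", "Live"): 0.98,   # nothing happens
    ("Lose", "Disaster",   "Amp",   "Live"): 0.001,  # unpleasant survival
    ("Lose", "Disaster",   "NoAmp", "Dead"): 0.019,  # dead
}

# Quantum immortality as a conditioning rule: you only ever find yourself
# in branches where you live, so drop "Dead" and renormalise.
alive = {k: w for k, w in branches.items() if k[3] == "Live"}
total = sum(alive.values())
posterior = {k: w / total for k, w in alive.items()}

# E.g. P(disaster | won the lottery, alive):
won = {k: w for k, w in posterior.items() if k[0] == "Win"}
p_disaster = sum(w for k, w in won.items() if k[1] == "Disaster") / sum(won.values())
print(p_disaster)
```

Whether that conditioning rule is the right model is exactly what the thread above disputes.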

comment by NancyLebovitz · 2014-09-10T22:06:04.836Z · LW(p) · GW(p)

I think you're leaving out that disasters which require a lot of money to survive are fairly rare and hard to predict.

Replies from: DataPacRat
comment by DataPacRat · 2014-09-10T22:24:37.286Z · LW(p) · GW(p)

The character has come uncomfortably close to dying several times in a relatively short period, having had to use one or another rare or unusual skill or piece of equipment just to survive each time. (In other words, she's a Protagonist.)

comment by Ronak · 2014-09-09T02:29:57.259Z · LW(p) · GW(p)

From http://www.preposterousuniverse.com/blog/2013/08/22/the-higgs-boson-vs-boltzmann-brains/

A room full of monkeys, hitting keys randomly on a typewriter, will eventually bang out a perfect copy of Hamlet. Assuming, of course, that their typing is perfectly random, and that it keeps up for a long time. An extremely long time indeed, much longer than the current age of the universe. So this is an amusing thought experiment, not a viable proposal for creating new works of literature (or old ones).

There’s an interesting feature of what these thought-experiment monkeys end up producing. Let’s say you find a monkey who has just typed Act I of Hamlet with perfect fidelity. You might think “aha, here’s when it happens,” and expect Act II to come next. But by the conditions of the experiment, the next thing the monkey types should be perfectly random (by which we mean, chosen from a uniform distribution among all allowed typographical characters), and therefore independent of what has come before. The chances that you will actually get Act II next, just because you got Act I, are extraordinarily tiny. For every one time that your monkeys type Hamlet correctly, they will type it incorrectly an enormous number of times — small errors, large errors, all of the words but in random order, the entire text backwards, some scenes but not others, all of the lines but with different characters assigned to them, and so forth. Given that any one passage matches the original text, it is still overwhelmingly likely that the passages before and after are random nonsense.

That’s the Boltzmann Brain problem in a nutshell. Replace your typing monkeys with a box of atoms at some temperature, and let the atoms randomly bump into each other for an indefinite period of time. Almost all the time they will be in a disordered, high-entropy, equilibrium state. Eventually, just by chance, they will take the form of a smiley face, or Michelangelo’s David, or absolutely any configuration that is compatible with what’s inside the box. If you wait long enough, and your box is sufficiently large, you will get a person, a planet, a galaxy, the whole universe as we now know it. But given that some of the atoms fall into a familiar-looking arrangement, we still expect the rest of the atoms to be completely random. Just because you find a copy of the Mona Lisa, in other words, doesn’t mean that it was actually painted by Leonardo or anyone else; with overwhelming probability it simply coalesced gradually out of random motions. Just because you see what looks like a photograph, there’s no reason to believe it was preceded by an actual event that the photo purports to represent. If the random motions of the atoms create a person with firm memories of the past, all of those memories are overwhelmingly likely to be false.

This thought experiment was originally relevant because Boltzmann himself (and before him Lucretius, Hume, etc.) suggested that our world might be exactly this: a big box of gas, evolving for all eternity, out of which our current low-entropy state emerged as a random fluctuation. As was pointed out by Eddington, Feynman, and others, this idea doesn’t work, for the reasons just stated; given any one bit of universe that you might want to make (a person, a solar system, a galaxy, an exact duplicate of your current self), the rest of the world should still be in a maximum-entropy state, and it clearly is not. This is called the “Boltzmann Brain problem,” because one way of thinking about it is that the vast majority of intelligent observers in the universe should be disembodied brains that have randomly fluctuated out of the surrounding chaos, rather than evolving conventionally from a low-entropy past. That’s not really the point, though; the real problem is that such a fluctuation scenario is cognitively unstable — you can’t simultaneously believe it’s true, and have good reason for believing it’s true, because it predicts that all the “reasons” you think are so good have just randomly fluctuated into your head!

So, before reading the last sentence quoted I had no issue with the idea that I turned up as a random fluctuation, but that last sentence gives me pause - and my brain refuses to cross it and give useful thoughts.

Anyone have any useful comments? Thanks.

Replies from: None
comment by [deleted] · 2014-09-09T13:30:47.140Z · LW(p) · GW(p)

The expansion of the universe blows up the Boltzmann Brain problem. The universe is not of uniform density over time, and far into the future things get thinner and thinner on average with more and more concentrated local knots of matter of changing atomic/etc composition.

It pushes the question to why we see the universe as it is rather than something smaller - in space rather than in time - which becomes a question about the properties of the event we call the big bang, which nobody really understands: was it a singular event or one of many such events, and what was its/their scale?

Replies from: Ronak
comment by Ronak · 2014-09-09T13:54:40.608Z · LW(p) · GW(p)

Thanks for your comment.

My issue is much 'earlier' in terms of logic. When I started reading that post, the Boltzmann brain problem seemed like a non-problem; an inevitable conclusion that people were unwilling to accept for reasons of personal bias - analogous to how most LWers would view someone who insists on metaphysical free will.

Even if certain facts about the universe didn't solve the issue, it seems to me that Carroll would still want to find reasons that we weren't Boltzmann brains. Now, from my own interest in entropy and heat death, I had long ago concluded that I might, right now, be part of a random fluctuation that is gone the next moment; in fact, I had concluded that every moment of my existence turns up somewhere during heat death. That's not an issue, as far as I can see - whatever fact we see about the universe, that would just be part of this fluctuation (I don't know about this acceleration thing - my technical understanding of the issue is not good enough, but I'm willing to take your and Carroll's words for it). At this level, 'we're part of a random fluctuation' is one of those uninteresting hypotheses like maya that could very well be true but are unverifiable. (Continued adherence to ordered laws can't really be considered evidence, since we may have just popped into existence a second ago with memories as we have. It truly can predict everything.)

But then, Carroll argues that believing you're a Boltzmann brain is inconsistent, since you can't trust your own brain which is a product of a random fluctuation. Of course, I don't believe I'm a Boltzmann brain, I just note that no experience (modulo expanding universe) contradicts the hypothesis and therefore I should reason without giving a shit about it. However, Carroll's argument gives me pause, and I can't really see whether I should consider it seriously.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-09-10T23:32:23.697Z · LW(p) · GW(p)

It's not necessarily an either/or situation. Maybe this universe started a few billion years ago in a Boltzmann-like event, but since then it evolves, uhm, just like we think it does.

The analogy of the monkeys with typewriters is misleading. The laws of physics are local: what happens next does depend on what happens now; that's unlike the monkey with the typewriter, where the following letter is completely independent of the previous part of the book. If some random process created a brain, in a body, in a room, then even if the room were immediately destroyed at the speed of light, still, during those few microseconds until the destruction reaches the brain, the brain would operate logically.

On the other hand, random processes creating the brain in the body in the room are much less likely than random processes creating only the brain, or only parts of the brain. So this requires some more thought, and I am too tired now to do it.

But my point is that if you are randomly created exactly in this moment, you don't have a reason to trust your reason... but if you were created a while ago, and your reason had some time to work, that's not the same situation. In the extreme situation, if the universe was created randomly billions of years ago and then we have evolved lawfully, that's business as usual: the details of random creation of the universe long ago should not be relevant for our reasoning about our reason now.

Replies from: Ronak
comment by Ronak · 2014-09-17T02:29:22.070Z · LW(p) · GW(p)

I think this is a good argument. Thanks.

After some thought on why your argument sounded unsatisfactory to me, I decided that I have a much more abstract, much less precise argument, to do with things like the beginning of epistemology.

In the logical beginning, I know nothing about the territory. However, I notice that I have 'experiences.' But I have no reason for believing that these experiences are 'real' in any useful sense. So, I decide to base my idea of truth on usefulness in helping me predict further experiences. 'The sun rises every morning,' in this view, is actually 'it will seem to me that every time there's this morning-thing, I'll see the sun rise.' All hypotheses (like maya and Boltzmann brains) that say these experiences are not 'real,' as long as I have no reason to doubt 'reality,' form part of the inscrutable noise in my probability assignments. Therefore, even if I was randomed into existence a second ago, it's still rational to carry on as usual and say 'I have no issue with being a Boltzmann brain - it's just part of my probability noise.'

I haven't fleshed out precisely the connection between this reasoning and not worrying about Carroll's argument - it seems as if I'm viewing myself as an implementation-independent process trying to reason about its implementation, and asking what reasoning holds up in that view.

comment by MaximumLiberty · 2014-09-08T15:54:55.225Z · LW(p) · GW(p)

This is a good read: http://www.newrepublic.com/article/119321/harvard-ivy-league-should-judge-students-standardized-tests

Excerpt:

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition. On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.

Max L.

Replies from: Lumifer, cameroncowan
comment by Lumifer · 2014-09-08T16:01:55.151Z · LW(p) · GW(p)

Looks like they agree that specialization is for insects :-)

Replies from: ChristianKl
comment by ChristianKl · 2014-09-08T23:17:09.686Z · LW(p) · GW(p)

"They"? The author is Steven Pinker.

Replies from: None
comment by [deleted] · 2014-09-10T07:07:26.173Z · LW(p) · GW(p)

"They" can be singular or plural.

Replies from: Ixiel
comment by Ixiel · 2014-09-10T23:50:26.975Z · LW(p) · GW(p)

It is correct in the latter case, incorrect in the former. It largely doesn't matter, but recruiters I know, for example, throw out resumes for this particular error (though one had heard that some schools actually encourage the practice, to the students' disservice), and some people (myself included, until I thought better of it) think less of authors who make it. Linguistics as a discipline is descriptive, but people who are not linguists treat people differently for making errors.

Replies from: None, ShardPhoenix
comment by [deleted] · 2014-09-13T04:02:47.253Z · LW(p) · GW(p)

It's a bit more complicated than correct or incorrect:

http://en.wikipedia.org/wiki/Singular_they

Replies from: Ixiel
comment by Ixiel · 2014-09-16T11:51:14.355Z · LW(p) · GW(p)

I agree with you as literally stated, and am not a Wikipedia naysayer, but that again is descriptive linguistics. People do say that. People also do say "y'all aints gots no Beefaronis?" (one of my favorite examples, heard with my own ears in a convenience store), and people do judge both differently from what is sometimes called "blackboard grammar." I would recommend John McWhorter as a linguist who describes this better than I can. Or just say to yourself "huh, interesting opinion" and walk away; I swear I won't be offended :-)

comment by ShardPhoenix · 2014-09-13T09:19:28.133Z · LW(p) · GW(p)

but recruiters I know, for example, throw out resumes for this particular error

That's nuts.

Replies from: Ixiel, Azathoth123
comment by Ixiel · 2014-09-16T11:41:30.965Z · LW(p) · GW(p)

I don't think so, but either way, if one wants a job at GE, to use a recognizable example, one might want to know.

comment by Azathoth123 · 2014-09-15T02:56:20.853Z · LW(p) · GW(p)

Why? It strikes me as a good way to sort out people who have bad attention to detail, as well as avoiding the SJW-types more interested in accusing everyone in the company of sexism than doing any actual work.

comment by cameroncowan · 2014-09-11T19:42:50.365Z · LW(p) · GW(p)

The idea of the well rounded human being strikes again! That is why we moved away from the structure of classical education and towards the free-form well rounded-ness of the liberal arts education. It allows for curiosity and testing out your own ability.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-13T02:12:32.000Z · LW(p) · GW(p)

You know, sarcasm doesn't work well on the internet.

Replies from: Lumifer
comment by Lumifer · 2014-09-13T03:45:56.161Z · LW(p) · GW(p)

sarcasm doesn't work well on the internet

Oh, but it does, it does :-D

comment by A1987dM (army1987) · 2014-09-08T13:39:13.199Z · LW(p) · GW(p)

Quite a few people will pay $10 in order to not know whether they have herpes.

Replies from: sixes_and_sevens, Ronak, kalium, fubarobfusco
comment by sixes_and_sevens · 2014-09-08T15:59:22.220Z · LW(p) · GW(p)

"Whether you have herpes" is not as clearly-defined a category as it sounds. The blood test will tell you which types of HSV antibodies you have. If you're asymptomatic, it won't tell you the site of the infection, if you're communicable, or if you will ever experience an outbreak.

I had an HSV test a while ago (all clear, thankfully), and my impression from speaking to the medical staff was that, given the prevalence and relative harmlessness of the disease (compared to, say, HIV or hepatitis), the doubt surrounding a positive test result was enough of a psychological hazard that they actively dissuade some people from taking it, and many sexual health clinics don't even offer it for this reason.

comment by Ronak · 2014-09-09T02:59:59.467Z · LW(p) · GW(p)

From Poor Economics by Esther Duflo and Abhijit Banerjee:

There is potentially another reason the poor may hold on to beliefs that might seem indefensible: When there is little else they can do, hope becomes essential. One of the Bengali doctors we spoke to explained the role he plays in the lives of the poor as follows: “The poor cannot really afford to get treated for anything major, because that involves expensive things like tests and hospitalization, which is why they come to me with their minor ailments, and I give them some little medicines which make them feel better.” In other words, it is important to keep doing something about your health, even if you know that you are not doing anything about the big problem. In fact, the poor are much less likely to go to the doctor for potentially life-threatening conditions like chest pains and blood in their urine than with fevers and diarrhea. The poor in Delhi spend as much on short-duration ailments as the rich, but the rich spend much more on chronic diseases.34 So it may well be that the reason chest pains are a natural candidate for being a bhopa disease (an older woman once explained to us the dual concepts of bhopa diseases and doctor diseases—bhopa diseases are caused by ghosts, she insisted, and need to be treated by traditional healers), as are strokes, is precisely that most people cannot afford to get them treated by doctors.

Replies from: chaosmage
comment by chaosmage · 2014-09-09T11:35:03.595Z · LW(p) · GW(p)

Thank you, that was very interesting.

It seems to me these people are paying in sanity what they can't pay in money - and the price they're paying is arguably higher than what the rich are paying, not even considering the physical health effects.

This might be one of the ways that being poor is expensive.

Replies from: Ronak, Ronak
comment by Ronak · 2014-09-09T13:56:32.234Z · LW(p) · GW(p)

Indeed, 'being poor is expensive' is related to how they frame this fact. From the end of the same chapter:

The poor seem to be trapped by the same kinds of problems that afflict the rest of us—lack of information, weak beliefs, and procrastination among them. It is true that we who are not poor are somewhat better educated and informed, but the difference is small because, in the end, we actually know very little, and almost surely less than we imagine. Our real advantage comes from the many things that we take as given. We live in houses where clean water gets piped in—we do not need to remember to add Chlorin to the water supply every morning. The sewage goes away on its own—we do not actually know how. We can (mostly) trust our doctors to do the best they can and can trust the public health system to figure out what we should and should not do. We have no choice but to get our children immunized—public schools will not take them if they aren’t—and even if we somehow manage to fail to do it, our children will probably be safe because everyone else is immunized. Our health insurers reward us for joining the gym, because they are concerned that we will not do it otherwise. And perhaps most important, most of us do not have to worry where our next meal will come from. In other words, we rarely need to draw upon our limited endowment of self-control and decisiveness, while the poor are constantly being required to do so.

We should recognize that no one is wise, patient, or knowledgeable enough to be fully responsible for making the right decisions for his or her own health. For the same reason that those who live in rich countries live a life surrounded by invisible nudges, the primary goal of health-care policy in poor countries should be to make it as easy as possible for the poor to obtain preventive care, while at the same time regulating the quality of treatment that people can get.

An obvious place to start, given the high sensitivity to prices, is delivering preventive services for free or even rewarding households for getting them, and making getting them the natural default option when possible. Free Chlorin dispensers should be put next to water sources; parents should be rewarded for immunizing their children; children should be given free deworming medicines and nutritional supplements at school; and there should be public investment in water and sanitation infrastructure, at least in densely populated areas.

As public health investments, many of these subsidies will more than pay for themselves in the value of reduced illness and death, and higher wages—children who are sick less often go to school more and earn more as adults. This does not mean that we can assume that these will automatically happen without intervention, however. Imperfect information about benefits and the strong emphasis people put on the immediate present limit how much effort and money people are willing to invest even in very inexpensive preventive strategies. And when they are not inexpensive, there is of course always the question of money.

As far as treatment is concerned, the challenge is twofold: making sure that people can afford the medicines they need (Ibu Emptat, for one, clearly could not afford the asthma medicine that her son needed), but also restricting access to medicines they don’t need as a way to prevent growing drug resistance. 
Because regulating who sets up a practice and decides to call himself a doctor seems to be beyond the control of most governments in developing countries, the only way to reduce the spread of antibiotic resistance and the overuse of high-potency drugs may be to put maximal effort into controlling the sale of these drugs. All this sounds paternalistic, and in a way, it certainly is. But then it is easy, too easy, to sermonize about the dangers of paternalism and the need to take responsibility for our own lives, from the comfort of our couch in our safe and sanitary home. Aren’t we, those who live in the rich world, the constant beneficiaries of a paternalism now so thoroughly embedded into the system that we hardly notice it? It not only ensures that we take care of ourselves better than we would if we had to be on top of every decision, but also, by freeing us from having to think about these issues, it gives us the mental space we need to focus on the rest of our lives. This does not absolve us of the responsibility of educating people about public health. We do owe everyone, the poor included, as clear an explanation as possible of why immunization is important and why they have to complete their course of antibiotics. But we should recognize—indeed assume—that information alone will not do the trick. This is just how things are, for the poor, as for us.

Replies from: cameroncowan
comment by cameroncowan · 2014-09-11T19:57:15.269Z · LW(p) · GW(p)

These are all nice ideas, but someone has to pay for them, and it won't be cheap. Second, I know of plenty of people who are living in terrible conditions right here in this country. When one is poor, everything is harder, because you have to do everything yourself and pay through the nose for services that the wealthy get for far less. Whether in Africa or the US, poverty has a cost.

comment by Ronak · 2014-09-09T13:58:25.875Z · LW(p) · GW(p)

I'm interested in your calling it 'paying in sanity.' Are you referring to the insanity of believing in Bengali babus, or the fact that they're preserving their own sanity in some way by not going to a real doctor for things they know they can't afford?

Replies from: chaosmage
comment by chaosmage · 2014-09-09T15:06:28.170Z · LW(p) · GW(p)

The former. I'm speculating this tendency to rely on hope for serious problems while relying on science for small ones creates compartmentalization, which impairs rationality and increases religiosity.

The correlation between poverty and religiosity is obvious, this is just a speculative direction of causation. Irrationality would probably lead to poverty, but if poverty also led to irrationality, the two causations would reinforce each other and explain the robustness of the correlation.

comment by kalium · 2014-09-08T16:25:15.850Z · LW(p) · GW(p)

Thanks to its multiple infection sites, herpes has the unusual property that two people, neither of whom has an STI, can have sex that leads to one of them having an STI. It's a spontaneous creation of stigma! And if you have an asymptomatic infection (very common), there's no way to know whether it's oral (non-stigmatized, not an STI) or genital (stigmatized, STI), since the major strains are only moderately selective about site.

comment by fubarobfusco · 2014-09-08T15:21:25.441Z · LW(p) · GW(p)

... and that's why you should prefer to sleep with rationalists. :)

Replies from: James_Miller
comment by James_Miller · 2014-09-08T15:31:34.057Z · LW(p) · GW(p)

But it might be rational not to find out, if you believed you would have a duty to warn potential lovers if you tested positive, or were willing to lie but believed yourself to be a bad actor (i.e., unable to lie convincingly).

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-08T15:54:45.668Z · LW(p) · GW(p)

How is it rational to willfully keep others in ignorance of a risk they have every right to know about? The discomfort of honest disclosure is a minor inconvenience when compared to the disease.

Replies from: James_Miller, None, sixes_and_sevens, ChristianKl
comment by James_Miller · 2014-09-08T16:02:40.538Z · LW(p) · GW(p)

You are right for the rationalist who gives substantial weight to the welfare of his or her lovers. But being rational doesn't necessarily imply that you care much about other people.

Replies from: army1987, polymathwannabe
comment by A1987dM (army1987) · 2014-09-08T18:49:33.048Z · LW(p) · GW(p)

A rationalist who doesn't care about the welfare of their lovers, and yet believes they have a duty to warn them if they tested positive (but no duty to get tested in the first place, even if the cost is nonpositive)?

comment by polymathwannabe · 2014-09-08T16:45:12.086Z · LW(p) · GW(p)

Are you advocating for prisoner defection?

Replies from: James_Miller
comment by James_Miller · 2014-09-08T17:07:11.290Z · LW(p) · GW(p)

In my game theory class I teach that rational people will defect in the prisoner's dilemma game, although I stress that you should try to change the game so it is no longer a prisoner's dilemma.
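For readers who want the standard result spelled out, here is a minimal sketch (payoff numbers are hypothetical; only the ordering T > R > P > S matters) of why defection is the dominant strategy in the one-shot game:

```python
# One-shot prisoner's dilemma with illustrative payoffs (T=5, R=3, P=1, S=0).
payoffs = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # R: mutual cooperation
    ("C", "D"): (0, 5),  # S, T: I cooperate, they defect
    ("D", "C"): (5, 0),  # T, S: I defect, they cooperate
    ("D", "D"): (1, 1),  # P: mutual defection
}

def best_response(their_move):
    """My payoff-maximizing move, holding the other player's move fixed."""
    return max("CD", key=lambda my_move: payoffs[(my_move, their_move)][0])

# Defection is dominant: it is the best response to either move, even
# though (C, C) would leave both players better off than (D, D).
assert best_response("C") == "D"
assert best_response("D") == "D"
```

"Changing the game" (repetition, enforceable contracts, side payments) amounts to changing this payoff table until defection is no longer dominant.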

Replies from: shminux, Toggle
comment by shminux · 2014-09-08T19:57:10.454Z · LW(p) · GW(p)

I hope you also talk about Parfit's hitchhiker, credible precommitment and morals (e.g. honor, honesty) as one of its aspects.

Replies from: James_Miller
comment by James_Miller · 2014-09-09T14:02:42.866Z · LW(p) · GW(p)

I spend a lot of time on credible threats and promises, but I don't do Parfit's hitchhiker, as it doesn't seem realistic.

comment by Toggle · 2014-09-09T17:46:59.896Z · LW(p) · GW(p)

Can this situation be modeled as a prisoner's dilemma in a useful way? There seem to be some important differences.

For example, if both 'prisoners' have the same strain of herpes, then the utility for mutual defection is positive for both participants. That is, they get the sex they were looking for, with no further herpes.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-09-10T00:01:59.689Z · LW(p) · GW(p)

Not prisoner's dilemma, but successful coordination to which a decrease in the spread of HIV in the gay community is attributed: serosorting.

comment by [deleted] · 2014-09-08T17:29:25.513Z · LW(p) · GW(p)

A classic example of confusing is with ought...

comment by sixes_and_sevens · 2014-09-08T16:11:53.021Z · LW(p) · GW(p)

The base rate of HSV2 in US adults is ~20%. I would argue that if you're sexually active, and don't get an HSV test between partners (which is typically not part of the standard barrage of STD tests), you're maintaining the same sort of plausible deniability strategy as those who pay to not see the results of their apropos-of-nothing tests.
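To put rough numbers on that, a back-of-the-envelope sketch: it assumes partners are independent draws at the quoted ~20% base rate, which real sexual networks are not.

```python
# Chance that at least one of n partners carries HSV-2, assuming each is an
# independent draw from a population with a 20% base rate.
base_rate = 0.20

for n in (1, 3, 5, 10):
    p_at_least_one = 1 - (1 - base_rate) ** n
    print(f"{n} partner(s): {p_at_least_one:.0%}")
# Prints roughly: 20%, 49%, 67%, 89%.
```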

comment by ChristianKl · 2014-09-08T21:01:38.741Z · LW(p) · GW(p)

If you do think you have an ethical obligation to inform others of a risk like this, when did you last test yourself for herpes?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-08T21:30:18.135Z · LW(p) · GW(p)

If you must know, I'm a virgin. I have, however, engaged in erotic practices not involving genital contact.

Replies from: ChristianKl
comment by ChristianKl · 2014-09-08T23:01:53.791Z · LW(p) · GW(p)

If that weren't the case, how often do you think you would test yourself?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-09T02:24:13.776Z · LW(p) · GW(p)

I guess a minimum should be before and after each new partner, plus additional tests if I suspect infidelity.

comment by Lumifer · 2014-09-12T15:55:01.303Z · LW(p) · GW(p)

Peter Thiel gave an AMA at Reddit, mentioned friendly AI and such (and even neoreaction :-D).

Replies from: ChristianKl, Nectanebo
comment by ChristianKl · 2014-09-13T19:31:28.926Z · LW(p) · GW(p)

His answer to "Peter, what's the worst investment you've ever made? What lessons did you learn from it?" is interesting. He focuses on not investing more in Facebook. The shift of focus says a lot about his mindset.

comment by Nectanebo · 2014-09-12T20:03:50.852Z · LW(p) · GW(p)

One of the better AMAs I've read.

Peter is an interesting guy. Is his book worth reading?

Replies from: Lumifer
comment by Lumifer · 2014-09-13T03:48:51.004Z · LW(p) · GW(p)

I read/scanned the predecessor of that book, the transcripts of his Stanford classes where he taught one course. They were quite interesting and worth reading.

comment by beoShaffer · 2014-09-09T01:27:06.551Z · LW(p) · GW(p)

Is there still a rewards credit card that autodonates to MIRI or CfAR? I've seen them mentioned, but can't find any sign up links that are still live.

Replies from: malo, None
comment by Malo (malo) · 2014-09-09T20:47:40.411Z · LW(p) · GW(p)

Unfortunately the program has been discontinued by Capital One :(

We have it in our queue to look into alternatives.

One thing you might want to look into is that many cards will allow you to donate your reward points etc. to charity. For many credit cards, this generates more value for the charity you choose to donate to.

comment by [deleted] · 2014-09-09T12:43:49.667Z · LW(p) · GW(p)

I think they stopped distributing them. The last I saw, they had that entry struck out on their support page.

comment by [deleted] · 2014-09-09T13:42:54.459Z · LW(p) · GW(p)

Has anyone ever worked for Varsity Tutors before? I'm looking at applying to them as an online tutor, but I don't know their track record from a tutor point of view. Has anyone had any experience with them?

Replies from: free_rip, James_Miller, None
comment by free_rip · 2014-09-10T21:55:29.186Z · LW(p) · GW(p)

Never worked for them in particular, but my experience with such online tutoring businesses hasn't been great: you generally don't get many hours, you're expected to commit fully to being available at certain times every week (which, when in uni, with tests etc. at unexpected times, isn't too possible - it might be possible for you in your situation), and they take a fair chunk of your earnings. On one occasion I put a lot of time into signing up, getting documents etc. to verify myself, and then never got a single student.

On the other hand, signing up for services such as www.firsttutors.com has been great (not sure if this is international - I've been using the NZ site, but I think it is). Basically it's a repository of tutors: people come and leave messages for you, to see if you'd be a good fit and have times you could both make, and then you each pay a small one-off fee (usually <$20 for the tutor) for the website providing the interface, and get each other's contact details. I've set up both online and in-person tutoring through this, online being about a fifth of all requests.

The first year I used it I got about 3 or 4 students through it (each of whom I met for one or two hours a week, and who lasted on average ~6 months). Nowadays, with a few good reviews on there, I've put my fees up to double what they used to be and still get about 15 requests a year, each of which is good for about 2 hours of tutoring a week - I don't take them all, but I could. And the fee the website charges is nothing in comparison to the hours I get out of it; usually it's less than an hour's work to make it back.

Replies from: None
comment by [deleted] · 2014-09-20T16:53:04.947Z · LW(p) · GW(p)

Thank you for the link. I had not heard of First Tutors before, but they seem to be a solid choice and one I'll research more. The flexibility is a very enticing quality, considering the high level of control I've seen in other service providers.

comment by James_Miller · 2014-09-09T14:13:36.413Z · LW(p) · GW(p)

Tutoring seems like a great way for lots of LW people to earn extra money. Apparently at least one high end tutor earns $1000 an hour.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-09T22:16:09.132Z · LW(p) · GW(p)

Interesting article, but that tutor is in a fairly small niche-- test prep tutoring for the children of very rich parents.

It's notable that, when he tells the reporter how to solve a math problem, he starts by teaching the reporter how to lower his panic level.

comment by [deleted] · 2014-09-10T21:12:25.254Z · LW(p) · GW(p)

I haven't worked with that specific company before, but there are a lot of mixed incentives in the tutor market.

If we can believe Glassdoor, they offer around $20/hour. (I suspect the two reported at $30/hour are either grad students or have some other specialization.) Here are some employee reviews, but I expect two of the five-star reviews are faked.

Judging by the reviews and my own experience in the past, I think you can expect to get around five hours a week of tutoring this way. That doesn't include time spent preparing for topics and other overhead. I imagine the only way the company maintains quality control is by cutting tutors' hours after one or two bad reviews, or after a single refund request. Office politics wrt the director-tutor relationship are probably going to be brutal, and there's not going to be any reward or incentive for doing an above-average job. It seems reasonable to assume turnover is high.

Since you're looking for online tutoring, I assume it's not possible to tutor in person in your local area?

Replies from: None
comment by [deleted] · 2014-09-20T16:51:04.162Z · LW(p) · GW(p)

That is correct. While I am looking into possibly offering Common Core math tutoring in the local area (there is an intense dislike of Common Core among parents here and I feel a tutoring service specifically for it may relieve some burdens and be worth the expense to these families), for the moment I am looking entirely online.

Thank you for the links. The information really does not surprise me. I have an expectation of $15/hour unless I prove to be extremely effective as a tutor. The overhead is the central unknown to me as it's something I won't have clear numbers on until I actively deal with such an issue. Using friends of mine who currently tutor (one through Varsity) as examples, I'm not too worried about the overhead.

comment by NancyLebovitz · 2014-09-14T20:35:52.030Z · LW(p) · GW(p)

Research about online communities with upvotes and downvotes

We find that negative feedback leads to significant changes in the author’s behavior, which are much more salient than the effects of positive feedback. These effects are detrimental to the community: authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality. Moreover, these punished authors are more likely to later evaluate their fellow users negatively, percolating these undesired effects through the community.

I don't think things are quite that bad here.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-09-18T20:07:36.819Z · LW(p) · GW(p)

Didn't read the PDF, but I suspect the main problem is that not all "communities with downvotes" are the same. Some websites have downvotes where the downvoted comments are just as visible as the upvoted ones; so there is no punishment.

The causality could also go the other way round: crazy people have more time to post comments. Or members are more tolerant of new crazy people and only become annoyed by them later (which causes the decrease in comment karma), while the crazy people gradually write more and more comments because they become more comfortable or have more open battles to fight.

comment by Lumifer · 2014-09-12T19:20:04.791Z · LW(p) · GW(p)

Y Combinator published a list of requests for startups.

Replies from: Azathoth123, drethelin
comment by Azathoth123 · 2014-09-13T01:54:09.548Z · LW(p) · GW(p)

The list makes for interesting ideas. Most of them seem good, but a few make me wonder about Paul Graham. Some of the ideas (e.g., Government) make me wonder if he's starting to drink his own Kool-Aid, and it has caused him to forget everything he has learned along the way. With others (e.g., Diversity), one almost gets the impression that the SJ crowd is putting the screws on Silicon Valley and he has to at least throw them some bone (the since-deleted "Female Founders" essay reads similarly).

Replies from: Pfft, Lumifer, Izeinwinter
comment by Pfft · 2014-09-13T15:28:56.058Z · LW(p) · GW(p)

make me wonder about Paul Graham

I think this list is due to Sam Altman. He has written about wanting to fund breakthrough technologies, and shortly after he became Y Combinator president they invested in a fusion energy company.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-13T18:06:29.159Z · LW(p) · GW(p)

Well, that would explain why the list ignores Paul Graham's advice of investing in fields one understands.

comment by Lumifer · 2014-09-13T03:53:49.165Z · LW(p) · GW(p)

I am not terribly impressed by that list as it looks like a collection of wouldn't-it-be-nice-to-have wishes.

The Government section looks fine -- the government is a big customer and does have very bad software. But yeah, the Diversity section is... weird. At least there is no Save the Environment section.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-13T18:04:46.635Z · LW(p) · GW(p)

The Government section looks fine

It suggests someone at Y Combinator now alieves he has magical superpowers for cutting through government procurement bureaucracy.

Replies from: Lumifer, None
comment by Lumifer · 2014-09-16T14:56:22.326Z · LW(p) · GW(p)

Not quite. This is a list of requests -- the Y Combinator would like to find ways to achieve magical superpowers to cut through the government procurement bureaucracy.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-17T04:17:45.852Z · LW(p) · GW(p)

Then why did the section talk about how inefficient government software was rather than cutting through procurement bureaucracy?

Replies from: Lumifer
comment by Lumifer · 2014-09-17T14:42:17.842Z · LW(p) · GW(p)

Because you need to have what's called a "market opportunity" to start with.

comment by [deleted] · 2014-09-14T19:01:30.935Z · LW(p) · GW(p)

A while back I worked at a startup (20-ish people) that had (UK) local government as their main customer.

Large companies don't have a monopoly on providing bad software to governments, even if they have advantages.

comment by Izeinwinter · 2014-09-13T10:13:18.540Z · LW(p) · GW(p)

It seems very credible that "Write better software for the US government" is a field that is shockingly underexploited, simply because of the ideological biases and likely background of the typical American start-up entrepreneur. Do you have the faintest idea what software to make social services more efficient ought to look like? Because I don't, and I figure very few people looking to start a coding shop do either. The only idea in this field I can think of with any chance of working is to try to run arbitrage against "not invented here" and check what tools are in use in the rest of the first world.

Replies from: Lumifer
comment by Lumifer · 2014-09-13T14:36:03.623Z · LW(p) · GW(p)

It seems very credible that "Write better software for the US government" is a field that is shockingly underexploited, simply because of the ideological biases and likely background of the typical American start-up entrepreneur.

It seems very credible that this field is "underexploited" for two main reasons.

One is that business dealings with the US government are very stupid, inconvenient, and annoying. You drown in paperwork, you have to certify all kinds of silly things, etc. It's OK for a large organization with a compliance department; it's not so good for a startup.

Two is that government contracts are a prime field for crony capitalism. You will be competing not only on price and quality but also on the depth of the old-boy network and the ability to provide invisible kickbacks -- again, not a strength of startups.

Replies from: Izeinwinter
comment by Izeinwinter · 2014-09-13T20:06:29.783Z · LW(p) · GW(p)

Are you speaking from personal experience of selling things and services to the government here? Because if the answer to that is "no," you may, possibly, want to check whether you remembered to remove those ideological blinders I mentioned. The main point of the paperwork that vendors to the state have to do is to make sure that crony capitalism doesn't happen. If the process is very badly designed, that fails, but I've never worked anywhere that found it more obnoxious to do business with the government than with any other large customer. Usually it is less so. The USG can't be that much worse than the Nordic countries; it's still a first-world state.

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-14T20:39:33.051Z · LW(p) · GW(p)

I can speak from personal experience. An executive at the contractor I work for was caught with a massive undisclosed conflict of interest.

This had two main effects. First, we must now sit through an annual mandatory ethics briefing, in addition to all the briefings and paperwork inflicted on us from previous misbehavior. (Note that the things talked about during said briefing generally have nothing to do with what the executive was caught doing.) Second, the executive was summarily fired and managed to fall upward into a high-level job with the agency we contract with.

comment by drethelin · 2014-09-13T19:26:36.631Z · LW(p) · GW(p)

These are all crazy vague.

comment by Cube · 2014-09-08T20:43:58.517Z · LW(p) · GW(p)

A friend of mine has started going into REM in frequent 5-minute cycles during the day, in order to boost his learning potential. He developed this via multiple acid trips. Is that safe? It seems like there should be some disadvantage to this system, but so far he seems fine.

Replies from: ChristianKl, skeptical_lurker, Douglas_Knight
comment by ChristianKl · 2014-09-08T23:24:11.243Z · LW(p) · GW(p)

How does he know that he actually is in REM? How does he know it boosts his learning potential?

comment by skeptical_lurker · 2014-09-09T15:40:04.202Z · LW(p) · GW(p)

How does LSD help you develop an ability to get to sleep faster? LSD makes one less sleepy, so this seems like an improbable ability to ascribe to it. But if it actually works, it's a really useful ability.

You might want to try asking this question to a polyphasic sleeping community BTW.

comment by Douglas_Knight · 2014-09-08T22:20:58.345Z · LW(p) · GW(p)

He developed this

What is "this"? this ability?

Does he also get a full night's sleep? Eliminating other stages of sleep is almost certainly bad, but supplementing with REM seems to me unlikely to be bad.

People with narcolepsy basically only have REM sleep. Narcolepsy is very bad, but many people who eventually develop it seem to have had only REM sleep while they were still functional, with no ill effects. In particular, they benefit greatly from naps (both before and after developing full-blown narcolepsy).

comment by [deleted] · 2014-09-08T12:59:04.291Z · LW(p) · GW(p)

Cryonics vs. Investment:

This is a question I have already made a decision on but would like some outside opinions for while it's still fresh. My beliefs have recently changed from "cryonics is not worth the investment" to "cryonics seems to be worth the investment but greater certainty for a decision is still wanting" (CStbWtIbGCoaDiSW for short). I've explored my options with Rudi Hoffman and found that while my primary choice of provider, Alcor, is out of my current range, my options are not unobtainable. CI with the bare basics, lowest pay option is within my budget, and Alcor is likely to be in my budget within a few years if my career plans continue working as they are.

There's the context, here's the question: which seems more effective, applying now with a cryonics provider under conditions I consider less than ideal (for me, CI using a term life policy rather than a whole life policy with Alcor, which is what I want) or saving for a short time (some odd months) so I can open up my mutual funds portfolio?

Why these are at odds: because of my income, starting up even a low-pay cryonics plan now would likely set back my ability to invest until my next job. The longer I wait to invest, the less effective the investments will be. If all cryonics plans were equal, this would still be a fairly easy decision, but as my beliefs stand, CI is an option I currently do not favor, and term life is a policy I definitely do not favor. Why? Because there is a very real probability that, once the policy expires, renewing or changing will incur very large costs should my health condition change (probable enough to be a concern). So whole life or universal life with Alcor is, at the moment, what I favor.

So, my question again: invest in a cryonics option I do not want now, or more quickly develop my portfolio, improving my finances and allowing for better options in the near future? You can probably guess I have chosen the latter option, putting my efforts into securing an investment. If no path can take me to the cryonics option I want now, then the best path is to minimize the distance between me and what I consider to be the best path. But I am not the only one who has made decisions like this, so any second thoughts or considerations would be welcome.

Replies from: James_Miller, Larks
comment by James_Miller · 2014-09-08T15:29:20.407Z · LW(p) · GW(p)

Consider other possible tradeoffs, such as engaging in fewer leisure activities so you can take a part-time job that will pay for cryonics, or saving money by reducing consumption.

Replies from: None
comment by [deleted] · 2014-09-08T15:53:09.335Z · LW(p) · GW(p)

These are worthwhile tips and ones I've explored. I've reduced consumption down to bare minimums already. Most of my time out of work is spent in activities for work as my position requires time spent with the community and networking, but I still look for opportunities on the side. Still, these are useful and assist with either option. Thanks.

comment by Larks · 2014-09-09T23:16:27.427Z · LW(p) · GW(p)

Have you considered term life insurance vs. whole-of-life insurance? Salesmen will try to push you towards the latter, but the former can have much lower premiums (especially if your time horizon is < 40 years).

Replies from: None, ciphergoth
comment by [deleted] · 2014-09-20T16:47:41.645Z · LW(p) · GW(p)

I have considered it. The <40-year horizon is especially relevant because, while my condition is not currently life-threatening (or much to note at all), I'm still young and active in controlling it. As I get older, it may be harder for my body to avoid the adverse effects, and I could be dead by 60.

My biggest concern with term is going with term life now, only to be in much worse condition when the term expires, causing a renewal to raise my payments heavily. Since it's a point brought up by Larks, I'll say here that I have no expectation of self-financing, and I don't know how much worse my condition will be when it comes time to renew.

comment by Paul Crowley (ciphergoth) · 2014-09-15T19:50:01.100Z · LW(p) · GW(p)

Surely for cryonics you want whole-of-life?

Replies from: Larks
comment by Larks · 2014-09-17T02:27:50.448Z · LW(p) · GW(p)

There are various reasons you would not want this:

  • You intend to save a lot of money and self-finance when able
  • You think you might change your mind
  • You think you will die in the next 40 years
  • You think you will be unusually healthy, and thus renewing will be cheaper
  • You have a higher discount rate than the market, and value paying $10/month rather than $60/month a great deal (see the toy comparison below).
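To make the discount-rate point concrete, here is a toy present-value sketch. All numbers are hypothetical: $10/month term for 40 years vs. $60/month whole-of-life paid for, say, 60 years.

```python
# Present value of a level monthly premium paid from start_year to end_year,
# discounted at a personal annual rate (compounded monthly).
def pv_of_monthly(premium, annual_rate, start_year, end_year):
    r = annual_rate / 12
    return sum(premium / (1 + r) ** m
               for m in range(start_year * 12, end_year * 12))

for rate in (0.03, 0.07, 0.15):
    term = pv_of_monthly(10, rate, 0, 40)    # hypothetical term policy
    whole = pv_of_monthly(60, rate, 0, 60)   # hypothetical whole-of-life policy
    print(f"discount rate {rate:.0%}: term ~${term:,.0f}, whole ~${whole:,.0f}")
# The higher your personal discount rate, the less weight the distant
# whole-of-life premiums carry, but the gap in total cost remains large.
```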
comment by JoshuaFox · 2014-09-11T12:42:00.368Z · LW(p) · GW(p)

Could someone recommend an article (at an advanced pop-sci level) providing the best arguments against the multiverse approach to quantum mechanics?

What is the best textbook that explains quantum mechanics from a multiverse perspective (rather than following the Copenhagen school and then bringing in the multiverse as an alternative)? This should be a textbook, not pop-sci, but at as basic a level as possible.

Replies from: pragmatist
comment by pragmatist · 2014-09-12T14:41:24.791Z · LW(p) · GW(p)

David Wallace's The Emergent Multiverse is an excellent introduction to the many-worlds interpretation, written by its best defender. Most of it should be accessible to a layperson, although there are technical sections. You can't use it to fully learn quantum mechanics from scratch, though. But if you learn the basic formalism from another textbook (I recommend this one; the first eight chapters should suffice) you'll be able to follow almost all of Wallace.

As for criticism, this is the best non-technical article I know of. It does presume some knowledge of quantum mechanics and many-worlds, but not deep technical knowledge.

comment by Adam Zerner (adamzerner) · 2014-09-14T01:53:55.977Z · LW(p) · GW(p)

How useful would it be to have more people working on AI/FAI? Would it be a big help to have another 1,000 researchers working on it making $200,000 a year? Or does an incredibly disproportionate amount of the contribution come from big names like Eliezer?

comment by Adam Zerner (adamzerner) · 2014-09-13T19:50:19.578Z · LW(p) · GW(p)

What do we want out of AI? Is it happiness? If so, then why not just research wireheading itself and not encounter the risks of an unfriendly AI?

Replies from: hairyfigment, None
comment by hairyfigment · 2014-09-13T20:52:46.483Z · LW(p) · GW(p)

We don't know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us to its own level of intelligence before turning itself off. But quite possibly we don't want that at all. I recall some commenter here seemed to want a long-term ruler AI.

Replies from: Leonhart
comment by Leonhart · 2014-09-15T20:00:11.042Z · LW(p) · GW(p)

I am generally in favour of a long-term ruler AI; though I don't think I'm the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.

comment by [deleted] · 2014-09-14T00:00:47.635Z · LW(p) · GW(p)

The promise of AI is irresistibly seductive because an FAI would make everything easier, including wireheading and survival.

comment by [deleted] · 2014-09-09T18:12:49.219Z · LW(p) · GW(p)

If I understand correctly, people become utilitarians because they think that global suffering/well-being matters so much that all other values don't really matter (this is what I see every time someone tries to argue for utilitarianism; please correct me if I'm wrong). I think a lot of people don't share this view, and therefore, before trying to convince them that they should choose utilitarianism as their morality, you first need to convince them of the overriding value of pleasure and harm.

Replies from: mare-of-night
comment by mare-of-night · 2014-09-10T05:41:13.931Z · LW(p) · GW(p)

I think it depends? People around here use utilitarianism to mean a few different things. I imagine that's the version talked about the most because the people involved in EA tend to be those types (since it's easier to get extra value via hacking if your most important values are something very specific and somewhat measurable). I think that might also be the usual philosopher's definition. But then Eliezer (in the metaethics sequence) used "utilitarianism" to mean a general approach to ethics where you add up all the values involved and pick the best outcome, regardless of what your values are and how you weight them. So it's sometimes a little confusing to know what utilitarianism means around here.

(Edited for spelling.)

Replies from: Douglas_Knight, Viliam_Bur
comment by Douglas_Knight · 2014-09-12T06:33:32.571Z · LW(p) · GW(p)

I do not believe Eliezer makes that mistake.

Replies from: mare-of-night
comment by mare-of-night · 2014-09-12T16:54:04.543Z · LW(p) · GW(p)

I might have misremembered. Sorry about that.

comment by Viliam_Bur · 2014-09-12T08:18:03.516Z · LW(p) · GW(p)

People around here use utilitarianism to mean a few different things.

I don't understand. One of those things is "compare the options, and choose the one with the best consequences". What are the other things?

Replies from: Lumifer, pragmatist, mare-of-night
comment by Lumifer · 2014-09-12T15:04:19.700Z · LW(p) · GW(p)

One of those things is "compare the options, and choose the one with the best consequences".

You are illustrating the issue :-) That is consequentialism, not utilitarianism.

comment by pragmatist · 2014-09-12T18:47:42.224Z · LW(p) · GW(p)

Differences arise when you try to flesh out what "best consequences" means. A lot of people on this site seem to think utilitarianism interprets "best consequences" as "best consequences according to your own utility function". This is actually not what ethicists mean when they talk about utilitarianism. They might mean something like "best consequences according to some aggregation of the utility functions of all agents" (where there is disagreement about what the right aggregation mechanism is or what counts as an agent). Or they might interpret "best consequences" as "consequences that maximize the aggregate pleasure experienced by agents" (usually treating suffering as negative pleasure). Other interpretations also exist.
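To illustrate how much the choice of aggregation mechanism matters, here is a toy sketch with invented utilities (maximin is included as a non-utilitarian contrast):

```python
# Different aggregation rules can pick different "best consequences".
outcomes = {
    "A": [10, 0, 0],  # great for one agent, nothing for the others
    "B": [3, 3, 3],   # modest for everyone
}

by_total = max(outcomes, key=lambda o: sum(outcomes[o]))
by_average = max(outcomes, key=lambda o: sum(outcomes[o]) / len(outcomes[o]))
by_maximin = max(outcomes, key=lambda o: min(outcomes[o]))  # Rawlsian contrast

print(by_total, by_average, by_maximin)  # A A B: the rules disagree
```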

Replies from: Nornagest
comment by Nornagest · 2014-09-12T19:35:21.846Z · LW(p) · GW(p)

As far as I've read, preference utilitarianism and its variants are about the only well-known systems of utilitarianism in philosophy that try to aggregate the utility functions of agents. Trying to come up with a universally applicable utility function seems to be more common; that's what gets you hedonistic utilitarianism, prioritarianism, negative utilitarianism, and so forth. Other variants, like rule or motive utilitarianism, might take one of the above as a basis but be more concerned with implementation difficulties.

I agree that the term tends to be used too broadly around here -- probably because the term sounds like it points to something along the lines of "an ethic based on evaluating a utility function against options", which is actually closer to a working definition of consequentialism. It's not a word that's especially well defined, though, even in philosophy.

comment by mare-of-night · 2014-09-12T16:52:44.905Z · LW(p) · GW(p)

"Compare the options, and choose the one that results in the greatest (pleasure - suffering)."

comment by Lumifer · 2014-09-08T17:28:03.246Z · LW(p) · GW(p)

What's supposed to happen if an expanding FAI friendly to civilization X collides with an expanding FAI friendly to civilization Y?

Replies from: bogus, ChristianKl, Nectanebo, None, Izeinwinter
comment by bogus · 2014-09-10T19:58:40.921Z · LW(p) · GW(p)

If both FAIs use TDT or a comparable decision theory, then (under plausible assumptions), they will both maximize an aggregate of both civilizations' welfare.
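One hedged way to cash out "maximize an aggregate of both civilizations' welfare" is the Nash bargaining solution; whether TDT-like agents would actually converge on this particular rule is an assumption, not an established result. A toy sketch with invented numbers:

```python
# Nash bargaining: pick the feasible outcome that maximizes the product of
# each side's gain over its fallback (here, the "war" outcome).
outcomes = {  # outcome -> (utility for X's civilization, utility for Y's)
    "war": (1, 1),
    "X dominates": (9, 2),
    "Y dominates": (2, 9),
    "compromise": (6, 6),
}
fallback = outcomes["war"]

def nash_product(u):
    return max(u[0] - fallback[0], 0) * max(u[1] - fallback[1], 0)

best = max(outcomes, key=lambda o: nash_product(outcomes[o]))
print(best)  # "compromise": (6-1)*(6-1) = 25 beats (9-1)*(2-1) = 8
```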

Replies from: Lumifer
comment by Lumifer · 2014-09-10T20:23:48.925Z · LW(p) · GW(p)

Each FAI is friendly to its creators, not necessarily to the rest of the universe. Why would a FAI be interested in the welfare of aliens?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-10T22:01:19.510Z · LW(p) · GW(p)

You might need a coalition against less tractable aliens, and you also might need a coalition to deal with something the non-living universe is going to throw at you.

If your creators include an interest in novelty in their CEV, then aliens are going to provide more variety than what your creators can make up on their own.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T16:23:36.518Z · LW(p) · GW(p)

If your creators include an interest in novelty in their CEV, then aliens are going to provide more variety than what your creators can make up on their own.

Heh. The situation is symmetric, so humanity is also a novelty for the aliens. And how much value does novelty have? Is it similar to having some exotic pets? X-D

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-09-11T16:39:37.610Z · LW(p) · GW(p)

I meant novelty in a broad sense-- not just like having an exotic pet. I'd expect different sensoria leading to somewhat different angles on the universe, and better understanding of biology and material science, at least.

comment by ChristianKl · 2014-09-08T23:19:27.593Z · LW(p) · GW(p)

It's not clear that territory that already has a FAI watching over it can be overtaken by another FAI. A FAI might expand to inhabit territory by sending small probes. I think those probes are unlikely to have any effect in territory already occupied by another FAI.

I'm also not sure to what extent you can call nodes of an FAI of the same origin that have millions of light years between them the same FAI.

Replies from: Lumifer
comment by Lumifer · 2014-09-09T18:43:22.016Z · LW(p) · GW(p)

I'm also not sure to what extent you can call nodes of an FAI of the same origin that have millions of light years between them the same FAI.

That's a valid point. An AI can rapidly expand across interstellar distances only by replicating and sending out clones. Assuming the speed of light limit, the clones would be essentially isolated from each other and likely to develop independently. So while we talk about "AI expanding through the light cone", it's actually a large set of diverging clones that's expanding. It's an interesting question how far could they diverge from one another.

comment by Nectanebo · 2014-09-08T20:20:41.232Z · LW(p) · GW(p)

If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be 'stronger' than the other, and that there will be a winner-take-all(-of-the-universe?) resolution?

If there is some compatibility, perhaps a merge, a la Three Worlds Collide?

Or maybe they co-operate, try not to interfere with each other? This would be more unlikely if they are in competition for something or other (matter?), but more likely if they have difficulties assessing risks to not co-operating, or if there is mutually assured destruction?

It's a fun question, but I mean, Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we're talking about hypothetical intelligences of this caliber, and I think he had a pretty good point on that. This question is taking a few extra steps beyond that, even.

Replies from: Lumifer
comment by Lumifer · 2014-09-09T18:37:15.571Z · LW(p) · GW(p)

This question is taking a few extra steps beyond that, even.

Oh, sure, it's much more of a flight-of-fantasy question than a realistic one. An invitation to consider the tactical benefits of bombarding galaxies with black holes accelerated to a high fraction of c, maybe X-D

But the original impetus was the curiosity about the status of intelligent aliens for a FAI mathematically proven to be friendly to humans.

comment by [deleted] · 2014-09-08T17:55:09.597Z · LW(p) · GW(p)

Neither defects?

Replies from: Lumifer
comment by Lumifer · 2014-09-08T18:13:01.780Z · LW(p) · GW(p)

Why do you think it's going to be a prisoner's dilemma type of situation?

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-08T19:13:26.212Z · LW(p) · GW(p)

In the intersection of their future light cones, each FAI can either try to accommodate the other (C) or try to get its own way (D). If one plays C and one plays D, the latter's values are enforced in the intersection of light cones; if both play C, they'll enforce some kind of compromise values; if they both play D, they will fight. So the payoff matrix is either PD-like or Chicken-like depending on how bloody the fight would be and how bad their values are by each other's standards.
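For concreteness, a toy sketch of that distinction (payoffs hypothetical; only the ordering matters):

```python
# Classify the 2x2 game from one FAI's point of view.
# T: the other accommodates while I push; R: mutual compromise;
# S: I accommodate while the other pushes; P: both fight.
def classify(T, R, S, P):
    assert T > R > S, "getting my way > compromise > conceding"
    if P < S:
        return "Chicken-like: fighting is the worst outcome"
    return "PD-like: conceding alone is the worst outcome"

print(classify(T=5, R=3, S=1, P=0))  # a very bloody fight -> Chicken-like
print(classify(T=5, R=3, S=0, P=1))  # bad values, mild fight -> PD-like
```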

Or am I missing something?

Replies from: Lumifer, Eugene
comment by Lumifer · 2014-09-08T19:33:11.348Z · LW(p) · GW(p)

The contact between the FAIs is not a single fight-or-share decision. It's a process that will take some time, and each party will have to make many decisions during that process. Besides, the payoff matrix is quite uncertain: if one initially cooperates and one initially defects, does the defecting one get more? No one knows. For example, the start of hostilities between Hitler and Stalin was a case where Stalin (initially) cooperated and Hitler (initially) defected. The end result was not so good for Hitler.

There are many options here -- fully cooperate (and potentially merge), fight till death, divide spheres of influence, set up a DMZ with shared control, modify self, etc.

The first interesting question is, I guess, how friendly to aliens will a FAI be? Will it perceive another alien FAI as an intolerable obstacle in its way to implement friendliness as it understands it?

More questions go along the lines of how likely it is that one FAI will be stronger (or smarter) than the other one. If they fight, what might it look like (assume interstellar distances and speed of light limits). How might an AI modify itself on meeting another AI, etc. etc.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-09-10T23:47:44.882Z · LW(p) · GW(p)

how friendly to aliens will a FAI be?

As much as is reasonable in a given situation. If it is stronger, and if conquering the other AI is a net gain, it will fight. If it is not stronger, or if peace would be more efficient than war, it will try to negotiate.

The costs of peace will depend on the differences between those two AIs. "Let's both self-modify to become compatible" is one way to make peace, forever. It has some cost, but it also saves some cost. Agreeing to split the universe into two parts, each governed by one AI, also has some cost. Depending on specific numbers, the utility maximizing choice could be "winner takes all" or "let's split the universe" or "let's merge into one" or maybe something else I didn't think about.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T16:25:29.618Z · LW(p) · GW(p)

the utility maximizing choice

The critical question is, whose utility?

The Aumann theorem will not help here since the FAIs will start with different values and different priors.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-09-12T08:04:22.792Z · LW(p) · GW(p)

Each AI tries to maximize their own utility, of course. When they consider merging, they make an estimate: how much of the original utility can I expect to get after we both self-modify to maximize the new utility function.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T14:33:48.521Z · LW(p) · GW(p)

Each AI tries to maximize their own utility, of course.

Then each AI makes its own choice and the two choices might well turn out to be incompatible.

There is also the issue of information exchange -- basically, it will be hard for the two AIs to trust each other.

comment by Eugene · 2014-09-08T22:32:01.015Z · LW(p) · GW(p)

Or am I missing something?

Absolute strength, for one; absolute intelligence, for another. If one AI has superior intelligence and compromises with one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when in fact it compromised. Alternatively, two equally intelligent AIs might present themselves to each other as though they were of equal strength, but one could easily be hiding a larger military force whose presence it doesn't want to affect the interaction (if it plans to compromise and is curious to know whether the other one will as well).

Both of those scenarios result in C out-competing D.

comment by Izeinwinter · 2014-09-15T19:35:56.357Z · LW(p) · GW(p)

... Since I am, of course, a FAI (Sarcasm!) I can tell you the answer to this. They obviously split the future time-streams of the universe by each committing instant civilization-wide suicide or not, based on a quantum lottery. Anthropic engineering in this way ensures they do not have to fight each other at all, which would entail actual risk of people getting hurt.

No, seriously, you want us to take guesses at how weakly godlike entities are going to interact? Pftrttfffff, mwhahahahahaahaaa.

Replies from: Lumifer
comment by Lumifer · 2014-09-16T14:59:40.705Z · LW(p) · GW(p)

you want us to take guesses at how weakly godlike entities are going to interact?

Sure. I find such speculations fun. YMMV, of course.

comment by Warrigal3 · 2014-09-13T14:13:10.393Z · LW(p) · GW(p)

So, I read textbooks "wrong".

The "standard" way of reading a textbook (a math textbook or something) is, at least I imagine, to read it in order. When you get to exercises, do them until you don't think you'd get any value out of the remaining exercises. If you come across something that you don't want to learn, skip forwards. If you come across something that's difficult to understand because you don't fully understand a previous concept, skip backwards.

I almost never read textbooks this way. I essentially read them in an arbitrary order. I tend to start near the beginning and move forwards. If I encounter something boring, I tend to skip it even if it's something I expect to have to understand eventually. If I encounter something I have difficulty understanding because I don't fully understand a previous concept, I skip backwards in order to review the previous concept. Or I skip forwards in the hopes that the previous concept will somehow become clear later. Or I forget about it and skip to an arbitrary different interesting section. I don't do exercises unless either they seem particularly interesting, or I feel like I have to do them in order to understand the material.

I know that I can sometimes get away with the second method even when other people wouldn't be able to. If I were to read a first-year undergraduate physics textbook, I imagine I could read it in essentially any order without trouble, even though I never took undergraduate physics. But I tend to use this method for all textbooks, including textbooks that are at or above my level (Awodey's Category Theory, Homotopy Type Theory, David Tong's Quantum Field Theory, Figure Drawing for All It's Worth).

Is the second method a perfectly good alternative to the "standard" method? Am I completely shooting myself in the foot by using the second method for difficult textbooks? Is the second method actually better than the "standard" method?

Replies from: Adele_L, polymathwannabe
comment by Adele_L · 2014-09-13T15:13:49.324Z · LW(p) · GW(p)

This is how I read too, usually. I think it's one of those things that works better for some people but not others. I've tried reading things the standard way, and it works for some books, but for other books I just get too bored trudging through the boring parts.

BTW, I've also been reading HoTT, so if you want to talk about it or something feel free to message me!

comment by polymathwannabe · 2014-09-13T14:31:23.347Z · LW(p) · GW(p)

On one hand, it's a good sign that you have a keen sense of what you need to know, how and where to look for it, and at what pace. On the other hand, authors who know more about a subject than you do must have had their reasons to choose the order in which they present their material. I'd say keep listening to your gut on what is important to read, but at least try to get acquainted with the other topics you're choosing not to go deeply into.

comment by Adam Zerner (adamzerner) · 2014-09-12T01:33:16.804Z · LW(p) · GW(p)

What do you guys think about having an ideas/brainstorming section? I don't see too much brainstorming of ideas here. Most posts seem to be very refined thoughts. What about a place to brainstorm some of the less refined thoughts?

Replies from: pcm, polymathwannabe
comment by pcm · 2014-09-12T14:38:19.275Z · LW(p) · GW(p)

What would a separate section accomplish that couldn't be done by putting tags in posts/comments?

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T15:07:22.663Z · LW(p) · GW(p)

Making it explicit that it's available. I think people are hesitant to make such posts because it's infrequently done, so they don't know whether it's appropriate.

comment by polymathwannabe · 2014-09-12T02:07:55.246Z · LW(p) · GW(p)

This seems to be LW's collected wisdom on the matter.

http://wiki.lesswrong.com/wiki/Futility_of_chaos

Replies from: pcm, adamzerner
comment by pcm · 2014-09-12T14:34:28.273Z · LW(p) · GW(p)

Brainstorming does not rely on chaos. It's a method of using System 1 that delays any censoring by System 2.

Some evidence of LW beliefs about it: here and here. CFAR teaches people to brainstorm more often.

comment by Adam Zerner (adamzerner) · 2014-09-12T03:00:37.521Z · LW(p) · GW(p)

I'm a bit confused by what is meant by "futility of chaos", so forgive me if I've misinterpreted it. Let me try to be a bit more clear about what I'm proposing, and let me know if futility of chaos addresses it.

I'm saying that there are ideas that you think are worth brainstorming, and there are ideas that you feel confident enough about to write a post about to get some feedback. Right now it seems that people don't post about the "ideas worth brainstorming" and I suspect that it'd be beneficial if they did and we discussed them.

Futility of chaos seems to be addressing more "chaotic and random" ideas. I don't know enough about math to really know what that means, but I sense that it's different from ideas that smart people on LW judge to be worth brainstorming.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-12T03:57:31.134Z · LW(p) · GW(p)

Brainstorming is too unstructured and unpredictable, a form of "creative disorder" that has received more credit than it deserves.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T04:13:38.614Z · LW(p) · GW(p)

What about discussing ideas that you think have a decent shot at being good and important, but that you can't explain fully and still aren't that confident in?

Replies from: Lumifer
comment by Lumifer · 2014-09-12T04:43:00.981Z · LW(p) · GW(p)

Sure, that's what the open thread is for.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T14:37:17.045Z · LW(p) · GW(p)

I haven't spent enough time commenting on LW to be sure of this, but it doesn't seem to be used that way. Do you think it is? If not, perhaps it would be beneficial to make it clearer that it can/should be used that way.

Also, maybe it'd be a good idea to break the open thread into categories. Eg:

  • Ideas you're willing to work on to implement.
  • Refined ideas.
  • Unrefined ideas.
  • Requests for advice.
  • Small practical questions.
  • Links to articles.
  • Friendly conversation.
  • Discussions that don't leave a trail. One use case would be if you want to talk about something personal but don't want a record of it on the internet.
Replies from: Lumifer
comment by Lumifer · 2014-09-12T14:54:40.088Z · LW(p) · GW(p)

it doesn't seem to be used that way.

Then start using it this way. You don't need permission.

maybe it'd be a good idea to break the open thread into categories

LW has periodic discussions about the need / desirability / implementation of a more granular system for organizing posts. So far these discussions have resulted in nothing; apparently "it's too hard to do" (tm).

But again, if you think it's a good idea, just do it and we'll see how the experiment plays out. For an example, see how the media thread works.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T15:02:22.923Z · LW(p) · GW(p)

Then start using it this way. You don't need permission.

True, but I think permission would be beneficial because I sense that people are hesitant to go against the norms.

So far these discussions have resulted in nothing; apparently "it's too hard to do" (tm).

I don't think I'm a good enough coder yet to contribute, but I'm starting a coding bootcamp on Monday and I do hope to be able to contribute in the next few months.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T15:08:18.087Z · LW(p) · GW(p)

I think permission would be beneficial because I sense that people are hesitant to go against the norms.

Heh. Who, do you think, should be the one to give you permission? :-) And how do you feel about permission culture in general? X-D

Besides, I don't think there are any strong existing norms about putting brainstorming posts into the open thread.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2014-09-12T15:27:16.183Z · LW(p) · GW(p)

Who, do you think, should be the one to give you permission?

Some sort of site guidelines and a UI that makes it clear. For example, if there were categories, it would be clear that the posts in the category are acceptable.

I'm far from a conformist, but I think norms definitely do have a purpose. I don't really have any strong opinions on permission culture in general that I could articulate well.

comment by Ixiel · 2014-09-11T19:54:19.554Z · LW(p) · GW(p)

Could someone please give me some good arguments for a work ethic? I tend to oppose it, but the debate seems too easy so I may be missing something.

Replies from: Torello, Baughn
comment by Torello · 2014-09-11T22:03:37.714Z · LW(p) · GW(p)

Having a work ethic might help you accomplish more things than you would without one.

It's a good reputation boost. "A highly skilled, hard-working x" might be more flattering than "a highly skilled x."

Work ethic might be a signal/facet of conscientiousness, a desirable trait in many domains.

Replies from: Ixiel
comment by Ixiel · 2014-09-11T22:35:54.842Z · LW(p) · GW(p)

That makes sense; I hadn't thought of that. Thanks. Perhaps a critical mass of people would need to accept laziness as a virtue before the choice reads as "this good or that good" rather than "this good or the lack of it."

comment by Baughn · 2014-09-11T20:38:52.877Z · LW(p) · GW(p)

It'll build habits that also make it easier to do things you want when not at work?

That's the big one. I have things I want to do, in far mode, and I find that diligence at work translates to diligence off work. Admittedly I also love my job, but...

Replies from: Ixiel
comment by Ixiel · 2014-09-11T21:25:08.936Z · LW(p) · GW(p)

Thanks for the reply! My question was unclear; I meant the other sense. I strongly believe in doing well whatever one does, but not in seeking to do more work in the first place. I mean the idea that there's something more noble about working 40+ hours a week than not, and that people with sufficient means shouldn't retire in their thirties.

Sure, one can build habits at work, but one can do so at a lower cost than 2000 hours of one's life per year, net of compensation. Admittedly this does not apply so much if you love your job, but hypothetically, if someone values leisure more, is there a way in which choosing that leisure is less ethical?

Replies from: Richard_Kennaway, Lumifer, banx, DanielLC
comment by Richard_Kennaway · 2014-09-12T06:28:25.202Z · LW(p) · GW(p)

"Work" can mean different things, and so also "work ethic".

The way I use it, "work" is whatever you are serious (or at least want to be) about doing, whether it's something that matters in the larger scheme of things or not, and whether or not it earns money. (But having to earn a living makes it a lot easier to be serious about it.)

"Leisure" is whatever you like doing but choose not to be serious about.

In that sense, I'm not much interested in leisure. Idling one's days away on a tropical island is not my idea of fun, and I do not watch television. Valuing seriousness is what I would mean by "work ethic". What one should be serious about is a separate ethical question.

When other people talk about "work", they might mean service to others, and by "leisure" service to oneself. I score low on the "service to others" metric, but for EA people, that is their work ethic.

To others, "work" is earning a living, and "leisure" is whatever you do when you're not doing that. The work ethic relative to that concept is that the pay you get for your work is a measure of the value you are creating for others. If you are idling then you are neglecting your duty to create value all the years that you can, for time is the most perishable of all commodities: a day unused is a day lost to our future light-cone for ever.

Replies from: Ixiel
comment by Ixiel · 2014-09-12T11:25:53.979Z · LW(p) · GW(p)

That is an interesting use of "work" and "leisure", and one with which I was not familiar. I am very serious about my leisure (depending on how you use "serious"... I love semantic arguments for fun, but not everybody does, so I'll cut that here). The more frequent use I have heard is close to its etymology: what one is allowed to do, as opposed to what one has a duty to do. That is anecdotal to the people I know, so it may not be the standard. I am much more serious about what I am allowed to do, and what others are allowed to do, than about even a self-created duty.

Very interesting, and I'd be happy to continue, but to restate the original question with help from the noticed ambiguity: is there a strong argument why spending 80,000 hours in a job for the job's sake is ethically superior to selling enough time to meet one's needs and using the rest for one's own goals?

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2014-09-13T17:55:41.200Z · LW(p) · GW(p)

is there a strong argument why spending 80,000 hours in a job for the job's sake is ethically superior to selling enough time to meet one's needs and using the rest for one's own goals?

To give a more direct answer, "a job for the job's sake" sounds like a lost purpose. In harder times, everyone had to work hard for as many years as they could, to support themselves, their household, and their community, and the community couldn't afford many passengers. Having broken free of the Malthusian wolves, the pressure is off, but the attitudes remain: idleness is sinful.

And then again, from the transhumanist point of view, the pressure isn't off at all, it's been replaced by a different one. We now have the prospect of a whole universe to conquer. How many passengers can the human race afford in that enterprise, among those able to contribute to it?

comment by Richard_Kennaway · 2014-09-12T12:21:03.038Z · LW(p) · GW(p)

is there a strong argument why spending 80,000 hours in a job for the job's sake is ethically superior to selling enough time to meet one's needs and using the rest for one's own goals?

Meeting one's needs is, by definition, necessary, and one's goals are, by definition, what one pursues. Who doesn't do that, beyond people incapable of supporting themselves and people drifting through life with no particular goals?

Replies from: Ixiel
comment by Ixiel · 2014-09-12T15:06:51.933Z · LW(p) · GW(p)

Sure, that's true for both. The former is just more constrained, and I was looking for an argument for a over b.

And thanks for defining; I had thought those definitions too obvious to bear mention. My bad.

comment by Lumifer · 2014-09-12T00:42:08.406Z · LW(p) · GW(p)

The answer really depends on the underlying value system. For example, most varieties of hedonism would find nothing wrong with retiring to a life of leisure at thirty. But if you value, say, self-actualization (a la Maslow), retiring early is a bad idea.

Generally speaking, the experience of so-called trust-fund kids indicates that NOT having to work for a living is bad for you. You can also compare housewives to working women.

Replies from: Viliam_Bur, Ixiel
comment by Viliam_Bur · 2014-09-12T08:07:52.785Z · LW(p) · GW(p)

if you value, say, self-actualization (a la Maslow), retiring early is a bad idea.

If you want to self-actualize in a way that does not (reliably, or soon enough) bring money, retiring early can be useful.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T14:38:47.149Z · LW(p) · GW(p)

I think there's some lack of clarity in this thread about what it means to "retire". There are two interpretations (see e.g. this post):

(1) Retire means financial independence, not having to work for a living, so that you can focus your energy on what you want to do instead of what you have to do.

(2) Retire means a carefree life of leisure where you maximize your hedonics by doing easy and pleasant things and not doing hard and stressful things.

I think these two ways of retiring are quite different and lead to different consequences.

Replies from: Ixiel, army1987
comment by Ixiel · 2014-09-12T15:14:50.322Z · LW(p) · GW(p)

I meant the former, albeit with the possibility that "what you want to do" is not barred from including leisure/hedonics/pleasure.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T15:25:00.633Z · LW(p) · GW(p)

Technically, yes, though people mostly use (1) to mean doing something purposeful -- an activity whose results you can point at and say "I made that" -- while (2) is essentially trying to get as close to wireheading as you currently can :-)

Replies from: Ixiel
comment by Ixiel · 2014-09-12T15:57:26.569Z · LW(p) · GW(p)

Fair enough :)

comment by A1987dM (army1987) · 2014-09-13T10:28:55.138Z · LW(p) · GW(p)

They aren't totally unrelated, because easy and pleasant things are less likely to earn you a living than hard and stressful things, for obvious supply reasons (unless you're unusual, compared to the rest of the labour market, in which kinds of things are easy and pleasant for you).

comment by Ixiel · 2014-09-12T11:32:38.810Z · LW(p) · GW(p)

Thank you for responding. Is there a reason you think it is a bad idea, beyond "Lumifer says so"?

I have thought about reading up on housewives, but not asking (the women's studies experts I know are VERY sensitive in their field, but quite engaging in others, so I'm afraid to talk shop). Could you recommend a source on each side?

Replies from: Lumifer
comment by Lumifer · 2014-09-12T14:41:13.034Z · LW(p) · GW(p)

Sorry, I don't have any links handy, but you should be able to google up trust-fund kids' issues quite easily. With respect to housewives, it's mostly personal observations, aka anecdata. I would be wary of studies on the subject, as it is a political minefield and hard to research due to confounders and fuzzy definitions.

Replies from: Ixiel
comment by Ixiel · 2014-09-12T15:12:14.772Z · LW(p) · GW(p)

Yeah, likely to get hit over the latter :).

The former is very familiar to me in my circles, and if anything they are more happy/fulfilled/productive than the wage reliant, though both extremes exist in both groups.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T15:29:06.418Z · LW(p) · GW(p)

I am not saying that working for a living is necessarily better; my point is that being financially independent has its own particular failure mode, the existence of which should be taken into account.

Replies from: Ixiel
comment by Ixiel · 2014-09-12T16:01:15.281Z · LW(p) · GW(p)

That's a very good point and too often neglected. There's too much betterness in folks' thoughts, not enough differentness, and the "best" situations fail in different ways than the "worst," which can succeed spectacularly in their own right.

comment by banx · 2014-09-11T22:03:09.903Z · LW(p) · GW(p)

It's less ethical if you think that you can get more resources by working, and that those resources can be used to create an ethically superior world.

Replies from: Ixiel
comment by Ixiel · 2014-09-12T11:37:52.429Z · LW(p) · GW(p)

We might each be holding the other's point constant. Sure, one can get more money by working, but I meant aside from that. Did you mean aside from the best alternative use of 40 hours per week?

Replies from: banx
comment by banx · 2014-09-12T22:47:58.497Z · LW(p) · GW(p)

I just meant that working might be an opportunity to better accomplish some goal you deem ethically relevant (e.g., by earning money and donating it or by developing FAI or the cure for some disease). I'm not arguing that it is. That depends on what the goals are and what your opportunities (both "work" and "leisure" using your definitions) are.

comment by DanielLC · 2014-09-11T23:48:49.459Z · LW(p) · GW(p)

You shouldn't retire in your thirties because it limits the amount you can help others.

Replies from: Lumifer, Ixiel
comment by Lumifer · 2014-09-12T00:35:39.968Z · LW(p) · GW(p)

Aren't you assuming a particular value system?

Replies from: DanielLC
comment by DanielLC · 2014-09-12T03:39:21.823Z · LW(p) · GW(p)

Yeah. I don't know any good reason if you're an egoist.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T04:41:29.041Z · LW(p) · GW(p)

Self-actualization, for example.

comment by Ixiel · 2014-09-12T11:35:10.536Z · LW(p) · GW(p)

Are you saying the workplace is a uniquely strong opportunity to make the world better, as opposed to other avenues, or just that more money means more ability to help? If the former, why?

Replies from: DanielLC
comment by DanielLC · 2014-09-13T00:11:12.398Z · LW(p) · GW(p)

Division of labor. If you're not best suited to helping people, you're better off doing what you are best suited for and hiring someone else to help people.

comment by skeptical_lurker · 2014-09-12T17:10:34.181Z · LW(p) · GW(p)

My 30 day karma just jumped over 40 points since I checked LW this morning. Either I've said something really popular (and none of my recent comments have karma that high), or there's a bug.

Replies from: Richard_Kennaway, Adele_L, gjm, NancyLebovitz, army1987, polymathwannabe
comment by Richard_Kennaway · 2014-09-13T18:07:09.557Z · LW(p) · GW(p)

I got about +30 as well, and only a small amount of it is due to recent upvotes. And despite the jump, I'm out of the top 30-day contributors list, which I've been in and out of the bottom of for some weeks. The other names on that list are regulars there, so they must have got some upvotes as well.

Perhaps some systematic downvoter had all his votes reversed?

comment by Adele_L · 2014-09-12T22:12:12.448Z · LW(p) · GW(p)

My guess is that someone with a similar political ideology to you upvoted forty of your comments on the recent political post.

ETA: Well I've been struck by the mysterious mass-upvoter as well! I'm pretty sure the political motivation hypothesis is wrong now.

Replies from: None, skeptical_lurker
comment by [deleted] · 2014-09-13T07:44:38.722Z · LW(p) · GW(p)

The same thing happened to me today - within 12 hours I got at least +1 karma on every single post of mine from the last month and a half or so, which happened to be primarily on the history of life / 'great filter' threads.

I don't think it's ideological. Mysterious mass-upvoter?

comment by skeptical_lurker · 2014-09-14T23:07:43.077Z · LW(p) · GW(p)

Since my political ideology in that debate was trying to steelman both sides, I doubt this is the case, unless there is a fanatical steelmanner out there.

comment by gjm · 2014-09-13T18:21:47.388Z · LW(p) · GW(p)

I've seen several unexpected increases on the order of 10 points over the last couple of weeks. (I don't remember the exact dates.) My guess was gradual undoing of prior mass-downvoting, but a Mystery Mass Upvoter is certainly another possibility.

[EDITED to add ...] A possible variant of the Mystery Mass Upvoter hypothesis: we have a Mystery Small-Mass Upvoter, who is upvoting old posts in Main (maybe because s/he is new here and reading through old material). But that only works if everyone affected has old posts in Main, which I don't think is the case.

Replies from: None
comment by [deleted] · 2014-09-15T00:39:11.678Z · LW(p) · GW(p)

Hypothesis: we are the subjects of an experiment.

I seem to recall recent instances of a mysterious mass downvoter that produced several threads of people complaining / trying to figure out what could be done.

What if someone is doing the same thing, but with upvotes, to look for bias in community reactions?

Or they're just trolling. Whichever.

Replies from: gjm
comment by gjm · 2014-09-15T09:27:10.319Z · LW(p) · GW(p)

Interesting idea. Though I don't think it's really indicative of any bad sort of bias if people get angrier about gratuitous downvotes than about gratuitous upvotes.

comment by NancyLebovitz · 2014-09-13T00:01:43.865Z · LW(p) · GW(p)

My karma's been running higher than I expected, too.

I wish there were some way to track karma diffs. So far as I know, there's no way to do it for older comments and posts.

comment by A1987dM (army1987) · 2014-09-12T22:47:47.786Z · LW(p) · GW(p)

So it's not just me? I also seemed to see something like that, but I assumed I just misremembered my previous 30-day karma score or something.

Replies from: army1987
comment by A1987dM (army1987) · 2014-09-14T20:11:29.451Z · LW(p) · GW(p)

Just gained another ~25 karma. Huh.

comment by polymathwannabe · 2014-09-12T21:41:55.242Z · LW(p) · GW(p)

Indeed, it sounds like a bug. It might need direct fixing. Here, have a downvote. :-D

comment by CWG · 2014-09-12T05:06:48.990Z · LW(p) · GW(p)

Trans-human thought experiment:

  • Scenario 1: A human brain is converted to a virtual brain through a destructive process (as described in many science-fiction stories). In what sense is this virtual intelligence the same "person" as the original, organic person?
  • Scenario 2: A human brain is converted to a virtual brain through a non-destructive process. The original, organic person lives on as before. In what sense is this virtual intelligence the same "person" as the original, organic person – and is the answer the same as in scenario 1?

Why this seems to matter: If a virtual version of me is not really me in the sense of being a continuation of my experience, then what does it matter to me if that virtual brain exists, as opposed to some other virtual brain? Is there actually any advantage to working out how to convert people en masse to virtual intelligences?

(I am aware that the questions of identity and "being a continuation of my experience" are vague, but I anticipate that replies here will help me get clearer.)

Replies from: Viliam_Bur, polymathwannabe
comment by Viliam_Bur · 2014-09-12T08:27:31.792Z · LW(p) · GW(p)

I am not sure about this, but it seems to me that in both cases it is the same person. It's just that in scenario 2 we have two copies that start to diverge at that point; they are both continuations of the old one, but not the same as each other.

This does not have a good equivalent in our intuition, because we usually don't "branch" this way. But you can imagine a magic spell that creates two identical humans from you. Both are you, but from the moment of copying they start evolving differently, so after some time it is just like two twins with a shared memory of everything before that moment.

comment by polymathwannabe · 2014-09-12T13:20:31.612Z · LW(p) · GW(p)

In both cases I'd say they're different persons.

I can see why a theory of consciousness that argues that you're not the atoms, but the pattern, wouldn't care whether that pattern is realized in meat or in silicon, but my subjective experience of continuity of memories is what confirms that I'm still me. Once you copy my mind with zero loss onto a digital, durable substrate, my original brain would still have strong objections to being switched off.

comment by [deleted] · 2014-09-11T14:59:52.030Z · LW(p) · GW(p)

Could someone recommend me a logic textbook? I need it to cover syntax and semantics for propositional and first-order classical logic, as well as preferably including material on intuitionistic logic and higher-order logics. I could really use material on any existing attempts to ground semantics or proof systems in computation, too.

"Computation and Logic" is my first candidate, though I want something else to go with it. This is for trying to work on logical probability research, and also because I've always been interested in type theory as a research field (hence wanting coverage of intuitionistic logic, which might as well be called computational logic what with the Curry-Howard Isomorphism).

Replies from: None, pragmatist
comment by [deleted] · 2014-09-11T16:24:49.736Z · LW(p) · GW(p)

The first five chapters of Marker's Model Theory will satisfy

syntax and semantics for propositional and first-order classical logic

and also provide some information about type theory in the context of model theory. I know it doesn't satisfy all of your requirements, but it is a seriously good book with an excellent learning curve. I took a semester course covering the first three chapters in undergrad. It almost convinced me to work in mathematical logic, but sadly economic incentives trumped aesthetic ones.

comment by pragmatist · 2014-09-12T15:06:32.332Z · LW(p) · GW(p)

This is the textbook we used in graduate school, and it is very good. Not sure if this is what you were referring to as "Computability and Logic". It covers second-order logic, but not intuitionistic logic, as far as I can remember.

Replies from: None
comment by [deleted] · 2014-09-12T20:04:13.216Z · LW(p) · GW(p)

That's indeed the one I was referring to.

comment by Lumifer · 2014-09-10T14:39:45.156Z · LW(p) · GW(p)

Searching for genes that make people smart -- we still have no idea...

Replies from: gwern
comment by gwern · 2014-09-10T23:09:41.673Z · LW(p) · GW(p)

we still have no idea...

No, this is an unmitigated triumph. It's amazing how people take such a negative view of this.

So let me get this straight: over the past few decades we have slowly moved from a viewpoint where Gould is a saint, intelligence doesn't exist and has no predictive value since it's a racist made-up concept promoted by incompetent hacks, and it has no genetic component and definitely nothing which could possibly differ between any groups at all, to a viewpoint where the validity of intelligence tests has been shown in multiple senses, the amount of genetic contribution has been accurately estimated, the architecture nailed down as highly polygenic & additive, the likely number of variants estimated, and we've started accumulating the sample sizes to detect variants -- and not only have we detected 60+ variants with >90% probability* (see the remarks on the Bayesian posterior probability in the supplementary material), we even have 3 which pass the usual (moronic, arbitrary, unjustified) statistical-significance thresholds -- and wait, there's more, they also predict IQ out of sample, and many of the implicated variants are known to relate to the central nervous system! -- and this is a disappointment where 'we still have no idea' and the findings are 'maddeningly small' with 'inconclusive findings'?

* which imply you can predict much better than the article's calculation of 1.8 points

You've got to be kidding me. Or is this how zeitgeists change? They get walked back step by step and people pretend nothing has changed? When the tests are shown to be unbiased and predictive, we stop talking about them; when twin studies of every variety show genetic influences on intelligence, we talk about how very difficult causal inference is and how twin studies can go wrong; when genetics comes up, suddenly everyone is discussing how nonadditive and gene-environment effects will make identification impossible (never mind that there's no reason to expect them to be large parts of the genetics); when good genetic candidates are found which don't pass arbitrary thresholds, that's taken as evidence they don't exist and genetic influence is irrelevant; and when enough samples are taken to satisfy that, each of the hits is then deprecated as small and irrelevant? And the changes and refutations quietly go down the memory hole. 'Of course some of intelligence is genetic, everyone knows that - but all the variants are small, so really, this changes nothing at all.'

No, the Rietveld papers this year and last were historic triumphs. The theory has been as proven as it needs to be. The fundamental points no longer need to be debated - the debate is over. In some respects, it's now a pretty boring topic.

All that's left is engineering and application: getting enough samples to infer the rest to sufficiently high posterior probabilities to make good-enough predictions, and exploiting new possibilities like embryo-selection.

Replies from: Lumifer
comment by Lumifer · 2014-09-11T16:20:26.207Z · LW(p) · GW(p)

No, this is an unmitigated triumph. It's amazing how people take such a negative view of this.

We are looking at this in different contexts and using different baselines.

You are talking about how far we have come since the genetic component of intelligence was dismissed as the malicious fantasy of evil people; now it's just science. Sure (though you still can't discuss it publicly). I'm talking about this particular paper and how big a step it is compared to, say, a couple of years ago.

My baseline is much more narrow and technical. It is "we look at the genome of a baby and have no idea what its IQ will be when it grows up". That is still largely the case, and the paper's ability to forecast does not look impressive to me.

The fact that intelligence is largely genetic and highly polygenic is already "normal" for me -- my attitude is "yeah, sure, we know this, what have you done for me lately".

I appreciate the historical context, which we are not free of by any stretch of the imagination (so, no, I don't see unmitigated triumphs), but I was not commenting on progress over the last half-century. I want out-of-sample predictions of noticeable magnitude, and I think getting there will take a bit more than just engineering.

Replies from: gwern
comment by gwern · 2014-09-11T16:45:27.232Z · LW(p) · GW(p)

My baseline is much more narrow and technical. It is "we look at the genome of a baby and have no idea what its IQ will be when it grows up". That is still largely the case, and the paper's ability to forecast does not look impressive to me.

This paper validates the approach (something a lot of people, for a lot of different reasons, were skeptical of), and even on its own merits we still get some predictive power out of it: the 3 top hits cover a range of ~1.5 points, and the 69 variants at 90% confidence predict even more. (I'm not sure how much, since they don't bother to use all their data, but if we assume the 69 effects are evenly distributed between 0 and 0.5 points, then the mean effect is 0.25 points and the total predictive power is more than a few points.)
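As a rough sanity check on that arithmetic (a back-of-the-envelope sketch under assumed numbers -- uniform effect sizes and 50% allele frequencies are simplifications, not figures from the paper):

```python
import math

# If 69 variants have effects uniform on [0, 0.5] IQ points per allele and
# every allele frequency is 0.5, the resulting polygenic score has a
# non-trivial spread across people.
n = 69
e_beta_sq = 0.25 / 3                     # E[beta^2] for Uniform(0, 0.5)
var_per_snp = 2 * 0.5 * 0.5 * e_beta_sq  # 2p(1-p) * E[beta^2], with p = 0.5
sd = math.sqrt(n * var_per_snp)
print(f"score sd ~ {sd:.1f} IQ points")  # ~1.7, so +/-2 sd spans ~7 points
```

A standard deviation of ~1.7 points means the tails of such a predictor differ by several points, consistent with "more than a few points".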

What use is this result? Well, what use is a new-born baby? As the cryptographers say, 'attacks only get better'.

I think getting there will take a bit more than just engineering.

And, uh, why would you think that? There's no secret sauce here. Just take a lot of samples and run a regression. I don't think they even used anything particularly complex like a lasso or an elastic net.
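A minimal sketch of what that looks like, on simulated data (an illustration of the general mass-univariate GWAS approach, not the actual pipeline of the paper; real analyses add covariates such as principal components and need sample sizes in the hundreds of thousands to detect effects this small):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 10_000, 1_000
maf = rng.uniform(0.05, 0.5, n_snps)          # minor-allele frequencies
G = rng.binomial(2, maf, (n_people, n_snps))  # genotypes: 0/1/2 minor alleles
beta = np.zeros(n_snps)
beta[:50] = rng.uniform(0, 0.5, 50)           # 50 truly causal variants
y = G @ beta + rng.normal(0, 15, n_people)    # phenotype is mostly noise

# Mass-univariate scan: one simple additive regression per SNP.
Gc = (G - G.mean(0)) / G.std(0)
yc = y - y.mean()
beta_hat = Gc.T @ yc / n_people               # standardized effect estimates
z = beta_hat * np.sqrt(n_people) / yc.std()   # approximate z-scores
print("top hits:", np.argsort(-np.abs(z))[:10])
```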

Replies from: Lumifer, Azathoth123
comment by Lumifer · 2014-09-11T17:11:32.116Z · LW(p) · GW(p)

There's no secret sauce here. Just take a lot of samples and run a regression.

Pretend for a second it's a nutrition study and apply your usual scepticism :-) You know quite well that "just run a regression" is, um... rarely that simple.

To give one obvious example, interaction effects are an issue, including interactions between genes and the environment.

Replies from: gwern
comment by gwern · 2014-09-11T23:10:44.504Z · LW(p) · GW(p)

Pretend for a second it's a nutrition study and apply your usual scepticism :-) You know quite well that "just run a regression" is, um... rarely that simple.

No, that's the great thing about genetic associations! First, genes don't change over a lifetime, so every association is in effect a longitudinal study where the arrow of time immediately rules out A<-B, i.e. reverse causation in which IQ somehow causes particular variants to be overrepresented; that takes out one of the three causal pathways. Then you're left with confounding -- but there's almost no way for a third variable to pick out people with particular alleles and grant them higher intelligence (no greenbeard effects), and population differences are dealt with by using relatively homogeneous samples & controlling for principal components -- so you don't have to worry much about A<-C->B. All you're left with is A->B.

To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.

But they're not. They're not a large part of what's going on. And they don't affect the associations you find through a straight analysis looking for additive effects.

Replies from: Lumifer
comment by Lumifer · 2014-09-12T00:46:32.784Z · LW(p) · GW(p)

genes don't change over a lifetime

But their expression does.

They're not a large part of what's going on.

How do you know?

Replies from: gwern
comment by gwern · 2014-09-14T21:37:12.366Z · LW(p) · GW(p)

But their expression does.

An expression that unfolds in circumstances dictated by what genes one started with.

How do you know?

Because if they were a large part of what was going on, the estimates would not decompose so cleanly and the methods would not work so well.

comment by Azathoth123 · 2014-09-13T02:07:45.614Z · LW(p) · GW(p)

Keep in mind that the outside view of biological complexity is that

The known unknowns have tended to end up lower in complexity than we've predicted. But unknown unknowns continue to blindside us, unabated, adding to the total complexity of the human body.

Or to phrase this another way:

people accurately estimate the total complexity and then apportion it among the known unknowns, thus creating an overestimate.

Replies from: gwern
comment by gwern · 2014-09-14T21:35:19.562Z · LW(p) · GW(p)

I don't think the outside view is relevant here. We have coming up on a century of twin studies and behavioral genetics, with very motivated people coming up with possible problems, and so far the traditional estimates are looking pretty good: for example, when people go and look at genetics directly, the estimates for simple additive heritability look very similar to the traditional ones. Just the other day there was an example of a SNP study confirming the estimates from twin studies: "Substantial SNP-based heritability estimates for working memory performance", Vogler et al 2014. If all these complexities were real and serious problems and the Outside View advises us to be skeptical, why do we keep finding that the SNP/GCTA estimates look exactly like we would have predicted?

Replies from: Azathoth123
comment by Azathoth123 · 2014-09-15T02:46:58.486Z · LW(p) · GW(p)

Ok, I confess I have no idea what SNP and GCTA are. As for the study Lumifer linked to, Razib Khan's analysis of it is that it suggests intelligence is a complex polygenic trait. This should not be surprising, as it is certainly an extremely complex trait in terms of phenotype.

comment by ChristianKl · 2014-09-09T20:18:43.602Z · LW(p) · GW(p)

Does LW markup have anything for text formatting that does what the `<sub>` HTML tag does?

Replies from: gwern
comment by gwern · 2014-09-09T20:48:42.428Z · LW(p) · GW(p)

Some forms of Markdown do, but not LW's, AFAIK. If you're patient, you can use the LaTeX formatting to do subscripts.
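For example (a minimal sketch, assuming LW's LaTeX support follows standard math-mode conventions; the exact delimiters may differ):

```latex
% Standard math-mode subscripts:
$x_{1}$                      % x with subscript 1
$a_{i,j}$                    % double subscript
$\mathrm{H}_{2}\mathrm{O}$   % upright letters with a lowered 2
```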

comment by Rare · 2014-09-16T19:19:20.402Z · LW(p) · GW(p)

I've been thinking about the Roko's Basilisk thought experiment, considering the drivers of creating a Basilisk, the next logical steps such an entity might conceivably take, and the risk it presents in the temptation to protect ourselves. Namely, that we may be tempted to create an alternative FAI which would serve to protect humankind against uFAI -- a protector AI -- and how that distorts the Basilisk.

A protector AI would likely share, evolve, or copy from any future Basilisk or malevolent intelligence in order to protect us from it or prevent its creation, much as antibodies must first be exposed to a threat before they can protect us from it. If we created this AI, it would undoubtedly need to simulate the creation of every conceivable Basilisk.

It would likely motivate any potential Basilisk creators to think in such a way that we would be creating or sandboxing the Basilisk for examination in some way; it would also encourage us to create the worst possible Basilisks with the most damaging consequences, so it could in turn examine them and vaccinate real humanity.

If we did create a protector, it is entirely possible that it would eventually become an incubator for something completely unmanageable as it iterated through progressively superior Basilisks, risking the protector itself becoming corrupt.

Lastly, if we think too much about a protector AI, there's still the possibility that we are in the Basilisk's simulation; in that case, we may improve it, vaccinate against it, or create an incubator -- as an incubated Basilisk would be interested in creating a weak protector.

So I just felt I should share the thought experiment: there's a chance that not creating a Basilisk, or creating an inferior one, will be responsible for allowing the real event to bypass our protection, or that we may create a superior Basilisk through an incubator.

comment by advancedatheist · 2014-09-09T06:27:39.298Z · LW(p) · GW(p)

New York magazine features a lengthy profile of transgender transhumanist Martine Rothblatt:

The Trans-Everything CEO

http://nymag.com/news/features/martine-rothblatt-transgender-ceo/

Steve Sailer claims he crossed paths with "Martin" Rothblatt in UCLA's MBA program in the early 1980s:

More Than I Care to Know About Martine Luther Queen, Highest Paid "Female" CEO

http://www.unz.com/isteve/more-than-i-care-to-know-about-martine-luther-queen-highest-paid-female-ceo/

BTW, Sailer in a comment compares transhumanism to Scientology. I have my own issues with transhumanism along the lines of: "You guys still haven't figured out how to do it right!"

Replies from: pragmatist
comment by pragmatist · 2014-09-09T09:10:36.994Z · LW(p) · GW(p)

Why do you think that Sailer post is worthy of attention? It doesn't add much useful information or analysis, it's almost relentlessly unpleasant (consistently referring to Rothblatt with a pronoun she has disavowed and mocking her gender identification), and many of the comments are simply heinous ("Martine’s “wife” is black, which means that he was too socially awkward to find an attractive woman."). Also, the comparison of transhumanism and Scientology is pretty silly.

Replies from: bramflakes
comment by bramflakes · 2014-09-09T10:06:08.715Z · LW(p) · GW(p)

There's a large intersection of LW readers and Steve Sailer readers.

Replies from: pragmatist, None
comment by pragmatist · 2014-09-09T10:35:17.701Z · LW(p) · GW(p)

I'm aware of that. I'm guessing (and hoping) that's because the usual quality of his writing is significantly superior to that post. I'm wondering why advancedatheist saw fit to link to that piece. Surely it must be more than the mere fact that Steve Sailer wrote it. Presumably he thinks there is something useful to be gleaned from it, but I cannot see what that is.

Replies from: advancedatheist
comment by advancedatheist · 2014-09-10T00:25:35.507Z · LW(p) · GW(p)

Sailer apparently witnessed Rothblatt's behavior during an earlier episode of this individual's life and wrote about it a few months back. Sailer's recollection of his experience of Rothblatt has as much value as anyone else's, even if Rothblatt rubbed him the wrong way back then, and even if Rothblatt's recent behavior strikes him as ridiculous.

I find Dale Carrico's mockery of Rothblatt interesting as well, because in other contexts Rothblatt -- a Jewish sex changer who fathered children with two different black women, one of them a Kenyan -- would be a poster child for the left's diversity ideology.

I would add that Sailer has studied male-to-female sex transformers, and he thinks that many of them display no female characteristics at all. Instead they come across as aggressively male or nerdy.

Replies from: pragmatist
comment by pragmatist · 2014-09-10T16:42:14.584Z · LW(p) · GW(p)

I really doubt Sailer has studied trans women in any meaningful sense. Meeting a couple of them doesn't count as studying them. Actual scientific studies of trans women show a number of female-typical neurological features.

comment by [deleted] · 2014-09-10T18:31:31.327Z · LW(p) · GW(p)

I cannot imagine why. He is uniformly low on insight and high in insult.