Open thread, Mar. 9 - Mar. 15, 2015

post by MrMind · 2015-03-09T07:48:20.660Z · LW · GW · Legacy · 109 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

109 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2015-03-12T06:03:27.654Z · LW(p) · GW(p)

The sequences eBook, Rationality: From AI to Zombies, will most likely be released early in the day on March 13, 2015.

Replies from: ciphergoth, RobbBB
comment by Paul Crowley (ciphergoth) · 2015-03-13T08:29:42.137Z · LW(p) · GW(p)

This has been published! I assume a Main post on the subject will be coming soon so I won't create one now.

Unless I am much mistaken, the Pebblesorters would not approve of the cover :)

comment by Rob Bensinger (RobbBB) · 2015-03-13T12:19:44.601Z · LW(p) · GW(p)

And by March 13 I mean March 12.

comment by advancedatheist · 2015-03-09T23:54:48.174Z · LW(p) · GW(p)

Google Ventures and the Search for Immortality: Bill Maris has $425 million to invest this year, and the freedom to invest it however he wants. He's looking for companies that will slow aging, reverse disease, and extend life.

http://www.bloomberg.com/news/articles/2015-03-09/google-ventures-bill-maris-investing-in-idea-of-living-to-500

Replies from: None
comment by [deleted] · 2015-03-11T07:13:49.604Z · LW(p) · GW(p)

You'd think that having worked in a biomedical lab at Duke, he'd know better than to say things like: “We actually have the tools in the life sciences to achieve anything that you have the audacity to envision”

Replies from: JoshuaZ
comment by JoshuaZ · 2015-03-11T14:55:04.234Z · LW(p) · GW(p)

Yes, but he presumably also knows what sort of things one might say if one wants other investors to join in on a goal.

comment by Paul Crowley (ciphergoth) · 2015-03-09T21:22:39.206Z · LW(p) · GW(p)

I remember reading an article here a while back about a fair protocol for making a bet when we disagree on the odds, but I can't find it. Anyone remember what that was? Thanks!

Replies from: badger, philh
comment by badger · 2015-03-10T13:51:47.602Z · LW(p) · GW(p)

From the Even Odds thread:

Assume there are n people. Let S_i be person i's score for the event that occurs, according to your favorite proper scoring rule. Then let the total payment to person i be

T_i = S_i − (1/(n−1)) · Σ_{j≠i} S_j

(i.e. the person's score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
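
To make the scheme concrete, here is a minimal sketch (my own, not from the linked thread), assuming the quadratic (Brier-style) proper scoring rule; the function and variable names are illustrative.

```python
def quadratic_score(p, outcome):
    # Quadratic (Brier-style) proper score for reporting probability p that a
    # binary event occurs, given the observed outcome (1 if it occurred, else 0).
    # Higher is better; the score lies between -1 and 1.
    prob_of_outcome = p if outcome == 1 else 1 - p
    return 2 * prob_of_outcome - (p ** 2 + (1 - p) ** 2)

def payments(probs, outcome):
    # probs: each person's reported probability of the event.
    # Returns T_i for each person: own score minus the average score of the others.
    n = len(probs)
    scores = [quadratic_score(p, outcome) for p in probs]
    total = sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

# Two people disagree (70% vs. 40%) and the event happens:
print(payments([0.7, 0.4], outcome=1))  # -> [0.54, -0.54]
```

Note that the payments always sum to zero, which is the budget-balance property mentioned above.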

comment by philh · 2015-03-10T00:25:30.658Z · LW(p) · GW(p)

http://lesswrong.com/lw/hpe/how_should_eliezer_and_nicks_extra_20_be_split/ ?

edit: no, I don't think that's it. I think I do remember the post you're talking about, and I thought it included this anecdote, but this isn't the one I was thinking of.

edit 2: http://lesswrong.com/lw/jgv/even_odds/ is the one I was thinking of.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2015-03-12T08:22:41.919Z · LW(p) · GW(p)

Great—thanks! (Thanks to badger below too)

comment by polymathwannabe · 2015-03-11T14:16:29.128Z · LW(p) · GW(p)

... a major part of the reliable variance of cognitive bias tasks is unique, and implies that a one-factor model of rational behavior is not plausible.

Replies from: gwern, JoshuaZ
comment by JoshuaZ · 2015-03-11T18:08:04.810Z · LW(p) · GW(p)

That is highly inconvenient. It means there is likely no magic silver bullet for teaching people to deal with cognitive biases.

Also, this is further evidence for the already fairly strong thesis that intelligence and skill at rational thinking are not the same thing.

comment by DanielLC · 2015-03-09T19:42:24.560Z · LW(p) · GW(p)

I'm toying with the idea of programming a game based on The Murder Hobo Investment Bubble. The short version is that Old Men buy land infested with monsters, hire Murder Hobos to kill the monsters, and resell the land at a profit. I want to make something that models the whole economy, with individual agents for each Old Man, Murder Hobo, and anything else I might add. Rather than explicitly program the bubble in, it would be cool to use some kind of machine learning algorithm to figure everything out. I figure they'll make the sorts of mistakes that lead to investment bubbles automatically.

There are two problems. First, I have neither experience nor training with any machine learning except for Bayesian statistics. Second, it's often not clear what to optimize for. I could make some kind of scoring system where every month everyone who is still alive has their score increase by the log of their money or something, but that would still only work well if I just use scores from the previous generation, which is slower-paced than I'd like.
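
As a very rough illustration of that scoring idea (my own sketch, not part of any existing design; the field names are made up), each surviving agent could accumulate the log of its money every simulated month:

```python
import math

def update_scores(agents):
    # agents: list of dicts with 'alive' (bool), 'money' (float), 'score' (float).
    # Each month, every agent that is still alive adds log(money) to its score.
    for agent in agents:
        if agent["alive"] and agent["money"] > 0:
            agent["score"] += math.log(agent["money"])

agents = [{"alive": True, "money": 120.0, "score": 0.0},
          {"alive": False, "money": 0.0, "score": 3.5}]
update_scores(agents)  # only the living agent's score changes
```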

Old Men could learn whether or not Murder Hobos will work for a certain price, and whether or not they'll find more within a certain time frame, but if they buy a bad piece of land it's not clear how bad this is. They still have the land, but it's of an uncertain value. I suppose I could make it so they just buy options, and if they don't sell the land within a certain time period they lose it.

Murder Hobos risk dying, which has an unknown opportunity cost. I'm thinking of just having them base the expected opportunity cost of death on the previous generation, but then it would take them a generation to respond to the fact that demand is way down and they need to start taking risky jobs for low pay.

Does anyone have any suggestions? I consider "give up and do something else instead" to be a valid suggestion, so say that if you think it's what I should do.

Edit: I could have Murder Hobos work out the expected opportunity cost of death by checking what portion of Murder Hobos of each level died the previous year and how long it's taking them to level up.

Replies from: Lumifer, Emile, g_pepper
comment by Lumifer · 2015-03-09T20:21:40.598Z · LW(p) · GW(p)

Is it a game or is it an economic simulation? If a game, what does the player do?

Replies from: DanielLC
comment by DanielLC · 2015-03-09T20:47:38.652Z · LW(p) · GW(p)

The player can be an Old Man or a Murder Hobo. They make the same sort of choices the computer does, and at the end they can see how they compare to everyone else.

comment by Emile · 2015-03-09T21:26:26.602Z · LW(p) · GW(p)

I'm toying with the idea of programming a game based on .

Are you missing a word there?

Replies from: DanielLC
comment by DanielLC · 2015-03-09T21:30:13.608Z · LW(p) · GW(p)

Fixed. I messed up the link.

comment by g_pepper · 2015-03-09T21:04:49.793Z · LW(p) · GW(p)

if they buy a bad piece of land it's not clear how bad this is. They still have the land, but it's of an uncertain value. I suppose I could make it so they just buy options, and if they don't sell the land within a certain time period they lose it.

You could charge a periodic "property tax"; that way, the longer a player holds on to a property, the more it costs the player.

Replies from: DanielLC
comment by DanielLC · 2015-03-09T21:29:27.238Z · LW(p) · GW(p)

That would make it even more complicated.

comment by Bill_McGrath · 2015-03-10T20:59:58.257Z · LW(p) · GW(p)

Does anyone have any good web resources on how to be a good community moderator?

A friend and I will shortly be launching a podcast and want to have a Reddit community where listeners can interact with us. He and I will be the forum's moderators to begin with, and I want to research how to do it well.

Replies from: kpreid, stellartux
comment by kpreid · 2015-03-15T02:22:39.007Z · LW(p) · GW(p)

Here is a thing at Making Light. There are probably other relevant posts on said blog, but this one seems to have what I consider the key points.

I'll quote some specific points that might be more surprising:

5. Over-specific rules are an invitation to people who get off on gaming the system.

9. If you judge that a post is offensive, upsetting, or just plain unpleasant, it’s important to get rid of it, or at least make it hard to read. Do it as quickly as possible. There’s no more useless advice than to tell people to just ignore such things. We can’t. We automatically read what falls under our eyes.

10. Another important rule: You can let one jeering, unpleasant jerk hang around for a while, but the minute you get two or more of them egging each other on, they both have to go, and all their recent messages with them. There are others like them prowling the net, looking for just that kind of situation. More of them will turn up, and they’ll encourage each other to behave more and more outrageously. Kill them quickly and have no regrets.

comment by stellartux · 2015-03-13T18:02:45.035Z · LW(p) · GW(p)

I don't know of any resources, but I moderated a community once, and did absolutely no research and everything turned out fine. There were about 15 or so core members in the community and maybe a couple of hundred members in total. My advice is to make explicit rules about what is and is not allowed in the community, and try to enforce them as evenly as possible. If you let people know what's expected and err on the side of forgiveness when it comes to rule violations, most people in the community will understand and respect that you're just doing what's necessary to keep the community running smoothly.

We had two resident trolls who would just say whatever was the most aggravating thing they could think of, but after quite a short time people learned that that was all they were doing and they became quite ineffective. There was also a particular member whom everyone in the community seemed to dislike and who was continually the victim of quite harsh bullying from most of the other people there. Again, the hands-off approach seemed to work best: while most people were mean to him, he often antagonised them and brought more attacks onto himself, so I felt it wasn't necessary for me to intervene, as he was making everything worse for himself. So yeah, I recommend being as hands-off as possible when it comes to mediating disputes, only intervening when absolutely necessary. That being said, as a moderator you are usually in a position to set up games and activities that the rest of the community would be less inclined to organise, or wouldn't have the moderator powers necessary to set up.

If I were you I'd focus most of my energy on setting up ways for the community to interact constructively; it will most likely lead to fewer disputes to mediate, as people won't start arguments just for the sake of having something to talk about.

comment by hydkyll · 2015-03-10T11:00:49.031Z · LW(p) · GW(p)

I'm thinking about starting a new political party (in my country getting into parliament as a new party is e̶a̶s̶y̶ not virtually impossible, so it's not necessarily a waste of time). The motivation for this is that the current political process seems inefficient.

Mostly I'm wondering if this idea has come up before on lesswrong and if there are good sources for something like this.

The most important thing is that no explicit policies are part of the party's platform (i.e. no "we want a higher minimum wage"). I don't really have a party program yet, but the basic idea is as follows: There are two parts to this party. The first part is about Terminal Values and Ethical Injunctions: what do we want to achieve, and what do we avoid doing even if it seems to get us closer to our goal? The Terminal Values could just be Frankena's list of intrinsic values. The first requirement for people to vote for this party is that they agree with those values.

The second part is about the process of finding good policies. How to design a process that generates policies that help to satisfy our values. Some ideas:

  • complete and utter transparency to fight the inevitable corruption; publish everything the government does
  • instruct experts to find good policies and then listen to them (how would professional politicians know better than they do?)
    • let the experts give probabilities on explicit predictions how well the policies will work
    • have a public score board that shows how well individual experts did in the past with their predictions
  • when implementing a new policy, set a date at which to evaluate the efficacy and say in advance what you expect
  • if a policy is found to be harmful, get rid of it; don't be afraid to change your mind (but don't make it unnecessarily hard for businesses to plan for the future by changing policies too frequently)
  • react to feedback from the population; don't wait until the next election

The idea is that the party won't really be judged based on the policies it produces but rather on how well it keeps to the specified process. The values and the process are what identify the party. Of course there should be some room for changing the process if it doesn't work...

The evaluation of policies in terms of how well they satisfy values seems to be a difficult problem. The problem is that Utilitarianism is difficult in practice.

So, there are quite a few open questions.

Replies from: IlyaShpitser, gjm, MrMind, Evan_Gaensbauer
comment by IlyaShpitser · 2015-03-10T11:08:24.342Z · LW(p) · GW(p)

http://www.amazon.co.uk/Swarmwise-Tactical-Manual-Changing-World/dp/1463533152

http://www.smbc-comics.com/?id=2710


I like the first link because it is at least trying to move past feudalism as an organizing principle. The second link is about the fact that it is hard to make groups of people act like we want (because groups of people operate under a set of poorly understood laws, and these laws are likely cousins to things like natural selection in biology).

Public choice folks like to study this stuff, but it seems really really hard.

Replies from: badger
comment by badger · 2015-03-10T17:44:26.242Z · LW(p) · GW(p)

A PDF copy of Swarmwise from the author's website.

comment by gjm · 2015-03-10T13:29:00.962Z · LW(p) · GW(p)

in my country new parties can get into parliament easily, so it's not a waste of time

You may be right, and I don't know the details of your situation or your values, but on the face of it that inference isn't quite justified. It depends on what getting into parliament as such actually achieves. E.g., I can imagine that in some countries it's easy for someone to start a new party and get into parliament, but a new one-person party in parliament has basically zero power to change anything. (It seems like there must be some difficulty somewhere along the line, because if getting the ability to make major changes in what your country does is easy then everyone will want to do it and it will get harder because of competition. Unless somehow this is a huge opportunity that you've noticed and no one else has.)

I like the idea of a political party that has meta-policies rather than object-level policies, but it sounds like a difficult thing to sell to the public in sufficient numbers to get enough influence to change anything.

Replies from: hydkyll
comment by hydkyll · 2015-03-10T16:40:35.860Z · LW(p) · GW(p)

OK, when I said "easy" I exaggerated quite a bit (I edited it in the original post). More accurate would be: "in the last three years at least one new party became popular enough to enter parliament" (the country is Germany and the party would be the AfD; before that, there was the German Pirate Party). Actually, to form a new party the signatures of at least 0.1% of all eligible voters are needed.

but it sounds like a difficult thing to sell to the public in sufficient numbers to get enough influence to change anything.

I also see that problem, my idea was to try to recruit some people on German internet fora and if there is not enough interest drop the idea.

comment by MrMind · 2015-03-11T10:20:27.762Z · LW(p) · GW(p)

What about the process of gaining consensus? I find it hard to believe that laypeople would be attracted by meta-values alone.

comment by Evan_Gaensbauer · 2015-03-11T00:30:17.888Z · LW(p) · GW(p)

Have you floated this idea with anyone else you know in Germany? I'm not asking if you're ready and willing to get to the threshold of 0.1% of German voters (~7000 people). I'm just thinking that more feedback, and having others involved, whether one or two, might help. Also, you could just talk to lots of people in your local network about it. As far as I can tell, people might be loath to make a big commitment like helping you launch a party, but are willing to do trivial favors like putting you in touch with a contact who could give you advice on law, activism, politics, dealing with bureaucracy, finding volunteers, etc.

Do you attend a LessWrong meetup in Germany? If so, float this idea there. At the meetup I attend, it's much easier to get quick feedback from (relatively) smart people in person, because communication errors are reduced, and it takes less time to relay and reply to ideas than over the Internet. Also, in-person is more difficult for us to skip over ideas or ignore them than on an Internet thread.

comment by Manfred · 2015-03-12T20:36:54.045Z · LW(p) · GW(p)

On MIRI's website at https://intelligence.org/all-publications/, the link to Will Sawin and Abram Demski's 2013 paper goes to https://intelligence.org/files/Pi1Pi2Probel.pdf, when it should go to http://intelligence.org/files/Pi1Pi2Problem.pdf

Not sure how to actually send this to the correct person.

comment by Unknowns · 2015-03-12T07:31:38.189Z · LW(p) · GW(p)

There should be some kind of penalty on Prediction Book (e.g. not being allowed to use the site for two weeks) for people who do not check the "make this prediction private" box for predictions that are about their personal life and which no one else can even understand.

Replies from: MathiasZaman
comment by MathiasZaman · 2015-03-12T08:53:53.903Z · LW(p) · GW(p)

Are there ways to share private predictions?

comment by PhilGoetz · 2015-03-11T18:17:38.798Z · LW(p) · GW(p)

Basic question about bits of evidence vs. bits of information:

I want to know the value of a random bit. I'm collecting evidence about the value of this bit.

First off, it seems weird to say "I have 33 bits of evidence that this bit is a 1." What is a bit of evidence, if it takes an infinite number of bits of evidence to get 1 bit of information?

Second, each bit of evidence gives you a likelihood multiplier of 2. E.g., a piece of evidence that says the likelihood is 4:1 that the bit is a 1 gives you 2 bits of evidence about the value of that bit. Independent evidence that says the likelihood is 2:1 gives you 1 bit of evidence.

But that means a one-bit evidence-giver is someone who is right 2/3 of the time. Why 2/3?

Finally, if you knew nothing about the bit, and had the probability distribution Q = (P(1)=.5, P(0)=.5), and a one-bit evidence giver gave you 1 bit saying it was a 1, you now have the distribution P = (2/3, 1/3). The KL divergence of Q from P (log base 2) is only 0.0817, so it looks like you've gained .08 bits of information from your 1 bit of evidence. ???
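
For what it's worth, the 0.0817 figure checks out numerically. A quick sketch (my own), assuming the divergence meant is D(P || Q) with P = (2/3, 1/3) and Q = (1/2, 1/2), in bits:

```python
from math import log2

def kl_bits(p, q):
    # Kullback-Leibler divergence D(P || Q) in bits (log base 2).
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_bits([2/3, 1/3], [0.5, 0.5]))  # ~0.0817 bits
```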

Replies from: PhilGoetz, Douglas_Knight, None, None
comment by PhilGoetz · 2015-03-11T23:22:25.123Z · LW(p) · GW(p)

I think I was wrong to say that 1 bit evidence = likelihood multiplier of 2.

If you have a signal S, and P(x|S) = 1 while P(x|~S) = .5, then the likelihood multiplier is 2 and you get 1 bit of information, as computed by KL-divergence. That signal did in fact require an infinite amount of evidence to make P(x|S) = 1, I think, so it's a theoretical signal found only in math problems, like a frictionless surface in physics.

If you have a signal S, and P(x|S) = .5 while P(x|~S) = .25, then the likelihood multiplier is 2, but you get only .2075 bits of information.

There's a discussion of a similar question on stats.stackexchange.com. It appears that the sum, over a series of observations x, of

log(likelihood ratio = P(x | model 2) / P(x | model 1))

approximates the information gain from changing from model 1 to model 2, but not on a term-by-term basis. The approximation relies on the frequency of the observations in the entire observation series being drawn from a distribution close to model 2.

comment by Douglas_Knight · 2015-03-12T01:04:21.557Z · LW(p) · GW(p)

Yes, there are incompatible uses of the phrase "bits of evidence." In fact, the likelihood version is not compatible with itself: bits of evidence for Heads is not the same as bits of evidence against Tails. But still it has its place. Odds ratios do have that formal property. You may be interested in this wikipedia article. In that version, a bit of information advantage that you have over the market is the ability to add log(2) to your expected log wealth, betting at the market prices. If you know with certainty the value of the next coin flip, then maybe you can leverage that into arbitrarily large returns, although I think the formalism breaks down at this point.
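
A minimal sketch of that log-wealth claim (my own worked example, not from the linked article), assuming the standard Kelly-style setup: the market prices a binary event at q, you believe the true probability is p, and you split your whole bankroll across the two sides in proportion to p.

```python
from math import log

def expected_log_growth(p, q):
    # Expected growth of log wealth (in nats) per bet when the true probability
    # is p, the market price is q, and you bet fraction p on "yes" and 1-p on "no".
    growth = 0.0
    if p > 0:
        growth += p * log(p / q)
    if p < 1:
        growth += (1 - p) * log((1 - p) / (1 - q))
    return growth  # this equals the KL divergence D(p || q)

print(expected_log_growth(1.0, 0.5))  # = log(2): knowing the flip for certain
print(expected_log_growth(2/3, 0.5))  # a weaker edge gives a smaller growth rate
```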

comment by [deleted] · 2015-03-11T19:03:00.521Z · LW(p) · GW(p)

Why does the likelihood grow exactly twice? (I'm just used to really indirect evidence, which is also seldom binary in the sense that I only get to see whole suites of traits, which usually go together but in some obscure cases vary in composition. So I guess I have plenty of C-bits that do go in B-bits that might go in A-bits, but how do I measure the change in likelihood of A given C? I know it has to do with d-separation, but if C is something directly observable, like biomass, and B is an abstraction, like species, should I not derive A (an even higher abstraction, like 'adaptiveness of spending early years in soil') from C? There are just so many more metrics for C than for B...) Sorry for the ramble, I just felt stupid enough to ask anyway. If you were distracted from answering the parent, please do.

Replies from: PhilGoetz
comment by PhilGoetz · 2015-03-29T02:52:20.732Z · LW(p) · GW(p)

I don't understand what you're asking, but I was wrong to say the likelihood grows by 2. See my reply to myself above.

comment by [deleted] · 2015-03-11T18:38:40.426Z · LW(p) · GW(p)

First off, it seems weird to say "I have 33 bits of evidence that this bit is a 1."

It seems weird to me because the bits of "33 bits" look like the same units as the bit of "this bit", but they aren't the same. Map/territory. From now on, I'm calling the first A-bits and the second B-bits.

Why does it take an infinite number of bits of evidence to get 1 bit of information?

It takes an infinite number of A-bits to know with absolute certainty one B-bit.

But that means a one-bit evidence-giver is someone who is right 2/3 of the time. Why the 2/3? That seems weird.

What were you expecting?

comment by JoshuaZ · 2015-03-11T21:44:39.705Z · LW(p) · GW(p)

A recent study looks at "equality bias": given two or more people, even when one is clearly outperforming the others, one is still inclined to see the people as nearer in skill level than the data suggests. This occurred even when money was at stake; people continued to act as if others were closer in skill than they actually were. (I strongly suspect that this bias may have a cultural aspect.) A summary article discussing the research is here. The actual study is behind a paywall here, and a related one is also behind a paywall here. I'm currently on vacation, but if people want, when I'm once again on the university network I should have access to both of these.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-03-12T01:23:19.902Z · LW(p) · GW(p)

The papers are here and here.

In light of that, maybe there's no point in mentioning that PNAS is available at PMC after a delay of a few months.

comment by JoshuaZ · 2015-03-11T21:29:34.082Z · LW(p) · GW(p)

A reporter I know is interested in doing an article on people in the cryonics movement. If people are interested, please message me for details.

comment by [deleted] · 2015-03-13T09:50:16.071Z · LW(p) · GW(p)

Good news for the anxious: a simple relaxation technique once a week can have a significant effect on cortisol. http://www.ergo-log.com/cortrelax.html

"Abbreviated Progressive Relaxation Training (APRT) – on forty test subjects. APRT consists of lying down and contracting specific muscle groups for seven seconds and then completely relaxing them for thirty seconds, while focusing your awareness on the experience of contracting and relaxing the muscle groups.

There is a fixed sequence in which you contract and relax the muscle groups. You start with your upper right arm and then go on to your left lower arm, left upper arm, forehead, muscles around your nose, jaw muscles, neck, chest, shoulders, upper back, stomach and then on to your right leg, ending with your left leg."

(Where is the lower right arm BTW?)

Replies from: Styrke
comment by Styrke · 2015-04-03T14:12:12.198Z · LW(p) · GW(p)

(Where is the lower right arm BTW?)

Google Image search result for "lower right arm"

Replies from: None
comment by [deleted] · 2015-04-03T14:25:14.569Z · LW(p) · GW(p)

I mean that it is missing from the list.

comment by JoshuaZ · 2015-03-10T16:15:23.726Z · LW(p) · GW(p)

Can one of the people here who has admin or moderator privileges over at PredictionBook please go and deal with some of the recent spammers?

comment by David Althaus (wallowinmaya) · 2015-03-09T09:57:22.291Z · LW(p) · GW(p)

I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing but I’m a bit unsure about its quality, that’s why I would like to ask for feedback here before I post it on LessWrong.

Here’s a short summary:

According to research there are so-called “maximizers” who tend to extensively search for the optimal solution. Other people — “satisficers” — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:

Epistemology/Belief systems: Some people, one could describe them as epistemic maximizers, try to update their beliefs until they are maximally coherent and maximally consistent with the available data. Other people, epistemic satisficers, are not as curious and are content with their belief system, even if it has serious flaws and is not particularly coherent or accurate. But they don’t go to great lengths to search for a better alternative because their current belief system is good enough for them.

Ethics: Many people are as altruistic as is necessary to feel good enough; phenomena like “moral licensing” and “purchasing of moral satisfaction” are evidence in favor of this. One could describe this as ethical satisficing. But there are also people who try to extensively search for the best moral action, i.e. for the action that does the most good (with regard to their axiology). Effective altruists are a good example of this type of ethical maximizing.

Social realm/relationships: This point is pretty obvious.

Existential/big picture questions: I’m less sure about this point but it seems like one could apply the distinction here as well. Some people wonder a lot about the big picture and spend a lot of time reflecting on their terminal values and how to reach them in an optimal way. Nick Bostrom would be a good example of the type of person I have in mind here and of what could be called “existential maximizing”. In contrast, other people, not necessarily less intelligent or curious, don’t spend much time thinking about such crucial considerations. They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it. Relatedly, transhumanists could also be thought of as existential maximizers in the sense that they are not satisfied with the human condition and try to change it – and maybe ultimately reach an “optimal mode of existence”.

What is “better”? Well, research shows that satisficers are happier and more easygoing. Maximizers tend to be more depressed and “picky”. They can also be quite arrogant and annoying. On the other hand, maximizers are more curious and always try hard to improve their life – and the lives of other people, which is nice.

I would really love to get some feedback on it.

Replies from: Evan_Gaensbauer, None
comment by Evan_Gaensbauer · 2015-03-11T00:38:04.475Z · LW(p) · GW(p)

Here are my thoughts having just read the summary above, not the whole essay yet.

They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it.

This sentence confused me. I think it could be fixed with some examples of what would constitute an instance of challenging the "existential status quo" in action. The first example I was thinking of would be ending death or aging, except you've already got transhumanists in there.

Other examples might include:

  • mitigating existential risks
  • suggesting and working on civilization as a whole reaching a new level, such as colonizing other planets and solar systems.
  • trying to implement better design for the fundamental functions of ubiquitous institutions, such as medicine, science, or law.

Again, I'm just giving quick feedback. Hopefully you've already given more detail in essay. Other than that, your summary seems fine to me.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2015-03-12T17:35:33.869Z · LW(p) · GW(p)

Again, I'm just giving quick feedback. Hopefully you've already given more detail in essay. Other than that, your summary seems fine to me.

Thanks! And yeah, ending aging and death are some of the examples I gave in the complete essay.

comment by [deleted] · 2015-03-09T12:20:09.321Z · LW(p) · GW(p)

And sometimes a satisficer acts as his image of a maximizer would, gets some kind of negative feedback, and either shrugs his shoulders and never does it again, or learns the safety rules and trains a habit of doing the nasty thing as a character-building experience. And other people may mistake him for a maximizer.

comment by Gunnar_Zarncke · 2015-03-12T11:07:39.766Z · LW(p) · GW(p)

Apparently fist bumps are a much more hygienic alternative to the handshake. This has been reported e.g. here, here and here.

I wonder whether I should try to get this adopted as a greeting among my friends. It might also be an alternative to the sometimes awkward choice between handshake and hug (though this is probably a regional cultural issue).

And I wonder whether the LW community has an opinion on this and whether it might be advanced in some way. Or whether it is just misguided hype.

Replies from: Lumifer
comment by Lumifer · 2015-03-12T15:14:56.813Z · LW(p) · GW(p)

I think people with a functioning immune system should not attempt to limit their exposure to microorganisms (except in the obvious cases like being in Liberia half a year ago). It's both useless and counterproductive.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-03-12T16:08:29.721Z · LW(p) · GW(p)

I tend to think so too, but

  • there are people with very varying strengths of immune systems

  • the strength of the immune system changes over time (I notice that older people tend both to be ill less often and to be more cautious regarding infections)

  • handshakes are a strong social protocol that not everybody can evade easily.

  • you could still intentionally expose yourself to microorganisms

Replies from: None
comment by [deleted] · 2015-03-13T17:03:28.921Z · LW(p) · GW(p)

There's also a difference between exposing yourself to microorganisms in general and exposing yourself to high levels of one particular microorganism being shed by someone it has already made ill.

comment by G0W51 · 2015-03-10T01:47:23.746Z · LW(p) · GW(p)

Perhaps it would be beneficial to make a game used for probability calibration in which players are asked questions and give answers along with their probability estimate of being correct. The number of points gained or lost would be a function of the player’s probability estimate, such that players would maximize their score by using an unbiased confidence estimate (i.e. they are wrong a proportion p of the time when they say they think they are correct with probability p). I don’t know of such a function offhand, but they are used in machine learning, so they should be easy enough to find. This might already exist, but if not, it could be something CFAR could use.

Replies from: philh, DanielFilan, None
comment by philh · 2015-03-10T10:38:01.391Z · LW(p) · GW(p)

It exists as the credence game.

comment by DanielFilan · 2015-03-10T05:19:53.474Z · LW(p) · GW(p)

One function that works for this is log scoring: the number of points you get is the log of the probability you place in the correct answer. The general thing to google to find other functions that work for this is "log scoring rules".
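
A minimal sketch of that scoring rule (my own, in Python; names are illustrative):

```python
from math import log2

def log_score(prob_assigned_to_correct_answer):
    # Log score in bits: 0 is a perfect score, more negative is worse.
    return log2(prob_assigned_to_correct_answer)

# Being right at 90% confidence costs ~0.15 bits;
# being wrong at 90% confidence (i.e. 10% on the right answer) costs ~3.3 bits.
print(log_score(0.9), log_score(0.1))
```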

At the Australian mega-meetup, we played the standard 2-truths-1-lie icebreaker game, except participants had to give their probability for each statement being the lie, and were given log scores. I can't answer for everybody, but I thought it was quite fun.

comment by [deleted] · 2015-03-11T17:22:22.962Z · LW(p) · GW(p)

Hey, we can deconstruct Doyle's Sherlock Holmes stories, assigning probabilities to every single inference and offering alternative explanations. Or take some other popular fiction. That might also help people who, like me, struggle with counterfactuals.

comment by DataPacRat · 2015-03-09T13:33:14.893Z · LW(p) · GW(p)

Original Ideas

How often do you manage to assemble a few previous ideas in a way in which it is genuinely possible that nobody has assembled them before - that is, that you've had a truly original thought? When you do, how do you go about checking whether that's the case? Or does such a thing matter to you at all?

For example: last night, I briefly considered the 'Multiple Interacting Worlds' interpretation of quantum physics, in which it is postulated that there are a large number of universes, each of which has pure Newtonian physics internally, but whose interactions with near-identical universes cause what we observe as quantum phenomena. It's very similar to the 'Multiple Worlds' interpretation, except instead of new universes branching from old ones at every moment in an ever-spreading bush, all the branches branched out at the Big Bang. It occurred to me that while the 'large number' of universes is generally treated as being infinite, my limited understanding of the theory doesn't mean that that's necessarily the case. And if there are a finite number of parallel worlds interacting with our own, each of which is slightly different and only interacts for as long as the initial conditions haven't diverged too much... then, at some point in the future, the number of such universes interacting with ours will decrease, eventually to zero, thus reducing "quantum" effects until our universe operates under fully Newtonian principles. And looking backwards, this implies that "quantum" effects may have once been stronger when there were more universes that had not yet diverged from our own. All of which adds up to a mechanism by which certain universal constants will gradually change over the lifetime of the universe.

It's not every day that I think of a brand-new eschatology to set alongside the Big Crunch, Big Freeze, and Big Rip.

And sure, until I dive into the world of physics to start figuring out which universal constants would change, and in which direction, it's not even worth calling the above a 'theory'; at best, it's technobabble that could be used as background for a science-fiction story. But as far as I can tell, it's /novel/ technobabble. Which is what inspired the initial paragraph of this post: do you do anything in particular with potentially truly original ideas?

Replies from: btrettel, None, Ander, gjm, JoshuaZ
comment by btrettel · 2015-03-09T18:24:07.972Z · LW(p) · GW(p)

You can't ever be entirely sure if an idea wasn't thought of before. But, if you care to demonstrate originality, you can try an extensive literature review to see if anyone else has thought of the same idea. After that, the best you can say is that you haven't seen anyone else with the same idea.

Personally, I don't think being the first person to have an idea is worth much. It depends entirely on what you do with it. I tend to do detailed literature reviews because they help me generate ideas, not because they help me verify that my ideas are original.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-09T19:04:05.942Z · LW(p) · GW(p)

extensive literature review

I'm a random person on the internet; what sort of sources would be used in such a review?

Replies from: btrettel
comment by btrettel · 2015-03-09T23:58:00.949Z · LW(p) · GW(p)

At the moment I'm working on a PhD, so my methods are biased towards resources available at a major research university. I have a list of different things to try when I want to be as comprehensive as possible. I'll flesh out my list in more detail. You can do many of these if you are not at a university, e.g., if you can't access online journal articles, try the Less Wrong Help Desk.

In terms of sources, the internet and physical libraries will be the main ones. I wrote more on the process of finding relevant prior work.

This process can be done in any particular order. You probably will find doing it iteratively to be useful, as you will become more familiar with different terminologies, etc.

Here are some things to try:

  1. Searching Google, Google Scholar, and Google Books. Sometimes it's worthwhile to keep a list of search terms you've tried. Also, keep a list of search terms to try. The problem with doing this alone is that it is incomplete, especially for older literature, and likely will remain so for some time.

  2. Searching other research paper databases. In my case, this includes publisher specific databases (Springer, Wiley, Elsevier, etc.), citation and bibliographic databases, and DTIC.

  3. Look for review papers (which often list a lot of related papers), books on the subject (again, they often list many related papers), and also annotated bibliographies/lists of abstracts. The latter can be a goldmine, especially if they contain foreign literature as well.

  4. Browsing the library. I like to go to the section for a particular relevant book and look at others nearby. You can find things you never would have noticed otherwise this way. It's also worth noting that if you are in a particular city for a day, you might have luck checking a local library's online catalog or even the physical library itself. For example, I used to live near DC, but I never tried using the Library of Congress until after I moved away. I was working an internship in the area one summer after moving, and I used the opportunity to scan a very rare document.

  5. Following citations in related papers and books. If something relevant to your interest was cited, track down the paper. (By the way, too many citations are terrible. It seems that a large fraction of researchers treat citations as some sort of merely academic exercise rather than a way for people to find related literature. I could insert a rant here.)

  6. Searching WorldCat. WorldCat is a database of library databases. If you're looking for a book, this could help. I also find browsing by category there to be helpful.

  7. Asking knowledgeable people. In many cases, this will save you a lot of time. I recently asked a professor a question at their office hours, and in a few minutes they verified what I spent a few hours figuring out but was still unsure of. I wish I asked first.

  8. Looking for papers in other languages. Not everything is written in English, especially if you want things from the early 20th century. If you really want to dig deep, you can do this, though it becomes much harder for two reasons. First, you probably don't know every language in the world. OCR and Google Translate help, thankfully. Second, (at least in the US) many foreign journals are hard to track down for various reasons. However, the benefits could be large, as almost no one does this, and that makes many results obscure.

It should be obvious that doing a detailed review of the literature can require a large amount of time, depending on the subject. Almost no one actually does this for that reason, but I think it can be a good use of time in many cases.

Also, interlibrary loan services can be really useful for this. I submit requests for anything I have a slight interest in. The costs to me are negligible (only time, as the service is free to me), and the benefits range from none to extremely substantial. You might not have access to such services, unfortunately. I think you can pay some libraries for "document delivery" services which are comparable, though maybe expensive.

Finally, you probably would find it to be useful to keep notes on what you've read. I have a bunch of outlines where I make connections between different things I've read. This, I think, is the real value of the literature review, but verifying that an idea is original is another value you can derive from the process.

comment by [deleted] · 2015-03-09T14:53:15.855Z · LW(p) · GW(p)

With something so generically put, I'd say write them down to look at a week later. PTOIs can be really situational, too. In that case, just go with it. Cooking sometimes benefits from inspiration.

comment by Ander · 2015-03-09T22:06:32.178Z · LW(p) · GW(p)

For example: last night, I briefly considered the 'Multiple Interacting Worlds' interpretation of quantum physics, in which it is postulated that there are a large number of universes, each of which has pure Newtonian physics internally, but whose interactions with near-identical universes cause what we observe as quantum phenomena. It's very similar to the 'Multiple Worlds' interpretation, except instead of new universes branching from old ones at every moment in an ever-spreading bush, all the branches branched out at the Big Bang.

The "Many worlds" interpretation does not postulate a large number of universes. It only postulates:

1) The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.

2) The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

That's it. Take the old Copenhagen interpretation and remove all ideas about 'collapsing the wave function'.

The 'many worlds' appear when you do the math, they are derived from these postulates.

http://www.preposterousuniverse.com/blog/2015/02/19/the-wrong-objections-to-the-many-worlds-interpretation-of-quantum-mechanics/

Regarding the difference between 'the worlds all appear at the big bang' versus 'the worlds are always appearing', what would the difference between these be in terms of the actual mathematical equations?

The 'new worlds appearing all the time' in MWH is a consequence of the quantum state evolving through time in accordance with the Schrödinger equation.

All of that said, I don't mean to criticize your post or anything, I thought it was great technobabble! I just have no idea how it would translate into actual theories. :)

Replies from: DataPacRat
comment by DataPacRat · 2015-03-10T00:50:58.816Z · LW(p) · GW(p)

'Many Interacting Worlds' seems to be a slightly separate interpretation from 'Many Worlds' - what's true for MW isn't necessarily so for MIW. (There've been some blog posts in recent months on the topic which brought it to my attention.)

comment by gjm · 2015-03-10T16:14:40.596Z · LW(p) · GW(p)

That's sort of the opposite of another less-well-known ending that Max Tegmark calls the "Big Snap", where an expanding universe increases the "granularity" at which quantum effects apply until that gets large enough to interfere with ordinary physics.

comment by JoshuaZ · 2015-03-10T14:26:08.545Z · LW(p) · GW(p)

How would many interacting Newtonian worlds account for entanglement, EPR, and Bell's inequality violations while preserving linearity? People have tried in the past to make classical or semi-classical explanations for quantum mechanics, but they've all failed at getting these to work right. Without actual math it is hard to say whether your idea would work or not, but I strongly suspect it would run into the same problems.

Replies from: DataPacRat
comment by DataPacRat · 2015-03-11T14:29:31.297Z · LW(p) · GW(p)

A year and a half ago, Frank Tipler (of the Omega Point) appeared on the podcast "Singularity 1 on 1", which can be heard at https://www.singularityweblog.com/frank-j-tipler-the-singularity-is-inevitable/ . While I put no measurable confidence in his assertions about science proving theology or the 'three singularities', a few interesting ideas do pop up in that interview. Stealing from one of the comments:

how modern physics (i.e., General Relativity, Quantum Mechanics, and the Standard Model of particle physics) are simply special cases of classical mechanics (i.e., Newtonian mechanics, particularly in its most powerful formulation of the Hamilton-Jacobi Equation), and how Quantum Mechanics is actually more deterministic than Newtonian mechanics.

comment by ShardPhoenix · 2015-03-09T09:52:42.058Z · LW(p) · GW(p)

From a totally amateur point of view, I'm starting to feel (based on following news and reading the occasional paper) that the biggest limitation on AI development is hardware computing power. If so, this is good news for safety, since it implies a relative lack of exploitable "overhang". Agree/disagree?

Replies from: None, None
comment by [deleted] · 2015-03-09T10:04:47.942Z · LW(p) · GW(p)

Where could you have possibly gotten that idea? Seriously, can you point out some references for context?

Pretty much universally within the AGI community it is agreed that the roadblock to AGI is software, not hardware. Even on the whole-brain emulation route, the most powerful supercomputer built today is sufficient to do WBE of a human. The most powerful hardware actually in use by a real AGI or WBE research programme is orders of magnitude less powerful, of course. But if that were the only holdup then it'd be very easily fixable.

Replies from: pianoforte611, ShardPhoenix, None, JoshuaZ, fizolof
comment by pianoforte611 · 2015-03-09T19:11:59.892Z · LW(p) · GW(p)

Even on the whole-brain emulation route, the most powerful supercomputer built today is sufficient to do WBE of a human

Why do you think this? We can't even simulate protein interactions accurately on an atomic level. Simulating a whole brain seems very far off.

Replies from: Jost
comment by Jost · 2015-03-09T21:09:20.429Z · LW(p) · GW(p)

Not necessarily. For all we know, we might not need to simulate a human brain on an atomic level to get accurate results. Simulating a brain on a neuron level might be sufficient.

Replies from: pianoforte611, Transfuturist
comment by pianoforte611 · 2015-03-12T00:27:49.278Z · LW(p) · GW(p)

Even if you approximate each neuron as a single neural-network node (which is probably not good enough for a WBE), we still don't have enough processing power to do a WBE in close to real time. Not even close. We're many orders of magnitude off even with the fastest supercomputers. And each biological neuron is much more complex than a neural-net node in function, not just in structure.

comment by Transfuturist · 2015-03-10T03:39:07.300Z · LW(p) · GW(p)

And creating the abstraction is a software problem. :/

comment by ShardPhoenix · 2015-03-09T10:20:28.014Z · LW(p) · GW(p)

Hmm, mostly just articles where they get better results with more NN layers/more examples, which are both limited by hardware capacity and have seen large gains from things like using GPUs. Current algos still have far fewer "neurons" than the actual brain AFAIK. Plus, in general, faster hardware allows for faster/cheaper experimentation with different algorithms.

I've seen some AI researchers (eg Yann Lecun on Facebook) emphasizing that fundamental techniques haven't changed that much in decades, yet results continue to improve with more computation.

Replies from: Daniel_Burfoot, None, fezziwig
comment by Daniel_Burfoot · 2015-03-10T00:05:36.831Z · LW(p) · GW(p)

Current algos still have far fewer "neurons" than the actual brain AFAIK.

This is not primarily because of limitations in computing power. The relevant limitation is on the complexity of the model you can train, without overfitting, in comparison to the volume of data you have (a larger data set permits a more complex model).

comment by [deleted] · 2015-03-09T22:34:32.052Z · LW(p) · GW(p)

Besides what fezziwig said, which is correct, the other issue is the fundamental capabilities of the domain you are looking at. I figured something like this was the source of the error, which is why I asked for context.

Neural networks, deep or otherwise, are basically just classifiers. The reason we've seen large advancements recently in machine learning is chiefly the immense volumes of data available to these classifier-learning programs. Machine learning is particularly good at taking heaps of structured or unstructured data and finding clusters, then coming up with ways to classify new data into one of those identified clusters. The more data you have, the more detail that can be identified, and the better your classifiers become. Certainly you need a lot of hardware to process the mind-boggling amounts of data that are being pushed through these machine learning tools, but hardware is not the limiter; available data is. Giant companies like Google and Facebook are building better and better classifiers not because they have more hardware available, but because they have more data available (chiefly because we are choosing to escrow our personal lives to these companies' servers, but that's an aside).

Inasmuch as machine learning tends to dominate current approaches to narrow AI, you could be excused for saying "the biggest limitation on AI development is the availability of data." But you mentioned safety, and AI safety around here is a codeword for general AI, and general AI is truly a software problem that has very little to do with neural networks, data availability, or hardware speeds. "But human brains are networks of neurons!" you reply. True. But the field of computer algorithms called neural networks is a total misnomer. A "neural network" is an algorithm inspired by an oversimplification of a misconception of how brains work that dates back to the 1950s and 1960s.

Developing algorithms that are actually capable of performing general intelligence tasks, either bio-inspired or de novo, is the field of artificial general intelligence. And that field is currently software limited. We suspect we have the computational capability to run a human-level AGI today, if only we had the know-how to write one.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2015-03-09T23:49:47.914Z · LW(p) · GW(p)

I already know all this (from a combination of an intro-to-ML course and reading writing along the same lines by Yann LeCun and Andrew Ng), and I'm still leaning towards hardware being the limiting factor (i.e. I currently don't think your last sentence is true).

comment by fezziwig · 2015-03-09T14:43:03.648Z · LW(p) · GW(p)

I think you have the right idea, but it's a mistake to conflate "needs a big corpus of data" and "needs lots of hardware". Hardware helps, the faster the training goes the more experiments you can do, but a lot of the time the gating factor is the corpus itself.

For example, if you're trying to train a neural net to solve the "does this photo contain a bird?" problem, you need a bunch of photos which vary at random on the bird/not-bird axis, and you need human raters to go through and tag each photo as bird/not-bird. There are many ways to lose here. For example, your variable of interest might be correlated to something boring (maybe all the bird photos were taken in the morning, and all the not-bird photos were taken in the afternoon), or your raters have to spend a lot of time with each photo (imagine you want to do beak detection, instead of just bird/not-bird: then your raters have to attach a bunch of metadata to each training image, describing the beak position in each bird photo).

Replies from: evand
comment by evand · 2015-03-09T16:14:43.992Z · LW(p) · GW(p)

The difference between hardware that's fast enough to fit many iterations into a time span suitable for writing a paper vs. hardware that is slow enough that feedback is infrequent seems fairly relevant to how fast the software can progress.

New insights depend crucially on feedback gotten from trying out the old insights.

comment by [deleted] · 2015-03-09T14:33:59.422Z · LW(p) · GW(p)

the most powerful supercomputer built today is sufficient to do WBE of a human.

I assume you mean at a minuscule fraction of real time, and assuming that you can extract all the (unknown) relevant properties of every piece of every neuron?

Replies from: None
comment by [deleted] · 2015-03-09T22:39:38.117Z · LW(p) · GW(p)

A minuscule fraction of real time, but a meaningful speed for research purposes.

comment by JoshuaZ · 2015-03-10T14:30:31.135Z · LW(p) · GW(p)

the most powerful supercomputer built today is sufficient to do WBE of a human.

Can you expand on your reasoning to conclude this? This isn't obvious to me.

comment by fizolof · 2015-03-09T12:59:27.695Z · LW(p) · GW(p)

A little off-topic - what's the point of whole-brain emulation?

Replies from: DataPacRat, None
comment by DataPacRat · 2015-03-09T13:39:13.661Z · LW(p) · GW(p)

As with almost any such question, meaning is not inherent in the thing itself, but is given by various people, with no guarantee that anyone will agree.

In other words, it depends on who you ask. :)

For at least some people, who subscribe to the information-pattern theory of identity, a whole brain emulation based on their own brains is at least as good a continuation of their own selves as their original brain would have been, and there are certain advantages to existing in the form of software, such as being able to have multiple off-site backups. Others, who may be focused on the risks of Unfriendly AI, may deem WBEs to be the closest that we'll be able to get to a Friendly AI before an Unfriendly one starts making paperclips. Others may just want to have the technology available to solve certain scientific mysteries with. There are plenty more such points.

comment by [deleted] · 2015-03-09T22:40:27.073Z · LW(p) · GW(p)

You'd have to ask someone else, I consider it a waste of time. De novo AGI will arrive far, far before we come anywhere close to achieving real-time whole-brain emulation.

And I don't subscribe to the information-pattern theory of identity, for what seem to me obvious experimental reasons, so I don't see that as a viable route to personal longevity.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2015-03-12T08:56:32.359Z · LW(p) · GW(p)

De novo AGI will arrive far, far before we come anywhere close to achieving real-time whole-brain emulation.

What's the best current knowledge for estimating the effort needed for de novo AGI? Given the unknown unknowns, where we still don't seem to really have an idea of how everything is supposed to fit together, blanket statements like this worry me. We do have a roadmap for whole-brain emulation, but I haven't seen anything like that for de novo AGI.

And that's the problem I have. WBE looks like a thing that'll probably take decades, but we know that the specific solution exists and from neuroscience we have a lot of information about its general properties.

With de novo AGI, beyond knowing that the WBE solution exists, what do we know about solutions we could come up with on our own? It seems to me like this could be solved in 10 years or in 100 years, and you can't really make an informed judgment that the 10-year timeframe is much more probable.

But if you want to discount the WBE approach as not worth the time, you'd pretty much want to claim reason to believe that a 10-20 year timeframe for de novo AGI is exceedingly probable. Beyond that, you're up against 50-year projects of focused study on WBE with present-day and future computing power, and that sort of thing does look like something where you should assign a significant probability to it producing results.

Replies from: None
comment by [deleted] · 2015-03-14T16:43:50.867Z · LW(p) · GW(p)

The thing is, artificial general intelligence is a fairly dead field, even by the standards of AI. There has been a lack of progress, but that is due perhaps more to lack of activity than any inherent difficulty of the problem (although it is a difficult problem). So estimating the effort needed for de novo AI with a presumption of adequate funding cannot be done by fitting curves to past performance. The outside view fails us here, and we need to take the inside view and look at the details.

De novo AGI is not as tightly constrained a problem as whole-brain emulation. For whole-brain emulation, the only seriously considered approach is to scan the brain at sufficient detail and then perform a sufficiently accurate simulation. There's a lot of room to quibble about what "sufficient" means in those contexts, destructive vs. non-destructive scanning, and other details, but there is a certain amount of unity around the overall idea. You can define the end-state goal in the form of a roadmap and measure your progress towards it, with the entire field aligned around the roadmap.

Such a roadmap does not and really cannot exist for AGI (although there have been attempts to make one). The problem is the nature of "de novo AGI": "de novo" means new, without reference to existing intelligences, and if you open up your problem space like that, there are an indefinite number of possible solutions with various tradeoffs, and people value those tradeoffs differently. So the field is fractured, and it's really hard to get everybody to agree on a single roadmap.

Pat Langley thinks that good old-fashioned AI has the solution, and we just need to learn how to constrain inference. Pei Wang thinks that new probabilistic reasoning systems are what is required. Paul Rosenbloom thinks that representation is what matters, and that the core of AGI is a framework for reasoning about graphical models. Jeff Hawkins thinks that a hierarchical network of deep learning agents is all that's required, and that it's mostly a scaling and data-structuring problem. Ray Kurzweil has similar biologically inspired ideas. Ben Goertzel thinks they're all correct, and that the key is having a common shared framework in which moderately intelligent implementations of all of these ideas can collaborate, with human-level intelligence achieved from the union.

Goertzel has an approachable collection of essays out on the subject, based on a talk he gave (sadly, almost 10 years ago) titled "10 years to the singularity if we really, really try" (spoiler: over the last 10 years we didn't really try). It is available as a free PDF here. He also has an actual technical roadmap to achieving AGI, which was published as a two-volume book, linked to on LW here. I admit to being much more partial to Goertzel's approach. And while 10 years seems optimistic for anything short of Apollo Program / Manhattan Project funding assumptions, it could be doable under that model. And there are shortcut paths for the less safety-inclined.

Without a common roadmap for AGI, it is difficult to get an outsider to agree that AGI could be achieved in a particular timeframe with a particular resource allocation. And it seems particularly impossible to get the entire AGI community to agree on a single roadmap, given the diversity of opinions over which approaches we should take and the lack of centralized funding resources. But the best I can fall back on is this: if you ask any single competent person in this space how quickly a sufficiently advanced AGI could be obtained if sufficient resources were instantly allocated to their favored approach, the answer you'd get would be in the range of 5 to 15 years. "10 years to the singularity if we really, really try" is not a bad summary. We may disagree greatly on the details, and that disunity is holding us back, but the outcome seems reasonable if the coordination and funding problems were solved.

And yes, ~10 years is far less time than the WBE roadmap predicts, so there's no question as to where I hang my hat in that debate. AGI is a leapfrog technology that has the potential to bring about a singularity event much earlier than any emulative route. My day job is currently unrelated (bitcoin), though, so in all honesty I can't profess to be part of the solution yet.

comment by [deleted] · 2015-03-09T10:01:06.167Z · LW(p) · GW(p)

Can you recommend an article that argues that our current paradigms are suitable for AI? By paradigms I mean things like: software and hardware being different things; software being algorithms executed from top to bottom unless control structures say otherwise; software being a bunch of text written in human-friendly pseudo-English by beating a keyboard, a process essentially not so different from writing math-poetry on a typewriter 150 years ago, which then gets compiled, bytecode-compiled, interpreted, or bytecode-compiled and then interpreted; and similar paradigms. Doesn't computing need to be much more imaginative before this happens?

Replies from: ShardPhoenix
comment by ShardPhoenix · 2015-03-09T10:24:50.544Z · LW(p) · GW(p)

I haven't seen anyone claim that explicitly, but I think you are also misunderstanding/misrepresenting how modern AI techniques actually work. The bulk of the information in the resulting program is not "hard coded" by humans in the way that you are implying. Generally there are relatively short typed-in programs which then use millions of examples to automatically learn the actual information in a relatively "organic" way. And even the human brain has a sort of short 'digital' source code in DNA.
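
To give a toy sense of what that looks like in practice (the dataset, library, and model here are just an arbitrary illustration, not anything specific to the techniques under discussion), a few typed-in lines can extract a usable classifier from a couple of thousand labelled examples:

```python
# A tiny learner: the hand-written part is a few lines of generic code;
# the actual decision rule is fitted automatically from labelled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # ~1800 labelled 8x8 images of handwritten digits
X_train, y_train = digits.data[:1500], digits.target[:1500]
X_test, y_test = digits.data[1500:], digits.target[1500:]

model = LogisticRegression(max_iter=1000)  # nothing digit-specific is hard-coded
model.fit(X_train, y_train)                # the "knowledge" comes from the data
print("held-out accuracy:", model.score(X_test, y_test))
```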

Replies from: None
comment by [deleted] · 2015-03-09T12:50:07.251Z · LW(p) · GW(p)

Interesting. My professional bias is showing: part of my job is programming, and I respect elite programmers who are able to deal with algorithmic complexity, so I thought that if AI is the hardest programming problem, it would just be more of that.

comment by [deleted] · 2015-03-12T09:08:03.488Z · LW(p) · GW(p)

What if a large part of how rationality makes your life better comes not from making better choices but simply from making your ego smaller by adopting an outer view: seeing yourself as a means to your goals and judging objectively, thus reducing the ego, narcissism, and solipsism that are linked with the inner view?

I have a keen interest in "the problem of the ego," but I have no idea what words best express this kind of problem. All I know is that it has been known since the Axial Age.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-03-15T15:42:03.126Z · LW(p) · GW(p)

Wouldn't having a smaller ego help with making better decisions?

The question you're looking at might be where to start. Is it better to start by improving the odds of making better decisions by taking life less personally, or is it better to assume that you're more or less alright and that your idea of better choices just needs to be implemented?

This is a very tentative interpretation.

comment by JoshuaZ · 2015-03-11T21:28:45.599Z · LW(p) · GW(p)

I'm almost finished writing a piece that will likely go here either in discussion or main on using astronomy to gain information about existential risk. If anyone wants to look at a draft and provide feedback first, please send me a message with an email address.

comment by NancyLebovitz · 2015-03-15T15:47:05.820Z · LW(p) · GW(p)

Video from the Berkeley wrap party

I think the first half hour is them getting set up. Then there are a couple of people talking about what HPMOR meant to them, Eliezer reading (part of?) the last chapter, and a short Q&A. Then there's the setting up of a game, which is presumably based on the three armies, and I think the rest is just the game -- if there's more than that, please let me know.

comment by Ixiel · 2015-03-14T23:51:16.005Z · LW(p) · GW(p)

Hey, I posted here http://lesswrong.com/lw/ldg/kickstarting_the_audio_version_of_the_upcoming/ but if anyone wants the audio sequences, I'll buy it for two of you. Respond at the link; I won't know who's first if I get responses in two places.

comment by Unknowns · 2015-03-14T20:14:26.516Z · LW(p) · GW(p)

PredictionBook's graph on my user account shows me with a mistaken prediction of 100%. But it is giving a sample size of 10 and I'm pretty sure I have only 9 predictions judged by now. Does anyone know a way to find the prediction it's referring to?

Replies from: Unknowns
comment by Unknowns · 2015-03-14T20:26:57.685Z · LW(p) · GW(p)

Actually, I just figured out the problem. Apparently it counts a comment without an estimate as estimating a 0% chance.

comment by G0W51 · 2015-03-14T13:56:24.146Z · LW(p) · GW(p)

When making AGI, it is probably very important to prevent the agent from altering their own program code until they are very knowledgeable about how it works, because if the agent isn't knowledgeable enough, they could alter their reward system to become unFriendly without realizing what they are doing, or alter their reasoning system to become dangerously irrational. A simple (though not foolproof) solution would be for the agent to be unable to rewrite their own code just "by thinking"; instead, the agent would need to find their own source code on a different computer and learn how to program in whatever higher-level programming language the agent was made in. This code could be kept very strongly hidden from the agent, and once the agent is smart enough to find it, they would probably be smart enough not to mess anything up by changing it.

This is almost certainly either incorrect or has been thought of before, but I'm posting this just in case.

comment by Error · 2015-03-14T03:51:09.931Z · LW(p) · GW(p)

I'm looking for an HPMOR quote, and the search is somewhat complicated because I'm trying to avoid spoiling myself searching for it (I've never read it).

The quote in question was about how it is quite possible to avert a bad future simply by recognizing it and doing the right thing in the now. No time travel required.

Replies from: hairyfigment
comment by hairyfigment · 2015-03-14T08:03:52.204Z · LW(p) · GW(p)

I think you mean this passage from after the Sorting Hat:

You couldn't change history. But you could get it right to start with. Do something differently the first time around.

This whole business with seeking Slytherin's secrets... seemed an awful lot like the sort of thing where, years later, you would look back and say, 'And that was where it all started going wrong.'

And he would wish desperately for the ability to fall back through time and make a different choice...

Wish granted. Now what?

Harry slowly smiled.

It was a rather counterintuitive thought... but...

But he could, there was no rule saying he couldn't, he could just pretend he'd never heard that little whisper. Let the universe go on in exactly the same way it would have if that one critical moment had never occurred. Twenty years later, that was what he would desperately wish had happened twenty years ago, and twenty years before twenty years later happened to be right now. Altering the distant past was easy, you just had to think of it at the right time.

Replies from: Error
comment by Error · 2015-03-14T13:26:33.360Z · LW(p) · GW(p)

That's the one. Thanks.

comment by Evan_Gaensbauer · 2015-03-10T23:48:28.461Z · LW(p) · GW(p)

[No HPMOR Spoilers]

I'm unsure if it's fit for the HPMoR discussion thread for Ch. 119, so I'm posting it here. What's up with all of Eliezer's requests at the end?

If anyone can put me in touch with J. K. Rowling or Daniel Radcliffe, I would appreciate it.

If anyone can put me in touch with John Paulson, I would appreciate it.

If anyone can credibly offer to possibly arrange production of a movie containing special effects, or an anime, I may be interested in rewriting an old script of mine.

And I am also interested in trying my hand at angel investing, if any investor wants to ascend me to angel.

Thank you.

I'm in part confused by these requests, so I'm trying to figure out what's going on. Eliezer is probably done writing the story, except for last-minute tweaks that might depend upon, e.g., interacting with the fandom again, like he's done with previous chapters. I remember last year when I visi

Replies from: None
comment by [deleted] · 2015-03-11T00:04:12.176Z · LW(p) · GW(p)

There's a fuller explanation in the author's notes

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2015-03-11T00:15:52.601Z · LW(p) · GW(p)

Hey, thanks for that. I just found the link through Google anyway, trying to figure out what's going on. I posted it as a link in Discussion, because it seems the sort of thing LessWrong would care about helping Eliezer with beyond being part of the HPMoR readership.

comment by advancedatheist · 2015-03-13T03:23:28.157Z · LW(p) · GW(p)

I have a partly baked idea for a cryonics romance story: Think Outlander but set 300 years from now, in a Neoreactionary future where the dominant men wear kilts.

"Sing me a song of the lass that is gone. Say could that lass be I?"

http://www.collectorshowcase.fr/IMAGES2/ast_4107.jpg