Posts

Changes in AI Safety Funding 2017-02-11T08:36:32.514Z · score: 3 (4 votes)
The true degree of our emotional disconnect 2016-10-31T19:07:42.333Z · score: 4 (5 votes)

Comments

Comment by siiver on Questions about AGI's Importance · 2017-11-01T10:30:43.966Z · score: 0 (0 votes) · LW · GW

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1000 times as smart as a human using their full capacity.
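The arithmetic behind this claim can be sketched in a few lines (the 100000× figure is the comment's own assumed lower bound, not a measured number):

```python
# Hypothetical numbers from the argument above; the capacity ratio is an
# assumed conservative lower bound, not an established fact.
human_capacity = 1.0
ai_capacity = 100_000 * human_capacity  # assumed 100000x larger capacity
ai_utilization = 0.01                   # AI uses only 1% of its capacity

effective_ai = ai_capacity * ai_utilization
ratio = effective_ai / human_capacity
print(ratio)  # 1000.0
```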

AGI's algorithm will be better, because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the sequences is to list dozens of ways that the human brain reliably fails.

Comment by siiver on Questions about AGI's Importance · 2017-11-01T09:32:32.932Z · score: 0 (0 votes) · LW · GW

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10000 times faster (arbitrary conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.

Comment by siiver on Instrumental Rationality Sequence Finished! (w/ caveats) · 2017-09-09T11:44:11.244Z · score: 0 (0 votes) · LW · GW

"Less than a third of students by their own self-appointed worst-case estimate *1."

missing a word here, I think.

Comment by siiver on Inconsistent Beliefs and Charitable Giving · 2017-09-09T10:46:05.481Z · score: 0 (0 votes) · LW · GW

I think your post is spot on.

Comment by siiver on Is life worth living? · 2017-08-31T22:56:35.278Z · score: 1 (1 votes) · LW · GW

re-live. Although I'd rather live the same amount of time from now onward.

Comment by siiver on Sam Harris and Scott Adams debate Trump: a model rationalist disagreement · 2017-07-20T19:41:08.169Z · score: 4 (4 votes) · LW · GW

First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president etc.

Answer: [talks about Trump's persuasion skills]

Yeah, okay.

Comment by siiver on Daniel Dewey on MIRI's Highly Reliable Agent Design Work · 2017-07-09T14:18:59.789Z · score: 0 (0 votes) · LW · GW

This is an exceptionally well reasoned article, I'd say. Particular props to the appropriate amount of uncertainty.

Comment by siiver on Against lone wolf self-improvement · 2017-07-07T20:52:27.008Z · score: 1 (1 votes) · LW · GW

Well, if you put it like that I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should. There's probably a fair number of people who'd benefit from following this article's advice.

Comment by siiver on Against lone wolf self-improvement · 2017-07-07T20:41:09.073Z · score: 0 (0 votes) · LW · GW

I don't quite know how to make this response more sophisticated than "I don't think this is true". It seems to me that whether classes or lone-wolf improvement is better is a pretty complex question and the answer is fairly balanced, though overall I'd give the edge to lone-wolf.

Comment by siiver on We need a better theory of happiness and suffering · 2017-07-04T20:23:10.281Z · score: 1 (1 votes) · LW · GW

I don't know what our terminal goals are (more precisely than "positive emotions"). I think it doesn't matter insofar as the answer to "what should we do" is "work on AI alignment" either way. Modulo that, yeah there are some open questions.

On the thesis of suffering requiring higher order cognition in particular, I have to say that sounds incredibly implausible (for I think fairly obvious reasons involving evolution).

Comment by siiver on Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) · 2017-06-15T19:30:48.093Z · score: 1 (1 votes) · LW · GW

This looks solid.

Can you go into a bit of detail on the level / spectrum of difficulty of the courses you're aiming for, and the background knowledge that'll be expected? I suspect you don't want to discourage people, but realistically speaking, it can hardly be low enough to allow everyone who's interested to participate meaningfully.

Comment by siiver on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-10T15:01:42.295Z · score: 1 (1 votes) · LW · GW

Yeah, you're of course right. In the back of my mind I realized that the point I was making was flawed even as I was writing it. A much weaker version of the same would have been correct: "you should at least question whether your intuition is wrong." In this case it's just very obvious to me that there is nothing to be fixed about utilitarianism.

Anyway, yeah, it wasn't a good reply.

Comment by siiver on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-08T16:01:56.776Z · score: 3 (3 votes) · LW · GW

This is the ultimate example of... there should be a name for this.

You figure out that something is true, like utilitarianism. Then you find a result that seems counter intuitive. Rather than going "huh, I guess my intuition was wrong, interesting" you go "LET ME FIX THAT" and change the system so that it does what you want...

man, if you trust your intuition more than the system, then there is no reason to have a system in the first place. Just do what is intuitive.

The whole point of having a system like utilitarianism is that we can figure out the correct answers in an abstract, general way, but not necessarily for each particular situation. Having a system tells us what is correct in each situation, not vice versa.

The utility monster is nothing to be fixed. It's a natural consequence of doing the right thing, that just happens to make some people uncomfortable. It's hardly the only uncomfortable consequence of utilitarianism, either.

Comment by siiver on Existential risk from AI without an intelligence explosion · 2017-05-26T14:23:04.569Z · score: 0 (0 votes) · LW · GW

This seems like something we should talk about more.

Although, afaik there shouldn't be a decision between motivation selection and capability controlling measures – the former is obviously the more important part, but you can also always "box" the AI in addition (insofar as that's compatible with what you want it to do).

Comment by siiver on AGI and Mainstream Culture · 2017-05-23T16:32:14.806Z · score: 0 (0 votes) · LW · GW

That sounds dangerously like justifying inaction.

Literally speaking, I don't disagree. It's possible that spreading awareness has a net negative outcome. It's just not likely. I don't discourage looking into the question, and if facts start pointing the other way I can be convinced. But while we're still vaguely uncertain, we should act on what seems more likely right now.

Comment by siiver on AGI and Mainstream Culture · 2017-05-22T18:47:38.754Z · score: 0 (0 votes) · LW · GW

I guess it's a legit argument, but it doesn't have the research aspect and it's a sample size of one.

Comment by siiver on AGI and Mainstream Culture · 2017-05-21T18:32:41.620Z · score: 1 (1 votes) · LW · GW

This just seems like an incredibly weak argument to me. A) it seems to me that prior research will be influenced much more than the probability of an arms race, because the former is more directly linked to public perception, B) we're mostly trying to spread awareness of the risk, not the capability, and C) how do we even know that more awareness at the top political levels would lead to a higher probability of an arms race, rather than a higher probability of international cooperation?

I feel like raising awareness has a very clear and fairly safe upside, while the downside is highly uncertain.

Comment by siiver on Reaching out to people with the problems of friendly AI · 2017-05-17T20:50:11.549Z · score: 0 (0 votes) · LW · GW

Pretty sure it is. You have two factors, increasing the awareness of AI risk and of AI specifically. The first is good, the second may be bad but since the set of people caring about AI generally is so much larger, the second is also much less important.

Comment by siiver on Reaching out to people with the problems of friendly AI · 2017-05-16T20:08:48.902Z · score: 1 (1 votes) · LW · GW

I whole-heartedly agree with you, but I don't have anything better than "tell everyone you know about it." On that topic, what do you think is the best link to send to people? I use this, but it's not ideal.

Comment by siiver on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T16:50:47.052Z · score: 0 (0 votes) · LW · GW

Essentially:

Q: Evolution is a dumb algorithm, yet it produced halfway functional minds. How can it be that the problem isn't easy for humans, who are much smarter than evolution?

A: Evolution's output is not just one functional mind. Evolution put out billions of different minds, an extreme minority of them being functional. If we had a billion years of time and a trillion chances to get it right, the problem would be easy. Since we only have around 30 years and exactly 1 chance, the problem is hard.
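The many-tries-versus-one-try intuition can be made concrete with the standard at-least-one-success formula (the per-try probability below is purely illustrative):

```python
# Chance of at least one success in n independent tries, each with
# per-try success probability p. All numbers are illustrative, not
# claims about actual odds of building a functional mind.
def p_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A "dumb" search with a trillion tries vs. a single shot:
print(p_success(1e-9, 10**12))  # ~1.0: with enough chances, near-certain
print(p_success(1e-9, 1))       # ~1e-9: with one chance, near-hopeless
```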

Comment by siiver on There is No Akrasia · 2017-05-01T07:06:00.744Z · score: 1 (1 votes) · LW · GW

I often ask myself the question of "is this really a thing" when it comes to high level concepts like this. I'm very unsure on Akrasia, and you make a decent enough argument. It could very well not actually be a thing (beyond referring to sub-things).

More importantly, though, even if it were a thing, I agree that the strategy you suggest of focusing on the smaller issues is likely the better one.

Comment by siiver on Nate Soares' Replacing Guilt Series compiled in epub Format · 2017-04-30T13:36:40.517Z · score: 2 (2 votes) · LW · GW

I read the first post, which is excellent. Thanks for sharing.

Comment by siiver on MIRI: Decisions are for making bad outcomes inconsistent · 2017-04-10T20:36:00.908Z · score: 1 (1 votes) · LW · GW

Thanks! So UDT is integrated. That's good to hear.

Comment by siiver on MIRI: Decisions are for making bad outcomes inconsistent · 2017-04-09T18:02:46.148Z · score: 1 (1 votes) · LW · GW

Can someone briefly explain to me the difference between functional and updateless decision theory / where FDT performs better? That would be much appreciated. I have not yet read FDT because it does not mention UDT (I checked) and I want to understand why UDT needs "fixing" before I invest the time.

Comment by siiver on OpenAI makes humanity less safe · 2017-04-05T06:55:13.623Z · score: 1 (1 votes) · LW · GW

Ugh. When I heard about this first I naively thought it was great news. Now I see it's a much harder question.

Comment by siiver on Alien Implant: Newcomb's Smoking Lesion · 2017-03-04T20:05:13.657Z · score: 0 (0 votes) · LW · GW

In my situation, it is the same: you can "determine" whether your dial is set to the first or second position by making a decision about whether to smoke.

No.

You can not. You can't.

I'm struggling with this reply. I almost decided to stop trying to convince you. I will try one more time, but I need you to consider the possibility that you are wrong before you continue to the next paragraph. Consider the outside view: if you were right, Yudkowsky would be wrong, Anna would be wrong, everyone who read your post here and didn't upvote this revolutionary, shocking insight would be wrong. Are you sufficiently more intelligent than any of them to be confident in your conclusion? I'm saying this only so that you consider the possibility, nothing more.

You do not have an impact. The reason why you believe otherwise is probably that in Newcomb's problem, you do have an impact in an unintuitive way, and you generalized this without fully understanding why you have an impact in Newcomb's problem. It is not because you can magically choose to live in a certain world despite no causal connection.

In Newcomb's problem, the kind of person you are causally determines the contents of the opaque box, and it causally determines your decision to open them. You have the option to change the kind of person you are, i.e. decide you'll one-box in Newcomb's problem at any given moment before you are confronted with it (such as right now), therefore you causally determine how much money you will receive once you play it in the future. The intuitive argument "it is already decided, therefore it doesn't matter what I do" is actually 100% correct. Your choice to one-box or two-box has no influence on the contents of the opaque box. But the fact that you are the kind of person who one-boxes does, and it happens to be that you (supposedly) can't two-box without being the kind of person who two-boxes.

In the Smoking Lesion, in your alien scenario, this impact is not there. An independent source determines both the state of your box and your decision to smoke or not to smoke. A snapshot of all humans at any given time, with no forecasting ability, reveals exactly who will die of cancer and who won't. If superomega comes from the sky and convinces everyone to stop smoking, the exact same people will die as before. If everyone stopped smoking immediately, the exact same people will die as before. In the future, the exact same people who would otherwise have died still die. People with the box on the wrong state who decide to stop smoking still die.

Comment by siiver on Alien Implant: Newcomb's Smoking Lesion · 2017-03-04T07:27:51.758Z · score: 0 (0 votes) · LW · GW

it will mean that everyone in the future had their dial set to the second position.

No it won't. Nothing you wrote into the story indicates that you can change the box (in case of no forecaster). If you could, that would change everything (and it wouldn't be the smoking lesion anymore).

Comment by siiver on Alien Implant: Newcomb's Smoking Lesion · 2017-03-03T18:34:04.404Z · score: 1 (1 votes) · LW · GW

I know it was the intention, but it doesn't actually work the way you think.

The thing that causes the confusion is that you introduced an infallible decision maker into the brain that takes all autonomy away from the human (in case of there being no forecaster). This is basically a logical impossibility, which is why I just said "this is newcomb's problem". There has to be a forecaster. But okay, suppose not. I'll show you why this does make a difference.

In Newcomb's problem, you do in fact influence the contents of the opaque box. Your decision doesn't, but the fact that you are the kind of person who makes this decision does. Your algorithm does. In the Alien Implant scenario with no forecaster, you don't affect the state of your box at all.

If there was a forecaster, you could prevent people from dying of cancer by telling them about Timeless Decision Theory. Their choice not to smoke wouldn't affect the state of their box, but the fact that you convince them would: the forecaster predicts that you prevent them from smoking, therefore they do not smoke, therefore it predicts they don't smoke, therefore the box is on state 2.

If there was no forecaster, whether or not you smoke has no effect on your box, causally or otherwise. The state of their box is already determined; if you convinced them not to smoke, they would still get cancer and die and the box would be on state 1. Now this never happens in your scenario, which like I said is pretty close to being impossible, hence the confusion.

But it doesn't matter! Not smoking means you live, smoking means you die!

No, it doesn't. Suppose the decision maker was infallible. Everyone who smokes dies. Sooner or later people would all stop smoking. And this is where the scenario doesn't work anymore. Because the number of people dying can't go down. So either it must be impossible to convince people – in that case, why try? – or the decision maker becomes fallible, in which case your whole argument breaks apart. You don't smoke and still die.

Think about this fact again: no forecaster means there is a fixed percentage of the population who has their box on state 1. If you are still not convinced, consider that "winning" by not smoking would then have to mean that someone else gets cancer instead, since you cannot change the number of people. Obviously, this is not what happens.

If there was a forecaster and everyone stopped smoking, no-one would die. If everyone one-boxes in Newcomb's problem, everyone gets rich.

Comment by siiver on Alien Implant: Newcomb's Smoking Lesion · 2017-03-03T06:21:49.834Z · score: 2 (2 votes) · LW · GW

I'm afraid you misunderstand the difference between the Smoking Lesion and Newcomb's problem. In the Smoking Lesion, if you are the kind of person who is affected by the thing which causes lung cancer and the desire to smoke, and you resist this desire, you still die of cancer. Your example is just Newcomb's problem with an infallible forecaster, where if you don't smoke you don't die of cancer. This is an inherent difference. They are not the same.

Comment by siiver on AI Research Discussed by Mainstream Media · 2017-03-02T18:20:49.536Z · score: 2 (2 votes) · LW · GW

I'm pretty happy with this article... though one of my concerns is that the section on how exactly AI could wipe out humanity was a bit short. It wants to cure cancer, it kills all humans, okay, but a reader might just think "well this is easy, tell it not to harm humans." I'd have liked if the article had at least hinted at why the problem is more difficult.

Still, all in all, this could have been much worse.

Comment by siiver on Stupidity as a mental illness · 2017-02-13T22:09:02.462Z · score: 3 (3 votes) · LW · GW

I feel like I am repeating myself. Here is the chain of arguments:

1) A normal person seeing this article and its upvote count will walk away having a very negative view of LessWrong (reasons in my original reply)

2) Making the valid points of this article is in no way dependent on the negative consequences of 1). You could do the same (in fact, a better job at the same) without offending anyone.

3) LessWrong can be a gateway for people to care about existential risk and AI safety.

4) AI safety is arguably the biggest problem in the world right now and extremely low efforts go into solving it, globally speaking.

5) Due to 4), getting people to care about AI safety is extremely important. Due to that and 3), harming the reputation of LessWrong is really bad.

6) Therefore, this article is awful, harmful, and should be resented by everyone.

Comment by siiver on Stupidity as a mental illness · 2017-02-13T21:02:00.994Z · score: 2 (2 votes) · LW · GW

No, I fully acknowledge that the post tries to do those things, see the second half of my reply. I argue that it fails at doing so but is harmful for our reputation etc.

Comment by siiver on Stupidity as a mental illness · 2017-02-13T19:26:51.267Z · score: 2 (2 votes) · LW · GW

It's about a set of mannerisms which many people on LW have that are really bad. I don't know what you mean by woke.

Comment by siiver on Stupidity as a mental illness · 2017-02-12T19:27:24.524Z · score: 8 (8 votes) · LW · GW

L. : While obviously being rational is good, LW as a community seems to be promoting elitism and entitlement.

s: Rationality can be scary that way. But it is about seeking truth, and the community does happen to consist of smart people. Denying that is false humility. Similarly, a lot of ideas many people support just happen to be false. It's not our fault that our society got it wrong on so many issues. We're just after truth.

L. : How does it serve truth to label people who aren't smart as mentally ill?

s: That's terrible, of course. But that's not a flaw of rationality, nothing about rationality dictates "you have to be cruel to other people". In fact if you think about this really hard you'll see that rationality usually dictates being nice.

L: Then how come this post on LessWrong is the most upvoted thing of the last 20 submissions?

s: ...

s: I can't defend that.

L. : Oh, okay. So I'm right and Yudkowsky's site does promote entitlement and sexism.

s: wait, sexism?

L. : Yeah. The last thing I saw from LW was two men talking about what a woman needs to do to fit the role they want her to have in society.

s: Okay, but that's not Yudkowsky's fault! He is not responsible for everyone on LW! The sequences don't promote sexism-

L. : I heard HPMoR is sexist, too.

s: That's not remotely true. It actually promotes feminism. Hermione is-

L. : I'm sorry, but I think I value the thoughts of other people who are more knowledgeable about sexism over yours. At least you condemn this article, but you still hang out on this site.


Scott Alexander has said that it's society's biggest mistake to turn away from intelligence (can't find the article). Even minor increases of intelligence correlate meaningfully with all sorts of things (a negative correlation with crime being one of them afaik). Intelligence is the most powerful force in the universe. A few intelligence points on the people working on Friendly AI right now could determine the fate of our entire species. I want to make it extra clear that I think intelligence is ultra-important and almost universally good.

None of this excuses this article. None of it suggests that it's somehow okay to label stupid people as mentally ill. Rationality is about winning, and this article is losing in every sense of the word. It won't be good for the reputation of LW, it won't be good for our agenda, and it won't be good for the pursuit of truth. The only expected positive effect is making people who read it feel good. It essentially says "being intelligent is good. Being stupid is bad. Other people are stupid. They are the problem. We are better than them." Which is largely true, but about as helpful as taking an IQ test and emailing a friend saying "look, here I am, verifiably smarter than you, and being smart is the most important thing in our society!"

Okay, but that's not a content critique. I just said I think this is bad and went from there. If the article were actually making a strong case, it could still be bad for having an unnecessarily insulting and harmful framing that hurts our cause, but it might be defensible on other grounds. Maybe. We want to do both: to win and to pursue truth, and those aren't the same thing. But I strongly feel the article doesn't succeed on that front, either. Let's take a look.


It's great to make people more aware of bad mental habits and encourage better ones, as many people have done on LessWrong.

sure.

The way we deal with weak thinking is, however, like how people dealt with depression before the development of effective anti-depressants:

seems to be true.

"Stupidity," like "depression," is a sloppy "common-sense" word that we apply to different conditions, which may be caused by genetics (for instance, mutations in the M1 or M3 pathways, or two copies of Thr92Ala), deep subconscious conditioning (e.g., religion), general health issues (like not getting enough sleep), environment (ignorance, lack of reward for intelligent behavior), or bad habits of thought.

There is an implicit assumption here that being stupid requires some kind of explanation, but nothing at all in the article provides a reason why this would be the case. Stupidity is not depression. The reason why it makes sense to label depression as a mental illness is (I assume) that it corresponds to an anomaly in the territory. Suppose we had a function, depressedness(human, time), which displayed how depressed each person on earth has been for, say, the past 10 years. I would expect to see weird behavior of that function: strange peaks over intervals of time on various people, many of whom don't have unusually high values most of the time. This would suggest that it is something to be treated.

If you did the same for intelligence, I'd expect relatively low change on the time axis (aside from an increase at young age and a decrease in the case of actual mental illnesses) and some kind of mathematically typical distribution along the person axis, ranging from 60 to, dunno, 170 or something. I feel really strange about having to make this argument, but this is really the crux of the problem here. The article doesn't argue "here and here are stats suggesting that there are anomalies with this function, therefore there is a thing which we could sensibly describe as a mental illness"; it just says "some people are dumb, here are some dumb things they do, let's label that mental illness." To sum up the fallacy in one sentence: it talks about a thing without explaining why that thing should exist.

It is implied that people being ashamed of admitting to depression is a problem, and I infer that the intention is to make being stupid feel less bad by labeling the condition a "mental illness." But it clearly fails in this regard, and is almost certainly more likely to do the opposite. It's sort of a lose-lose dynamic: it implies that there is some specific thing influencing a natural distribution of intelligence, some special condition that covers "stupid" people and explains why they are stupid – which likely isn't the case; in that light, having a low IQ is probably worse than the article meant to imply, since there is no special condition, you just got the short end of the stick – while also being framed in such a way that it will make unintelligent people feel worse than before, not better.

And where is the reverse causation of believing in religion causing stupidity coming from? Postulating an idea like this ought to require evidence.

The article goes on to say that we should do something to make people smarter. I totally, completely, whole-heartedly agree. But saying high IQ is better than low IQ is something that can and has been done without all of the other stuff attached to it. And research in that direction is being done already. If you wanted to make a case for why we should have more of that, then you could do that so much more effectively without all the negativity attached to it.

Here are the accusations I am making. I accuse this article of not making a good case for anything that is both true and non-obvious, on top of being offensive and harmful to our reputation, and consequently our agenda. (Even if it is correct and there is an irregularity in the intelligence function, it doesn't make a good case.) I believe that if arguments of the same quality were brought forth on any other topic, the article would be treated the same way most articles with weak content are treated: with indifference, few upvotes, and perhaps one or two comments pointing out some flaws in it (if Omega appeared before me, I would bet a lot of money on that theory at a pretty poor ratio). I'll go as far as to call upvoting this a failure of rationality. I agree with Pimgd on everything they said, but I feel like it is important to point out how awful this article is, rather than treating it as a simple point of disagreement. The fact that this has 12 upvotes is really, really, really bad, and a symptom of a much larger problem.

This is not how you are being nice. This is not how you promote rationality. This is not how you win.

Comment by siiver on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-06T17:39:03.160Z · score: 1 (1 votes) · LW · GW

I agree that it's clear that you should one-box – I'm more talking about justifying why one-boxing is in fact correct when it can't logically influence whether there is money in the box. Initially I found this unnerving, but maybe I was the only one.

Comment by siiver on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-06T14:25:45.579Z · score: 0 (0 votes) · LW · GW

Reposting this from last week's open thread because it seemed to get buried

Is Newcomb's Paradox solved? I don't mean from a decision standpoint, but the logical knot of "it is clearly, obviously better to one-box, and it is clearly, logically proven better to two-box". I think I have a satisfying solution, but it might be old news.

Comment by siiver on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-06T02:33:16.097Z · score: 1 (1 votes) · LW · GW

But what makes you think that more complex story types allow many more possibilities?

Isn't that an inherent property of complexity? A larger set of elements -> a larger powerset -> more possibilities. In fact, the size of the powerset grows as 2^x. I think a second Game of Thrones would be less groundbreaking, but it doesn't have to be worse... and the same goes for the 1000th GoT.
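The growth claim is just the powerset cardinality, which is easy to make concrete (a toy illustration of the combinatorics, not a claim about actual story-space):

```python
# A set of n story elements has 2**n subsets, so each added element
# doubles the number of possible combinations.
def powerset_size(n_elements: int) -> int:
    return 2 ** n_elements

print(powerset_size(10))  # 1024
print(powerset_size(20))  # 1048576
# Adding one element always doubles the space:
print(powerset_size(21) // powerset_size(20))  # 2
```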

There seems to be a slowdown in more arty / complex stories this decade (than compared to the 90's for example).

With film and television creation being more democratized than ever, I don't see a reason why the creation of these type of films would slow down apart from the remaining stories requiring more complexity and skill to write than ever.

I don't know as much as you about the industry. These sound worrisome.

I still think it is more likely that there is another reason (not that bold of an assumption) than that we have really run out of complex things to write, because that just doesn't seem to be true looking at how complexity works and how much seems to be doable just by tweaking those more complex pieces we have. Adaptation is another great example.

But, I might be suffering from bias here, because I much prefer the world where I'm right to the one where I'm wrong.

Comment by siiver on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-05T21:20:37.234Z · score: 2 (2 votes) · LW · GW

Well, there is a provably finite possibility space for stories. You only have so many ways to arrange letters in a script. The question is whether it's meaningful.

To use some completely made-up numbers, I think the current possibility space for movies produced by the bottom 80% of people with the current style may be 90% covered. The space for the top 2%, on the other hand, is probably covered for less than 0.1% (and I resisted putting in more zeros there).

To get more concrete, I'll name some pieces (which I avoided doing in my initial post). Take Game of Thrones. It's a huge deal – why? Well, because there isn't really anything like it. But when you get rid of all the typical story tropes – main characters with invulnerability, predictable plot progressions, a heroic minority lucking out against an evil majority, typical villains, etc. – not only does the result get better, the possibility space actually widens. (I'm not saying scripts of this quality don't exist, but it seems to be the only show where a great script, a huge budget, and a competent team came together. There could be thousands of shows like this, and there is just one.)

Or take the movie Being John Malkovich. Basically, there is one supernatural element placed in an otherwise scientifically operating world, and you have a bunch of characters who act like normal humans – meaning largely selfish and emotionally driven – acting around that element. Just thinking about how much you could do following that formula opens up a large area that seems to be largely untouched.

I think we're shooting at Pluto over and over again while (for the most part) ignoring the rest of the universe. And it still works, because production quality and effects are still improving.

(edited)

Comment by siiver on Are we running out of new music/movies/art from a metaphysical perspective? · 2017-02-05T19:28:15.543Z · score: 2 (2 votes) · LW · GW

I'd say no to both. I don't think any genre has come meaningfully close to completion, though I don't know classical or jazz very well.

Let's talk film. If I take a random movie that I didn't like, I find it very similar to others. If, however, I take one that I really like, I find that frustratingly few movies exist that are even similar.

I consider the possibility space to be a function of creativity/intelligence/competence (let's call it skill) of writing, and one that grows faster-than-linearly. The space of medium-skill writing may be nearing completion (though I think this is arguable, too), but the space for more intelligent writing is vast and largely unexplored.

Just think of how many similarities most movies have, starting with the Hero's Journey archetype. This need not be. My two favorite non-animated pieces in film both don't have a main character.

Comment by siiver on A question about the rules · 2017-02-05T10:35:54.988Z · score: 0 (0 votes) · LW · GW

While I would agree that those kinds of accusations are used unfairly at times, I don't think it's unreasonable to assign Yudkowsky's statements a higher a priori chance of being true.

Comment by siiver on Open thread, Jan. 30 - Feb. 05, 2017 · 2017-02-02T15:01:37.727Z · score: 0 (0 votes) · LW · GW

Do people feel like the Newcomb paradox (one-boxing yields the better result, so it is clearly preferable; two-boxing only means taking an additional $1000 through a decision that can't possibly affect the $1 million, so it is clearly preferable) has been resolved by Anna's post in the Sequences (or others)? I strongly feel that I have a solution with no contradictions, but I don't want to post it if it's obvious.

Comment by siiver on First impressions... · 2017-01-24T18:21:17.397Z · score: 9 (9 votes) · LW · GW

At the risk of being ironically guilty of not addressing your actual argument here, I'll point out that flaws of LW, valid or otherwise, aren't flaws of rationality. Rationality just means avoiding biases and fallacies; any failure can only be in the community.

Comment by siiver on Willpower duality (a very short essay) · 2017-01-20T11:55:05.602Z · score: 2 (2 votes) · LW · GW

Yeah, this is pretty much my conclusion, too. If I had read this article a couple of years ago, it'd have helped me a lot.

I'd add that you should still overrule system 1 in some really important and rare cases, it's just not practical for recurring things.

Comment by siiver on [LINK] EA Has A Lying Problem · 2017-01-12T01:46:47.459Z · score: 8 (8 votes) · LW · GW

I'm pretty split on this. I found the quotes from Ben Todd and Robert Wiblin to be quite harmless, but the quotes from Jacy Reese to be quite bad. I don't think it's possible to judge the scope of the problem discussed here based on the post alone. In either case, I think the effort to hold EA to high standards is good.

Comment by siiver on Disjunctive AI scenarios: Individual or collective takeoff? · 2017-01-11T17:18:32.095Z · score: 1 (1 votes) · LW · GW

I don't find this convincing.

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee.

I think the same argument has been made by Hanson, and it doesn't seem to be true. Humans seem significantly superior based on the fact that they are capable of learning language; there is, as far as I know, no recorded instance of a chimpanzee doing that. The quote accurately points out that there are lots of things that an individual human or a tribe can't do, which a chimpanzee can't do either, but it ignores the fact that there are also things which a human can in fact do and a chimpanzee can't. Moreover, even if it were true that a human brain isn't that much more awesome than a chimpanzee's, that wouldn't imply that an AI can't be much more awesome than a human brain.

The remainder of the article argues that human capability really rests on a lot of implicit skills that aren't written down anywhere. I don't think this argument holds. If an AI is capable of reading much more quickly than humans, then it should also be able to watch video footage much more quickly than humans (if not by the same factor), and if it has access to the Internet, then I don't see why it shouldn't be able to learn how to turn the right knobs and handles on an oil rig, how to read the faces of humans, or literally anything else.

Am I missing something here?

Comment by siiver on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-09T21:12:53.858Z · score: 0 (0 votes) · LW · GW

Question: Regardless of the degree to which this is true, if everyone collectively assumed that Valence Utilitarianism (every conscious experience has value (positive or negative, depending on pleasantness/unpleasantness), each action's utility is the sum of all value it causes / changes / prevents) was universally true, how much would that change about Friendly AI research?

Comment by siiver on Epilogue: Atonement (8/8) · 2017-01-05T14:38:13.610Z · score: 0 (0 votes) · LW · GW

Well, I don't think this is even complicated. The super happies are right... it is moral for them to forcefully reform us, and it is moral for us to erase the babyeater species.

Suffice to say I preferred the normal ending.

Comment by siiver on Deconstructing overpopulation for life extensionists · 2017-01-02T01:20:01.698Z · score: 6 (6 votes) · LW · GW

Link is missing!

Comment by siiver on Expected Error, or how wrong you expect to be · 2016-12-25T10:16:54.671Z · score: 3 (3 votes) · LW · GW

One thing to keep in mind is that just because something already exists somewhere on earth doesn't make it useless on LW. The thing that, in theory, makes this site valuable in my experience is that well-received content comes with a guarantee of being high quality. Sure, I could study for years and read everything the Sequences cover from sources in the various fields, but I couldn't read it all in one place without anything wasting my time in between.

So I don't think "this has already been figured out in book XX" implies that it isn't worth reading. Because I won't go out to read book XX, but I might read this post.

Comment by siiver on Superintelligence: The Idea That Eats Smart People · 2016-12-24T17:48:10.788Z · score: 0 (0 votes) · LW · GW

Frankly, I think that most people have no business having confident beliefs about any controversial topic. It's a bit weird to argue about what an average-IQ person "should" believe, because applying a metric like "what is the average IQ of people holding this belief" is not something they're likely to do. But it would probably yield better results than whatever algorithm they're using.

Your first sentence isn't really a sentence, so I'm not sure what you were trying to say. I'm also not sure if you're talking about the same thing I was talking about, since you're using different words. I was talking specifically about the mean IQ of people holding a belief. Is this in fact higher or not?

I concede the point (not sure if you were trying to make it) that a high mean IQ of such a group could be due to filter effects. Let's say A is the set of all people, B ⊂ A the set of all people who think about Marxism, and C ⊂ B the set of all people who believe in Marxism. Then, even if the mean IQs of B and C are the same, meaning that belief in Marxism is not correlated with IQ among those who know about it, the mean IQ of C would still be higher than that of A, because the mean IQ of B is higher than that of A: people who even know about Marxism are already smarter on average than those who don't.

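A quick toy simulation makes this filter effect concrete (all distributions and selection probabilities here are invented purely for illustration):

```python
import random

random.seed(0)

# A: the general population, IQ ~ normal with mean 100, sd 15 (made-up model).
A = [random.gauss(100, 15) for _ in range(100_000)]

# B: the subset who think about Marxism at all, modeled as a selection
# biased toward higher IQ (this bias is the assumption in question).
B = [iq for iq in A if random.random() < (iq - 70) / 100]

# C: believers, drawn from B with NO further IQ correlation.
C = [iq for iq in B if random.random() < 0.5]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean IQ of A: {mean(A):.1f}")
print(f"mean IQ of B: {mean(B):.1f}")
print(f"mean IQ of C: {mean(C):.1f}")
```

Even though C is drawn from B with no further IQ filter, its mean comes out several points above A's, purely because B was already a selected group.
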
So that effect is real and I'm sure applies to AI. Now if the claim is just "people who believe in the singularity are disproportionately smart" then that could be explained by the effect, and maybe that's the only claim the article made, but I got the impression that it also claimed "most people who know about this stuff believe in the singularity" which is a property of C, not B, and can't be explained away.