Comments

Comment by Lumifer on The map of "Levels of defence" in AI safety · 2018-01-05T16:35:27.114Z · LW · GW

There seems to be a complexity limit to what humans can build. A full AGI is likely to be somewhere beyond that limit.

The usual solution to that problem -- see EY's fooming scenario -- is to make the process recursive: let a mediocre AI improve itself, and as it gets better, it can improve itself more rapidly. Exponential growth can go fast and far.

This, of course, gives rise to another problem: you have no idea what the end product is going to look like. If you're looking at the gazillionth iteration, your compiler flags were probably lost around the thousandth iteration and your chained monitor system mutated into a cute puppy around the millionth iteration...
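
As a toy illustration of the recursion (a purely made-up model -- the growth rate, the "capability" units, and the iteration count are arbitrary assumptions, not anything from EY's writings):

```python
# Toy model of recursive self-improvement: each iteration the AI applies
# its current capability to improving itself, multiplying capability by
# (1 + r). r = 0.1 and 100 iterations are arbitrary illustrative choices.
def recursive_improvement(capability=1.0, r=0.1, iterations=100):
    for _ in range(iterations):
        capability *= (1 + r)  # a better AI makes a bigger improvement step
    return capability

print(f"{recursive_improvement():.0f}x starting capability")  # ~13781x
```

A hundred quiet iterations and the thing is four orders of magnitude past the point where you last audited it.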

Probabilistic safety systems are indeed more tractable, but that's not the question. The question is whether they are good enough.

Comment by Lumifer on The map of "Levels of defence" in AI safety · 2018-01-04T15:47:35.431Z · LW · GW

Are you reinventing Asimov's Three Laws of Robotics?

Comment by Lumifer on Happiness Is a Chore · 2017-12-20T20:36:08.916Z · LW · GW

I suspect the solution is this.

Comment by Lumifer on Announcing the AI Alignment Prize · 2017-12-19T16:40:27.524Z · LW · GW

tomorrow

That's not conventionally considered to be "in the long run".

We don't have any theory that would stop AI from doing that

The primary reason is that we don't have any theory about what a post-singularity AI might or might not do. Doing some pretty basic decision theory focused on the corner cases is not "progress".

Comment by Lumifer on Why Bayesians should two-box in a one-shot · 2017-12-19T16:38:02.612Z · LW · GW

It seems weird that you'd deterministically two-box against such an Omega

Even in the case when the random noise dominates and the signal is imperceptibly small?
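
For concreteness, a quick expected-value sketch (standard Newcomb payoffs assumed: $1,000,000 in the opaque box iff Omega predicted one-boxing, $1,000 in the transparent box; p is Omega's predictive accuracy):

```python
def expected_values(p, big=1_000_000, small=1_000):
    """Expected payoffs against an Omega that predicts your choice
    correctly with probability p."""
    one_box = p * big                # opaque box is full iff Omega predicted "one-box"
    two_box = small + (1 - p) * big  # full iff Omega wrongly predicted "one-box"
    return one_box, two_box

print(expected_values(0.5))   # (500000.0, 501000.0): a coin-flip Omega favors two-boxing
print(expected_values(0.99))  # roughly (990000, 11000): an accurate Omega favors one-boxing
```

The crossover sits at p = (big + small) / (2 · big) = 0.5005: below that, two-boxing has the higher expectation; above it, one-boxing does.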

Comment by Lumifer on Why Bayesians should two-box in a one-shot · 2017-12-19T16:35:13.834Z · LW · GW

So the source-code of your brain just needs to decide whether it'll be a source-code that will be one-boxing or not.

First, in the classic Newcomb problem, meeting Omega comes as a surprise to you. You don't get to precommit to deciding one way or the other because you had no idea such a situation would arise: you just get to decide now.

You can decide, however, whether you're the sort of person who accepts that their decisions can be deterministically predicted in advance with sufficient certainty, or whether you'll claim that other people predicting your choice must be a violation of causality (it's not).

Why would you make such a decision if you don't expect to meet Omega and don't care much about philosophical head-scratchers?

And, by the way, predicting your choice is not a violation of causality, but believing that your choice (of the boxes, not of the source code) affects what's in the boxes is.

Second, you are assuming that the brain is free to reconfigure and rewrite its own software, which is clearly not true for humans or for any existing agent.

Comment by Lumifer on Why Bayesians should two-box in a one-shot · 2017-12-19T16:30:54.700Z · LW · GW

Old and tired, maybe, but clearly there is not much consensus yet (even if, ahem, some people consider it to be as clear as day).

Note that who makes the decision is a matter of control and has nothing to do with freedom. A calculator controls its display, and so the "decision" to output 4 in response to 2+2 is its own, in a way. But applying decision theory to a calculator is nonsensical and there is no free choice involved.

Comment by Lumifer on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-12-12T15:27:26.625Z · LW · GW

LW is kinda dead (not entirely, there is still some shambling around happening, but the brains are in short supply) and is supposed to be replaced by a shinier reincarnated version which has been referred to as LW 2.0 and which is now in open beta at www.lesserwrong.com

LW 1.0 is still here, but if you're looking for active discussion, LW 2.0 might be a better bet.

Re qualia, I suggest that you start by trying to set up hard definitions for the terms "qualia" and "exists". Once you do, you may find the problem disappears -- see e.g. this.

Re simulation, let me point out that the simulation hypothesis is conventionally known as "creationism". As to the probability not being calculable, I agree.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-11T15:30:03.249Z · LW · GW

The truth that curi and myself are trying to get across to people here is... it is the unvarnished truth... know far more about epistemology than you. That again is an unvarnished truth

In which way are all these statements different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?

Lots of people claim to have access to Truth -- what makes you special?

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T19:43:38.537Z · LW · GW

LOL. You keep insisting that people have to play by your rules but really, they don't.

You can keep inventing your own games and declaring yourself winner by your own rules, but it doesn't look like a very useful activity to me.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T07:23:48.703Z · LW · GW

genetic algorithms often write and later read data, just like e.g. video game enemies

Huh? First, the expression "genetic algorithms" doesn't mean what you think it means. Second, I don't understand the writing and reading data part. Write which data to what substrate?

your examples are irrelevant b/c you aren't addressing the key intellectual issues

I like dealing with reality. You like dealing with abstractions in your head. We talked about this -- we disagree. You know that.

But if you are uninterested in empirical evidence, why bother discussing it at all?

you won't want to learn or seriously discuss

Yes, I'm not going to do what you want me to do. You know that as well.

you will be hostile to the idea that you need a framework in which to interpret the evidence

I will be hostile to the idea that I need your framework to interpret the evidence, yes. You know that, too.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T00:39:49.599Z · LW · GW

The problem is that very, very few orcas do that -- only two pods in the world, as far as we know. Orcas which live elsewhere (e.g. the Pacific Northwest orcas, which are very well observed) do not do anything like this. Moreover, there is evidence that the technique is taught by adults to juvenile orcas. See e.g. here or here.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T05:15:33.068Z · LW · GW

If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?

Oh, get stuffed. I tried debating you and the results were... discouraging.

Yes, I obviously think that CR is deluded.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T00:34:00.755Z · LW · GW

This sentence from the OP:

Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts.

A bit more generally, the claim that humans are UKCs (universal knowledge creators) and that nothing else can create knowledge, where knowledge is defined as a way to solve a problem.

Comment by Lumifer on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-08T18:11:55.688Z · LW · GW

the AI risks starting these triggers when it starts to think first thoughts about existing of the triggers

So basically you have a trap which kills you the moment you become aware of it. The first-order effect will be a lot of random deaths from just blundering into such a trap while walking around.

I suspect that the second-order effect will be the rise of, basically, superstitions and some forms of magical thinking which will be able to provide incentives to not go "there" without actually naming "there". I am not sure this is a desirable outcome.

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T17:32:12.569Z · LW · GW

It's also rank nonsense -- this bit in particular:

dog genes contain behavioural algorithms pre-programmed by evolution

Some orcas hunt seal pups by temporarily stranding themselves on the beaches in order to reach their prey. Is that behaviour programmed in their genes? The genes of all orcas?

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T15:56:12.553Z · LW · GW

Show results in 3 separate domains.

  • Chess
  • Go
  • Shogi

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T15:55:23.325Z · LW · GW

Unreason is accepting the claims of a paper at face value, appealing to its authority

Which particular claim of the paper's did I accept at face value that you think is false? Be specific.

I was aware of AlphaGo Zero before I posted -- check out my link

AlphaGo Zero and AlphaZero are different things -- check out my link.

In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T06:02:35.061Z · LW · GW

You sound less and less reasonable with every comment.

It doesn't look like your conversion attempts are working well. Why do you think this is so?

Comment by Lumifer on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T20:49:49.742Z · LW · GW

AlphaGo is a remarkable algorithm, but it cannot create knowledge

Funny you should mention that. AlphaGo has a successor, AlphaZero. Let me quote:

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.

Note: "given no domain knowledge except the game rules"

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T21:45:14.918Z · LW · GW

No, what surprises me is your belief that you just figured it all out. Using philosophy. That's it, we're done, everyone can go home now.

And since everything is binary and you don't have any tools to talk about things like uncertainty, this is The Truth and anyone who doesn't recognize it as such is either a knave or a fool.

There is also a delicious overtone of irony in that a guy as lacking in humility as you chooses to describe his system as "fallible ideas".

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T21:29:10.452Z · LW · GW

You don't think that figuring out which ideas are "best available" is the hard part? Everyone and his dog claims his idea is the best.

well, using philosophy i did that hard part and figured out which ones are good

LOL. Oh boy.

Really? So you just used t̶h̶e̶ ̶f̶o̶r̶c̶e̶ philosophy and figured it out? That's great! Just a minor thing I'm confused about -- why are you here chatting on the 'net instead of sitting on your megayacht with a line of VCs in front of your door, willing to pay you gazillions of dollars for telling them which ideas are actually good? This looks to be VERY valuable knowledge, surely you should be able to exchange it for lots and lots of money in this capitalist economy?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T20:59:46.048Z · LW · GW

why are you trying to make claims about them?

I didn't think that stating that libertarians like Ayn Rand was controversial. We are talking about political power and neither libertarians nor objectivists have any. In this context the fact that they don't like each other is a small family squabble in some far-off room of the Grand Political Palace.

intellectual fixing of errors

What is an "intellectual" fixing of an error instead of a plain-vanilla fixing of an error?

Aubrey de Grey says there's a 50% chance it's 100 million a year for 10 years away.

What's the % chance that he is correct? AFAIK he has been saying the same thing for years.

it's most scientists and funders not wanting the best available ideas

You don't think that figuring out which ideas are "best available" is the hard part? Everyone and his dog claims his idea is the best.

most people are pro-aging and pro-death

I don't think that's true. Most people don't want to live for a long time as wrecks with Alzheimer's and pains in every joint, but invent a treatment that lets you stay at, say, the 30-year-old level of health indefinitely and I bet few people will refuse (at least the non-religious ones).

can and should be explained in terms of culture, memes, education, human choice, environment, etc

Why is there a "should"?

The twin studies are garbage, btw

All of them?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T20:16:58.353Z · LW · GW

Where can I find them?

I'm not plugged into these networks, but Cato will probably be a good start.

apparently thinks that homosexuality is a disease

Kinda. As far as I remember, homosexuality is an interesting thing because it's not very heritable (something like 20% concordance for MZ twins), but it also tends to persist across all cultures and ages, which points to a biological aspect. It should be heavily disfavoured by evolution, but apparently isn't. So it's an evolutionary puzzle. Cochran's theory -- which he freely admits lacks any evidence in its favour -- is that there is some pathogen which operates in utero or at a very early age and which pushes the neurohormonal balance towards homosexuality.

This is clearly spitballing in the dark and Cochran, as far as I know, doesn't insist that it's The Truth. It's just an interesting alternative that everyone else ignores.

scientific racism

Generally translated as "I don't like the conclusions which science came up with" :-D

I might or might not disagree with you politically, but I believe myself to be capable of distinguishing descriptive statements (this is what it is) from normative ones (this is what it should be).

I don't wanna pick a random paper from one of them

I am not expecting you to go critique their science. Their names were a handwave in the direction of what kind of heritability studies we're talking about.

might actually agree with my broader point (about the possibility of going into fields and pointing out inadequacies if you know what you're doing, due to the fields being inadequate)

It's a bit more complicated. Scientific fields have a lot of diverse content. Some of it is invariably garbage, and it's not hard to go into any field, find some idiots, and point out their inadequacies. However, that is not a particularly difficult or worthwhile activity, and certainly one that can be done by non-philosophers :-D In particular, during the last decade or so, people who understand statistics have been having a lot of fun at the expense of domain "experts" who don't.

I would generally expect that in every field there would be a relatively small core of clueful people who are actually pushing the frontier and a lot of deadweight just hanging on. I would also expect that it would be difficult to identify this core without doing a deep dive into the literature or going to conferences and actually talking to people.

However the thing is, I like empirical results. So if you claim to be able to go into a field and "fix massive errors", I don't think that merely pointing at the idiots and their publications is going to be sufficient. Fixing these errors should produce tangible results and if the errors are massive, the results should be massive as well. So where is my cure for aging? frozen and fully revived large mammals? better batteries, flying cars, teleportation devices, etc.?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T16:43:31.853Z · LW · GW

A pharmaceutical company with a strategy "let's try random molecules and do scientific studies whether they cure X" would go out of business.

Funny you should mention this.

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. ...Eve’s robotic system is capable of screening over 10,000 compounds per day.

source

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T15:25:08.198Z · LW · GW

Considering Rand was anti-libertarianism

Funny how a great many libertarians like her a lot... But we were talking about transforming the world. How did she transform the world?

wanna do heritability studies? cryonics?

Cryonics is not a science. It's an attempt to develop a specific technology which isn't working all that well so far. By heritability do you mean evo bio? Keep in mind that I read people like Gregory Cochran and Razib Khan so I would expect you to fix massive errors in their approaches.

Pointing me to large amounts of idiocy in published literature isn't a convincing argument: I know it's there, all reasonable people know it's there, it's a function of the incentives in academia and doesn't have much to do with science proper.

he came up with much better ones

You are a proponent of one-bit thinking, are you not? In Yes/No terms, de Grey set himself a goal and failed at it.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T05:57:31.439Z · LW · GW

consider the influence Ayn Rand had

Let's see... Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.

What could stop them?

Oh dear, there is such a long list. A gun, for example. Men in uniform who are accustomed to following orders. Public indifference (a Kardashian lost 10 lbs through her special diet!).

some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc

Are you familiar with the term "magical thinking"? Popper couldn't do it. Ayn Rand couldn't do it. DD can't do it. You can't do it. So why would you suddenly have a thousand god-emperors who can do anything they want to, purely through the force of reasoning?

Trump only won because his campaign was run, to a partial extent, by lesser philosophers

I think our evaluations of the latest presidential elections... differ.

a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking

You are a good philosopher, yes? Would you like to demonstrate this with some scientific field?

Even a mediocre philosopher like Aubrey de Grey managed to do something like that.

de Grey runs a medical think tank that so far has failed at its goal. In which way did he "fix massive errors"?

Have you read Atlas Shrugged? It's a book in which a philosophy teacher and his 3 star students change the world.

... (you do understand that this is fiction?)

try to imagine someone with ~100x better ideas and how much more effective that would be

We're back to magical thinking (I can imagine a lot of things, but presumably we are talking about reality), but even then, what will that someone do against a few grams of lead at high velocity?

He spread bad ideas

Did he believe they were bad ideas? How is his belief in his ideas different from your belief in your ideas?

a few people survive childhood

Since my childhood was sufficiently ordinary, I presume that I did not survive. Oops, you're talking to a zombie...

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T05:42:21.918Z · LW · GW

i don't suppose you or anyone else wrote down your reasoning

Correct! :-)

i disagree that it's false. you aren't giving an argument.

This is false under my understanding of the standard English usage of the word "torture".

then i guess you can continue your life of sin

Woohoo! Life of sin! Bring on the seven deadlies!!

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:23:48.387Z · LW · GW

So, a professor of physics failed to convert the world to his philosophy. Why are you surprised? That's an entirely normal thing, exactly what you'd expect to happen. Status has nothing to do with it; this is like discussing the color of your shirt while trying to figure out why you can't fly by flapping your arms.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:19:24.329Z · LW · GW

I don't see what's to envy about Marx.

His ideas got to be very very popular.

I estimate 1000 great people with the right philosopher is enough to promptly transform the world

ROFL. OK, so one philosopher and 1000 great people. Presumably specially selected from early childhood, since normal upbringing produces mental cripples? Now, keeping in mind that you can only persuade people with reason, what next? How does this transformation of the world work?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T04:15:00.587Z · LW · GW

ppl don't need to die, that's wrong

And yet everyone dies.

that's the part where you give an argument

Nope, that's true only if I want to engage in this discussion and I don't. Been there, done that, waiting for the t-shirt.

"torture" has an English meaning separate from emotional impact

Yes. Using that meaning, the sentence "I mean psychological "torture" literally" is false. Or did you mean something by these scare quotes?

if you wanted to have a productive conversation

LOL. Now, if you wanted to have a productive conversation you would have defined your terms. See how easy it is? :-D

you don't seem to be aware that you're reading a summary essay

Oh, I am.

are you aware of many common ways force is initiated against children?

Of course. So?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:29:02.014Z · LW · GW

It hasn't worked for him.

It didn't? What's your criterion for "worked", then? If you want to convert most of the world to your ideology, you'd better call yourself a god, or at least a prophet -- not a mere philosopher.

I guess Karl Marx is a counterexample, but maybe you don't want to use these particular methods of "persuasion".

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:23:03.984Z · LW · GW

everything good in all of history is from voluntary means

I understand this assertion. I don't think I believe it.

ppl initiate force when they fail to persuade

Kinda. When using force is simpler/cheaper than persuasion. And persuading people that they need to die is kinda hard :-/

The words have meanings.

Words have a variety of meanings which also tend to heavily depend on the context. If you want to convey precise meaning, you need not only to use words precisely, but also to convey to your communication partner which particular meaning you attach to these words.

Right here is an example: I interpret you using words like "cripple" and "torture" as tools of emotional impact. In my experience this is how people use them (outside of specific technical areas). If you mean something else, you need to tell me: you need to define the words you use.

It's not a replacement for talking about issues you think are important, it's a prerequisite to meaningful communication.

So you said "I'm using strong words b/c they correspond to my intended claims" and that tells me nothing. So you basically want to say that conventional upbringing is bad? Extra bad? Super duper extra bad? Are there any nuances, any particular kind of bad?

You are failing to communicate.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T01:29:56.047Z · LW · GW

those people don't matter intellectually anyway

Ivory tower it is, then.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T01:06:29.561Z · LW · GW

The right approach is to use purely voluntary methods which are not rightly described as passive.

How successful do you think these are, empirically?

I don't see the special difficulty with evaluating those statements as true or false.

I do. Quantum physics operates with very well defined concepts. Words like "cripple" or "torture" are not well-defined and are usually meant to express the emotions of the speaker.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T01:02:23.352Z · LW · GW

"Not getting shunned" is not quite the same thing as attempting "persuasion via attaining social status".

Which method do you think can work for what you want to do? Any success so far?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T22:11:14.252Z · LW · GW

accusations of "extremism" are not critical arguments

Of course they are not. But such perceptions have consequences for those who are not hermits or safely ensconced in an ivory tower. If you want to persuade (and you do, don't you?) the common people, getting labeled as an extremist is not particularly helpful.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T22:08:11.024Z · LW · GW

I am not worried. However, taking positions viewed as extremist by the mainstream (aka the normies) has consequences. Often you are shunned and become an outcast -- and being an outcast doesn't help with extinguishing the fire. There are also moral issues -- can you stand by passively and just watch? If you can, does that make you complicit? If you can't, you are transitioning from a preacher into a revolutionary, and that's an interesting transition.

The quotes above don't sound like they could be usefully labeled "true" or "not true" -- they smell like ranting, and for this genre you need to identify the smaller (and less exciting) core claims and define the terms: e.g. what is a "mental cripple", and by which criteria would we classify people as such or not?

Oh, and I would also venture a guess that neither you nor curi have children.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T21:05:28.521Z · LW · GW

I made no claims as to extremeness

Would you like to?

You are basically a missionary: you see savages engage in horrifying practices AND they lose their soul in the process. The situation looks like it calls for extreme measures.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T20:40:36.695Z · LW · GW

So you don't feel these quotes represent an "extremist" point of view?

Current parenting and educational practices destroy children's minds. They turn children into mental cripples, usually for life. ... Almost everyone is broken by being psychologically tortured for the first 20 years of their life. Their spirit is broken, their rationality is broken, their curiosity is broken, their initiative and drive are broken, and their happiness is broken. And they learn to lie about what happened ...

When I use words like "torture" regarding things done to children or to the "mentally ill", people often assume I'm exaggerating or speaking about the past when kids were physically beaten much more. But I mean psychological "torture" literally ...

Parenting more reliably hurts people in a longterm way than torture, but has less overt malice and cruelty. Parenting is more dangerous because it taps into anti-rational memes better ...

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T18:59:36.398Z · LW · GW

Though actually I have gone to curi's website (or, rather, websites; he has several) and read his stuff

So have I, but curi's understanding of "using references" is a bit more particular than that. Unrolled, it means "your argument has been dealt with by my tens of thousands of words over there [waves hand in the general direction of the website], so we can consider it refuted and now will you please stop struggling and do as I tell you".

Why, yes, I am being snarky.

Embrace your snark and it will set you free! :-D

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T15:45:21.000Z · LW · GW

And knowing how this works enables us to think better.

Sure, but that's not sufficient. You need to show that the effect will be significant, suitable for the task at hand, and the best use of the available resources.

Drinking CNS stimulants (such as coffee) in the morning also enables us to think better. So what?

And the breakthrough in AGI will come from epistemology.

How do you know that?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T15:39:00.175Z · LW · GW

This is just more evasion.

Fail to ask a clear question, and you will fail to get a clear answer.

You know Yudkowsky also wants to save the world right?

Not quite save -- EY wants to lessen the chance that humans will be screwed over by an off-the-rails AI.

That Less Wrong is ultimately about saving the world?

Oh grasshopper, maybe you will eventually learn that not all things are what they look like and even fewer are what they say they are.

you're in the wrong place

I am disinclined to accept your judgement in this matter :-P

Hypothetically, suppose you came across a great man ... In what way would your response to him be different to your response to curi?

Obviously it depends on the way he presented his new ideas. curi's ideas are not new and were presented quite badly.

There are two additional points here. One is that knowledge is uncertain -- fallible, if you wish. Knowledge about the future (= forecasts) is much more so. Great men rarely know they are great; they may guess at their role in history, but should properly be very hesitant about it.

Two, I'm much more likely to meet someone who knows he is Napoleon, the rightful Emperor of France, and honestly says so, rather than a truly great man who goes around proclaiming his greatness. I'm sure Napoleon has some great ideas that I'm unfamiliar with -- what should my response be?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T05:35:32.939Z · LW · GW

That's not an answer. That's an evasion.

The question is ill-posed. Without context it's too open-ended to have any meaning. But let me say that I'm here not to save the world. Is that sufficient?

Epistemology tells you how to think.

No, it doesn't. It deals with acquiring knowledge. There are other things -- like logic -- which are quite important to thinking.

impute bad motives to curi?

I don't impute bad motives to him. I just think that he is full of himself and has... delusions about his importance and relationship to truth.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T02:12:47.322Z · LW · GW

I still have no idea what "hostile to using references" is meant to mean.

It means you're unwilling to go to curi's website and read all he has written on the topic when he points you there.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T02:10:24.662Z · LW · GW

Why are you here?

I've been here awhile. Your account is a few days old. Why are you here?

The world is burning and you're helping spread the fire.

Whether the world is burning or not is an interesting discussion, but I'm quite sure that better epistemology isn't going to put out the fire. Writing voluminous amounts of text on a vanity website isn't going to do it either.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T21:05:21.387Z · LW · GW

Are you really going to argue for Pascal's Wager here?

Tell me which single hell you think you're avoiding and I'll point out a few others in which you will end up.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T21:03:30.686Z · LW · GW

He used his philosophy skills to become a world-class gamer

Gold! This is solid gold!

Are you aware of the battles great ideas and great people often face?

Have you considered becoming a stand-up comedian?

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T06:40:11.959Z · LW · GW

The interesting thing is that the answer is "nothing". Nothing at all.

Comment by Lumifer on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T06:39:40.627Z · LW · GW

This is so ridiculously bombastic, it's funny.

So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides? Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?