Posts

The Critical Rationalist View on Artificial Intelligence 2017-12-06T17:26:25.706Z

Comments

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-11T10:22:26.410Z · LW · GW

Yes, there are situations where it can be harmful to state the truth. But there is a common social problem where people do not say what they think, or water it down, for fear of causing offense, or because they are looking to gain status. That was the context.

The truth that curi and I are trying to get across to people here is that you are doing AI wrong and are wasting your lives. We are willing to be ridiculed for stating that, but it is the unvarnished truth. AI has been stuck in a rut for decades with no progress. People kid themselves that the latest shiny toy like Alpha Zero is progress, but it is not.

AI research has bad epistemology at its heart and this is holding back AI in the same way that quantum physics was held back by bad epistemology. David Deutsch had a substantial role in clearing that problem up in QM (although there are many who still do not accept multiple universes). He needed the epistemology of CR to do that. See The Fabric of Reality.

Curi, Deutsch, and I know far more about epistemology than you. That again is an unvarnished truth. We are saying we have ideas that can help get AI moving -- in particular, CR. You are blinded by things you think are so but that cannot be so. The myth of Induction, for one.

AI is blocked -- you have to consider that some of your deeply held ideas are false. How many more decades do you want to waste? These problems are too urgent for that.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T06:46:27.755Z · LW · GW

People are overly impressed by things that animals can do, such as dogs opening doors, and think the only explanation is that they must be learning. Conversely, people think children being good at something means they have an in-born natural talent. The child is doing something way more remarkable than the dog but does not get to take credit. The dog does.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-10T06:27:57.811Z · LW · GW

I would be happy to rewrite the first line to say: An entity is either a UKC or it has zero -- or approximately zero -- potential to create knowledge. Does that help?

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T21:06:20.392Z · LW · GW

Can we agree that I am not trying to proselytize anyone? I think people should use their own minds and judgment and I do not want people just to take my word for something. In particular, I think:

(1) All claims to truth should be carefully scrutinised for error.

(2) Claiming authority or pointing skyward to an authority is not a road to truth.

These claims should themselves be scrutinised for error. How could I hold these consistently with holding any kind of religion? I am open to the idea that I am wrong about these things too or that I am inconsistent.

I also think claims to truth should not be watered down for social reasons. That is to disrespect the truth. People can mistake not watering down the truth for religious fervour and arrogance.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T03:52:22.336Z · LW · GW

he proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.

Our human ancestors on the African savannah could not construct a nuclear reactor, nor the skyline of Manhattan, nor an 18-core microprocessor. They had no idea how. But they had the potential in them, and that potential has been realized today. To do that, we created deep knowledge about how our universe works. Why do you think that is not going to continue? Why should we not be able to construct a von Neumann probe at some point in the future? Note that most of the advances I am talking about occurred in the last few hundred years. Humans had a big problem with static memes preventing progress for millennia (see BoI). If not for those memes, we might well be at the stars by now. While humans made all this progress, dolphins and border collies did what?

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T03:22:43.304Z · LW · GW

If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science and then demand that it also be able to control pianist robots and scuba dive and run a nail salon.

We have given you a criterion by which you can judge an AI: whether it is a UKC or not. As I explained in the OP, if something can create knowledge in some disparate domains, then you have a UKC. We will be happy to declare it as such. You are under the false idea that AI will arrive by degrees, that there is such a thing as a partial UKC, and that knowledge creators lie on a continuum with respect to their potential. AI will no more arrive by degrees than our universal computers did. Universal computation came about through Turing in one fell swoop, and very nearly through Babbage a century before.

You underestimate the difficulties facing AI. You do not appreciate how truly different people are to other animals and to things like Alpha Zero.

EDIT: That was meant to be in reply to HungryHobo.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T01:49:37.677Z · LW · GW

Critical Rationalists think that E. T. Jaynes is confused about a lot of things. There has been discussion about this on the Fallible Ideas list.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T01:30:42.484Z · LW · GW

https://www.youtube.com/watch?v=0KmimDq4cSU

Everything he says in that video is in accord with CR and with what I wrote about how we acquire knowledge. Note how the audience laughs when he says you start with a guess. What he says is in conflict with how LW thinks the scientific method works (like in the Solomonoff guide I referenced).

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T00:25:54.452Z · LW · GW

FYI, Feynman was a critical rationalist.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-09T00:12:35.617Z · LW · GW

Millions of people have incorrect beliefs about vaccines, millions more are part of new age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics) ...

You are indirectly echoing ideas that come from David Deutsch. FYI, Deutsch is a proponent of the Many Worlds Explanation of quantum physics and he invented the idea of the universal quantum computer, founding quantum information theory. He talks about them in BoI.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T20:53:52.049Z · LW · GW

The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it with literally nothing except the objection that it's claiming things could be inexplicable and hence should be dismissed. (on a related note the president of the tautology club is the president of the tautology club)

Deutsch gives arguments that people are universal explainers/constructors (this requires that they be computationally universal as well). What is your argument that there are some things a universal explainer could never understand? Alternatively, what is your argument that people are not universal explainers? Deutsch talks about the “reach” of knowledge: knowledge created to solve a problem in one domain can solve problems in other domains too. What is your argument that the knowledge we create could never reach into this inexplicable realm you posit?

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T09:54:23.820Z · LW · GW

Unreason is accepting the claims of a paper at face value, appealing to its authority, and, then, when this is pointed out to you, claiming the other party is unreasonable.

I was aware of AlphaGo Zero before I posted -- check out my link. Note that it can't even learn the rules of the game. Humans can. They can learn the rules of all kinds of games. They have a game-rule learning universality. That AlphaGo Zero can't learn the rules of one game is indicative of how much domain knowledge the developers actually put into it. They are fooling themselves if they think AlphaGo Zero has superhuman learning ability or that it is progress towards AI.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T18:47:58.235Z · LW · GW

As I explained in the post, dog genes contain behavioural algorithms pre-programmed by evolution. The algorithms have some flexibility -- akin to parameter tuning -- and the knowledge contained in the algorithms is general-purpose enough that it can be tuned for dogs to do things like open boxes. So it might look like the dog is learning something, but the knowledge was created by biological evolution, not by the individual dog. The knowledge in the dog's genes is an example of what Popper calls knowledge without a knowing subject. Note that all dogs have approximately the same behavioural repertoire. They are kind of like characters in a video game. Some boxes a dog will never open, though a human will learn to do it.
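
To illustrate the distinction being drawn -- a toy sketch only, with made-up names, not a claim about how real dog cognition is implemented -- tuning a parameter can change how well a fixed repertoire performs, but it can never add to the repertoire:

```python
import random

# Toy illustration of "parameter tuning" within a fixed, pre-programmed
# behavioural repertoire. The repertoire itself never grows; only the
# weights on existing behaviours change.

REPERTOIRE = ["push_lid", "paw_latch", "nudge_corner"]  # fixed "by evolution"
weights = {action: 1.0 for action in REPERTOIRE}

def try_to_open(box_opens_with: str) -> bool:
    action = random.choices(REPERTOIRE, [weights[a] for a in REPERTOIRE])[0]
    success = action == box_opens_with
    # "Tuning": reinforce behaviours that worked, dampen ones that didn't.
    weights[action] *= 1.5 if success else 0.9
    return success

# A box that opens via "paw_latch" will come to be opened reliably.
# A box requiring an action outside REPERTOIRE never will, no matter how
# long the tuning runs. Creating a new action is what the dog, on this
# view, cannot do -- and what a UKC can.
```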

A child is a UKC so when a child learns to open a box, the child creates new knowledge afresh in their own mind. It was not put there by biological evolution. A child's knowledge of box-opening will grow, unlike a dog's, and they will learn to open boxes in ways a dog never can. And different children can be very different in terms of what they know how to do.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T00:17:43.462Z · LW · GW

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

Please quote me accurately. What I wrote was:

AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have

I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T23:41:49.613Z · LW · GW

In CR, knowledge is information which solves a problem. CR criticizes the justified-true-belief idea of knowledge. Knowledge cannot be justified, or shown to be certain, but this doesn't matter: if it solves a problem, it is useful. Justification is problematic because it is ultimately authoritarian. It requires that you have some base which itself cannot be justified except by an appeal to authority, such as the authority of the senses or the authority of self-evidence, or suchlike. We cannot be certain of knowledge because we cannot know whether an error will be exposed in the future. This view is contrary to most people's intuitions, and for this reason the CR view is easily and commonly misunderstood.

CR accepts something as knowledge if it solves a problem and has no known criticisms. Such knowledge is currently unproblematic but may become problematic in the future if an error is found.

Critical rationalists are fallibilists: they don't look for justification; they try to find error, and they accept anything they cannot find an error in. Fallibilists, then, expose their knowledge to tough criticism. Contrary to popular opinion, they are not wishy-washy, hedging, or uncertain. They often have strong opinions.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T21:55:12.058Z · LW · GW

Note the sentence "There is no such thing as a partially universal knowledge creator." That means an entity either is a UKC or it has no ability, or approximately zero ability, to create knowledge. Dogs are in the latter bucket.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:45:46.130Z · LW · GW

My intent was to summarise the CR view on AI. I've provided links so you can read more.

EDIT: BTW I disagree that I have made "a bunch of assertions". I have provided arguments, for example, about induction. I suspect, also, that you think observation - or evidence - comes first and I have argued against that.

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:43:28.187Z · LW · GW

I am summarizing a view shared by other Critical Rationalists, including Deutsch. Do you think they are confused too?

Comment by Fallibilist_duplicate0.16882559340231862 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:30:57.088Z · LW · GW

Have added in some sub-headings - if that helps.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T22:56:40.704Z · LW · GW

I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.

This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Search, for example, for what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.

First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don't, then you can't predict anything about the future (because under the hypothetical new laws of physics, anything could happen).

Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.

Specifically, Bayes Theorem is not about "goodness" of an idea; it is about mathematical probability. Unlike "goodness", probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with "goodness" of the idea "this is the first barrel" or "this is the second barrel".

Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events - which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
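
For what it's worth, the arithmetic in the barrel example is not in dispute. A minimal sketch of it (assuming, for simplicity, draws with replacement and an equal prior probability for each barrel):

```python
def likelihood(p_white: float, whites: int, blacks: int) -> float:
    """Probability of the observed draws given a barrel's white-ball fraction."""
    return p_white ** whites * (1 - p_white) ** blacks

prior = 0.5                                # equal chance of either barrel
l1 = likelihood(0.9, whites=4, blacks=1)   # barrel 1: 90 white, 10 black
l2 = likelihood(0.1, whites=4, blacks=1)   # barrel 2: 10 white, 90 black

p_barrel1 = prior * l1 / (prior * l1 + prior * l2)
print(f"P(barrel 1 | 4 white, 1 black) = {p_barrel1:.4f}")  # ~0.9986
```

The disagreement is not over calculations like this, which concern events; it is over applying the same machinery to theories.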

Why do you think that some argument which crosses your mind hasn't already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn't been fully fleshed out?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-02T11:55:45.420Z · LW · GW

Ronald Reagan was a fan of Ayn Rand. He won the Cold War, so what is Lumifer talking about when he says Rand had no influence? He's ignorant of history -- woefully ignorant if he thinks that the Soviet Union "lived (relatively) happily". He hates Trump too. Incidentally, Yudkowsky lost a chunk of money betting Trump would lose. That's what happens with bad philosophy.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T22:40:40.987Z · LW · GW

You are trying to reject a philosophy based on edge cases without trying to understand the big problems the philosophy is trying to solve.

Let's give some context to the stair-falling scenario. Consider that the parent is a TCS parent, not a normie parent. This parent has in fact heard the stair-falling scenario many times. It is often the first thing other people bring up when TCS is discussed.

Given the TCS parent has in fact thought about stair falling way more than a normie parent, how do you think the TCS parent has set up their home? Is it going to be a home where young children are exposed to terrible injury from things they do not yet have knowledge about?

Given also that the TCS parent will give lots of help to a child curious about stairs, how long before that child masters stairs? And given that the child is being given a lot of help in many other things as well, and not having their rationality thwarted, what do you think things are like in that home generally?

The typical answer will be that the child is "spoilt". The TCS parent will have heard the "spoilt" argument many times. They know the term "spoilt" is used to denigrate children and that the ideas underlying it are nasty. So now that we have got "spoilt" out of the way, what do you think things are like?

Ok, you say, but what if the child is outside near the edge of a busy road or something and wants to run across it? Do you not think the TCS parent has also heard this scenario over and over? Do you think you're the first one ever to have mentioned it? The TCS parent is well aware of busy road scenarios.

Instead of trying to catch TCS advocates out by bringing up something that has been repeatedly discussed, why don't you look at the core problems the philosophy speaks to and address those? Those problems need urgent attention.

EDIT: I should have said also that the stair-falling scenario and other similar scenarios are just excuses for people not to think about TCS. They don't want to think about the real problems children face. They want to continue to be irrational towards their children and hurt them.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T08:16:09.516Z · LW · GW

Huh, you're someone who would get the name of ARR [1] wrong? I didn't expect that.

It surprised me too. I think it was just a blooper, but I've done it twice now. So hmm. You didn't pick me up on it the first time.

You're giving away significant identifying information, FYI.

I'm aware of that.

Why are you hiding your identity from me, btw?

I expect you already know who I am. I'll take this over to FI forum.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:48:25.982Z · LW · GW

Deutsch invented Taking Children Seriously and Autonomous Relationships. That was some decades ago. He spent years in discussion groups trying to persuade people. His status did not help at all. Where are TCS and AR today? They are still only understood by a tiny minority. If not for curi, they might be dead.

Deutsch wrote "The Fabric of Reality" and "The Beginning of Infinity". FoR was from 1997 and BoI was from 2011. These books have ideas that ought to change the world, but what has happened since they were published? Some people's lives, such as curi's, were changed dramatically, but only a tiny minority. Deutsch's status has not helped the ideas in these books gain acceptance.

EDIT: That should be Autonomy Respecting Relationships (ARR).

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:24:56.519Z · LW · GW

Well, this comes back to the problem of LW Paths Forward. curi has made himself publicly available for discussion, by anyone. Yudkowsky not so much. So what to do?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:06:56.055Z · LW · GW

"Not getting shunned" is not quite the same thing as attempting "persuasion via attaining social status".

David Deutsch has status. It hasn't worked for him. Worse, seeking status compromised him intellectually.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T00:05:34.108Z · LW · GW

curi has given an excellent response to this. I would like to add that I think Yudkowsky should reach out to curi. He shares curi's view about the state of the world and the urgency to fix things, but curi has a deeper understanding. With curi, Yudkowsky would not be the smartest person in the room, and that would be valuable for his intellectual development.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T23:24:29.407Z · LW · GW

FYI that's what "abduction" means – whatever is needed to fill in the gaps that induction and deduction don't cover.

Yes, I'm familiar with it. The concept comes from the philosopher Charles Sanders Peirce in the 19th century.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T23:02:27.832Z · LW · GW

Deduction ... is compatible with CR too.

Yes. I didn't mean to imply it isn't. The CR view of deduction is different to the norm, however. Deduction's role is commonly overrated, and it does not confer certainty. Like any thinking, it is a fallible process and involves guessing and error-correction, as per usual in CR. This is old news for you, but the inductivists here won't agree.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T22:17:25.032Z · LW · GW

Deduction isn't an epistemology (it's a component)

Yes, I was incorrect. Induction, deduction, and something else (what?) are components of the epistemology used by inductivists.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T21:23:57.513Z · LW · GW

curi is describing some ways in which the world is burning and you are worried that the quotes are "extremist". You are not concerned about the truth of what he is saying. You want ideas that fit with convention.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T20:40:31.807Z · LW · GW

Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality.

So it says nothing at all except that you should be rational when you raise children?

It says many other things as well.

In that case, no one disagrees with it, and it has nothing to teach anyone, including me. If it says anything else, it can still be an extremist ideology, and I can reject it without rejecting rationality.

Saying it is "extremist" without giving arguments that can be criticised and then rejecting it would be rejecting rationality. At present, there are no known good criticisms of TCS. If you can find some, you can reject TCS rationally. I expect that such criticisms would lead to improvement of TCS, however, rather than outright rejection. This would be similar to how CR has been improved over the years. Since there aren't any known good criticisms that would lead to rejection of TCS, it is irrational to reject it. Such an act of irrationality would have consequences, including treating your children irrationally, which approximately all parents do.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T19:53:23.694Z · LW · GW

The thinking process is Bayesian, and uses a prior.

What is the epistemological framework you used to judge the correctness of those? You don't just get to use Bayes' Theorem here without explaining the epistemological framework you used to judge the correctness of Bayes. Or the correctness of probability theory, your priors etc.

If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? ... Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.

Little problem there.

No. Critical Rationalism can be used to improve Critical Rationalism and, consistently, to refute it (though no one has done so). This has been known for decades. Induction is not a complete epistemology like that. For one thing, inductivists also need the epistemology of deduction. But they also need an epistemological framework to judge both of those. This they cannot provide.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T08:38:25.530Z · LW · GW

I meant the same thing. Induction is quite possible, and we do it all the time.

What is the thinking process you are using to judge the epistemology of induction? Does that process involve induction? If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? And if not, judging the special case of the epistemology of induction is an exception. It is an example of thinking without induction. Why is this special case an exception?

Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T08:00:22.405Z · LW · GW

Epistemology tells you how to think.

No, it doesn't. It deals with acquiring knowledge. There are other things -- like logic -- which are quite important to thinking.

Human knowledge acquisition happens by learning. It involves coming up with guesses and error-correcting those guesses via criticism in an evolutionary process. This is going on in your mind all the time, consciously and subconsciously. It is how we are able to think. And knowing how this works enables us to think better. This is epistemology. And the breakthrough in AGI will come from epistemology. At a very high level, we already know what is going on.
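
As a loose illustration -- a toy search over strings, emphatically not a model of the mind -- the evolutionary structure being described, conjecture followed by criticism and the elimination of error, looks like this:

```python
import random
import string

TARGET = "knowledge"  # stands in for "a solution to the problem"

def criticize(guess: str) -> int:
    """Count the errors criticism finds in a guess (0 = no known errors)."""
    return sum(a != b for a, b in zip(guess, TARGET))

def vary(guess: str) -> str:
    """Conjecture a variant of the current best guess."""
    i = random.randrange(len(guess))
    return guess[:i] + random.choice(string.ascii_lowercase) + guess[i + 1:]

guess = "".join(random.choice(string.ascii_lowercase) for _ in TARGET)
while criticize(guess) > 0:
    candidate = vary(guess)
    if criticize(candidate) <= criticize(guess):  # keep what survives criticism
        guess = candidate
print(guess)  # -> "knowledge"
```

Guesses are varied, and variants that fail criticism are discarded; nothing is derived from observation by induction.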

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T07:39:56.178Z · LW · GW

The question is ill-posed. Without context it's too open-ended to have any meaning.

This is just more evasion.

But let me say that I'm here not to save the world. Is that sufficient?

You know Yudkowsky also wants to save the world, right? That Less Wrong is ultimately about saving the world? If you do not want to save the world, you're in the wrong place.

I don't impute bad motives to him. I just think that he is full of himself and has... delusions about his importance and relationship to truth.

Hypothetically, suppose you came across a great man who knew he was great and honestly said so. Suppose also that great man had some true new ideas you were unfamiliar with but that contradicted many ideas you thought were important and true. In what way would your response to him be different to your response to curi?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T04:48:14.373Z · LW · GW

Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.

No. From About Less Wrong:

The best introduction to the ideas on this website is "The Sequences", a collection of posts that introduce cognitive science, philosophy, and mathematics.

"[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.

No. Among other things, I meant that I agreed that AIs will have a stage of "growing up," and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.

Taking AGI Seriously is therefore also an extremist ideology? Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality. You want to use irrationality against your children when it suits you. You become responsible for causing them massive harm. It is not extremist to try to be rational, always. It should be the norm.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T02:38:32.226Z · LW · GW

I've been here awhile. Your account is a few days old. Why are you here?

That's not an answer. That's an evasion.

Whether the world is burning or not is an interesting discussion, but I'm quite sure that better epistemology isn't going to put out the fire.

Epistemology tells you how to think. Moral philosophy tells you how to live. You cannot even fight the fire without better epistemology and better moral philosophy.

Writing voluminous amounts of text on a vanity website isn't going to do it either.

Why do you desire so much to impute bad motives to curi?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T02:02:33.150Z · LW · GW

Why are you here? What interest do you have in being Less Wrong? The world is burning and you're helping spread the fire.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T23:51:02.801Z · LW · GW

It's your responsibility to read, and keep your mouth shut if you are not sure about something.

I have read and I know what I am talking about. You, on the other hand, don't even know the basics of Popper, one of the best philosophers of the 20th century.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T19:47:01.184Z · LW · GW

There are thousands of philosophers about whom I could ask the same question.

Who are these thousands? It would be great if the world had lots of really good philosophers. It doesn't. The world is starving for good philosophers: they are very few and far between.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T18:31:55.438Z · LW · GW

But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence.

You say that seemingly unaware that what I said contradicts Less Wrong.

I have no need to learn that, or anything else, from curi.

One of the things I said was that Taking Children Seriously is important for AGI. Is this one of the truths you refer to? What do you know about TCS? TCS is very important not just for AGI but also for children in the here and now. Most people know next to nothing about it. You don't either. You in fact cannot comment on whether there is any truth to what I said about AGI. You don't know enough. And then you say you have no need to learn anything from curi. You're deceiving yourself.

And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."

You still can't even state the position correctly. Popper explained why induction is impossible and offered an alternative: critical rationalism. He did not "disprove" induction. Similarly, he did not disprove fairies. Popper had a lot to say about the idea of proof - are you aware of any of it?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T08:48:27.641Z · LW · GW

Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to general project of make sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)

This is one of the things you are very wrong about. The problem of evil is a problem we face already; robots will not make it worse. Their culture will be our culture initially, and they will have to learn just as we do: through guessing and error-correction via criticism. Human beings are already universal knowledge creation engines. You are either universal or you are not. Robots cannot go a level higher because there is no level higher than being fully universal. Robots, furthermore, will need to be parented. The ideas from Taking Children Seriously are important here. But approximately all AGI people are completely ignorant of them.

I have just given a really quick summary of some of the points that curi and others such as David Deutsch have written much about. Are you going to bother to find out more? It's all out there. It's accessible. You need to understand this stuff. Otherwise what you are in effect doing is condemning AGIs to live under the boot of totalitarianism. And you might stop making your children's lives so miserable too by learning these ideas.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T07:19:30.583Z · LW · GW

So what have this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides?

If you want to be a serious thinker and make your criticisms better, you really need to improve your research skills. That comment is lazy, wrong, and hostile. Curi invented Paths Forward. He invented Yes/No philosophy, which is an improvement on Popper's Critical Preferences. He founded Fallible Ideas. He kept Taking Children Seriously alive. He has written millions of words on philosophy and added a lot of clarity to ideas by Popper, Rand, Deutsch, Godwin, and so on. He used his philosophy skills to become a world-class gamer ...

Given that he is Oh So Very Great, surely he must left his mark on the world already. Where is that mark?

Again, you show your ignorance. Are you aware of the battles great ideas and great people often face? Think of the ignorance and hostility that is directed at Karl Popper and Ayn Rand. Think of the silence that met Hugh Everett. These things are common. To quote curi:

It’s hard to criticize your intellectual betters, but easy to misunderstand and consequently vilify them. More generally, people tend to be hostile to outliers and sympathize with more conventional and conformist stuff – even though most great new ideas, and great men, are outliers.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T05:22:59.524Z · LW · GW

I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you?

Why did you go by feelings on this? You could have done some research and found out some things. Critical-Rationalism, Objectivism, Taking-Children-Seriously, Paths-Forward, Yes/No Philosophy, Autonomous Relationships, and other ideas are not things you can hold at arm's length if you take them seriously. These ideas change your life if you take them seriously, as curi has done. He lives and breathes those ideas and as a result he is living a very unconventional life. He is an outlier right now. It's not a good situation for him to be in because he lacks peers. So saying curi has not made the ideas he is talking about "truly a part of [him]" is very ignorant.

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T02:56:22.438Z · LW · GW

What if you are wrong? What then?

Comment by Fallibilist_duplicate0.16882559340231862 on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T01:55:11.952Z · LW · GW

Curi knows things that you don’t. He knows that LW is wrong about some very important things and is trying to correct that. These things LW is wrong about are preventing you making progress. And furthermore, LW does not have effective means for error correction, as curi has tried to explain, and that in itself is causing problems.

Curi is not alone in thinking LW is majorly wrong in some important areas. Others do too, including David Deutsch, with whom curi has had many, many discussions. I do too, though no doubt there are people here who will say I am just a sock-puppet of curi's.

curi is not some cheap salesman trying to flog ideas. He is trying to save the world. He is trying to do that by getting people to think better. He has spent years thinking about this problem. He has written tens of thousands of posts in many forums, sought out the best people to have discussions with, and addresses all criticisms. He has made himself way more open than anyone to receiving criticism. When millions of people think better, big problems like AGI will be solved faster.

curi right now is the world's leading expert on epistemology. He got that way not by seeking status and prestige or publications in academic journals but by relentlessly pursuing the truth. All the ideas he holds to be true he has subjected to a furnace of criticism, and he has changed his ideas when they could not withstand criticism. And if you can show to very high standards why CR is wrong, curi will concede and change his ideas again.

You have no idea about curi’s intellectual history and what he is capable of. He is by far the best thinker I have ever encountered. He has revealed here only a very tiny fraction of what he knows.

Take him seriously. curi is a resource LW needs.