Open thread, October 2 - October 8, 2017

post by root · 2017-10-03T10:46:41.517Z · LW · GW · Legacy · 52 comments

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top-level comments on this article" and ".

52 comments

Comments sorted by top scores.

comment by rememberingGrognor · 2017-10-05T04:48:10.955Z · LW(p) · GW(p)

I don't have enough karma yet to post this as an original post, but for those who remember Grognor:

1. https://grognor.github.io/ — a memorial site where people can submit tributes for publication and otherwise 'remember' him.

2. https://www.youcaring.com/floridaexoticbirdsanctuary-955648 — a fundraiser in his honor.

He passed away sometime in June. I know that he was a valued member of LessWrong and cared a lot about the communities he inhabited.

Replies from: turchin, hg00
comment by turchin · 2017-10-05T10:53:25.196Z · LW(p) · GW(p)

Any attempts at posthumous digital immortality? That is, collecting all the data about the person in the hope that a future AI will create an exact model of him.

Replies from: rememberingGrognor
comment by rememberingGrognor · 2017-10-05T18:19:17.506Z · LW(p) · GW(p)

http://grognor.stacky.net/index.php?title=Main_Page

Grognor did a good job collecting his own data. I don't have access to his alt twitter account, as it is a private account. But maybe someone else who does can help if the demand arises.

comment by hg00 · 2017-10-18T01:35:28.803Z · LW(p) · GW(p)

This is sad.

Some of his old tweets are pretty dark:

I haven't talked to anyone face to face since 2015

https://twitter.com/Grognor/status/868640995856068609

I just want to remind everyone that this thread exists.

Replies from: Elo
comment by Elo · 2017-10-18T01:51:22.556Z · LW(p) · GW(p)

I say this as often as I can. Reach out to me and say hello.

comment by [deleted] · 2017-10-04T21:36:26.373Z · LW(p) · GW(p)

Latest results on KIC 8462852 / Boyajian's Star:

After comparing data from Spitzer and Swift (an infrared and an ultraviolet telescope, respectively): whatever the heck the three-dimensional distribution of the material causing the brightness dips is, the long-term secular dimming of the star is being caused by dust. Over the course of a year of observations the star dimmed less in the infrared than in the ultraviolet, with the light extinction depending on wavelength in a way that screams dust of a size larger than primordial interstellar dust (and thus likely in the star system rather than somewhere between us), but still dust.

Still a weird situation. There cannot be a very large amount of dust in total since there is no infrared excess, so we must be seeing small amounts of it pass directly between the star and us.

The dipping is also semiperiodic, to the point that a complex of dips beginning in May was predicted months in advance.

Replies from: turchin, MrMind
comment by turchin · 2017-10-06T10:40:11.949Z · LW(p) · GW(p)

I read in a Russian blog that someone calculated the shape of objects able to produce such dips. They turned out to be strips about 10 million kilometres long orbiting the star. I think that is very similar to very large comet tails.

comment by MrMind · 2017-10-06T10:11:48.911Z · LW(p) · GW(p)

That's interesting... is the dust size still consistent with artificial objects?

Replies from: Manfred
comment by Manfred · 2017-10-06T20:04:35.346Z · LW(p) · GW(p)

The dust probably is just dust - scattering of blue light more than red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off of particles smaller than a few times the wavelength of the light - so if visible light is being scattered less than UV, we know that lots of the particles are of size smaller than ~2 um. This is about the size of a small bacterium, so dust with interesting structure isn't totally out of the question, but still... it's probably just dust.
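To make the size argument concrete, here is a minimal sketch of the idealized Rayleigh scaling (my own illustrative numbers; real dust extinction in the Mie regime is flatter, but the qualitative point is the same):

    # Why wavelength-dependent dimming implies small particles: in the
    # Rayleigh regime (particle much smaller than the wavelength), scattering
    # strength scales as wavelength^-4, so UV is extinguished far more than IR.
    def rayleigh_strength(wavelength_um):
        """Scattering strength relative to visible light at 0.55 um."""
        return (0.55 / wavelength_um) ** 4

    for band, lam in [("UV", 0.2), ("visible", 0.55), ("IR", 2.0)]:
        print(f"{band:>7} ({lam} um): {rayleigh_strength(lam):.2f}x visible")
    # Large "grey" grains (much bigger than the wavelength) would dim all
    # bands equally - the opposite of what Spitzer and Swift observed.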

comment by a gently pricked vein (strangepoop) · 2017-10-03T22:08:33.539Z · LW(p) · GW(p)

Can someone help me out with Paul Christiano's email/contact info? Couldn't find it anywhere online.

I might be able to discuss possibilities for implementing his Impact Certificate ideas with some very capable people here in India.

Replies from: Manfred
comment by Manfred · 2017-10-04T18:27:20.199Z · LW(p) · GW(p)

I don't have it, and maybe you've already been contacted, but you could try contacting him on social sites like this one (user paulfchristiano) and Medium, etc. Typical internet stalking skillset.

comment by MaryCh · 2017-10-15T11:09:19.470Z · LW(p) · GW(p)

Warning: please don't read if you are triggered by a discussion of post-mortem analysis (might come up in the comments).

I want to have my body donated to science, well, afterwards, and to convince my twin sister to organize the same thing; there's probably a dearth of comparative post-mortem studies of adult (aged) human twins. However, my husband said he wouldn't do it. I don't want to argue with him about something we both hope won't be an issue for many years to come, so, in pure scientific interest:

what do you think would be interesting to study in such a setting?

Sorry if I offended you, it wasn't my intention. Just can't ask this on facebook, my Mom would eat me alive.

Replies from: gwern, IlyaShpitser
comment by gwern · 2017-10-17T20:21:24.187Z · LW(p) · GW(p)

You could look into joining a twin registry. Discordant-twin designs are fairly powerful, but still need n > 50 or something like that to be worth doing. Plus if you keep your own novel set of data, people will be less interested in analyzing it compared to a twin registry using a familiar set of questionnaires/scales/measures. (One of the reasons you see so much from twin registries or the UK Biobank: consistent measurements.) It would've been best if you two had been enrolled as kids, but perhaps better late than never.
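For intuition on why small samples are the bottleneck, here is a rough power calculation for a paired (discordant-twin) design; the effect size and alpha are illustrative assumptions, not numbers from this thread:

    # Approximate power of a discordant-twin design, modeled as a one-sample
    # t-test on within-pair differences. Effect size and alpha are my own
    # illustrative assumptions.
    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()
    for n_pairs in [5, 20, 50, 200]:
        power = analysis.solve_power(effect_size=0.4, nobs=n_pairs, alpha=0.05)
        print(f"{n_pairs:>4} twin pairs -> power {power:.2f}")
    # One pair is a case study with essentially no statistical power;
    # around n = 50, a moderate within-pair effect becomes detectable.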

comment by IlyaShpitser · 2017-10-15T23:09:06.384Z · LW(p) · GW(p)

Consider creating detailed records of lifestyle differences between you and your sister. Perhaps keep a diary (in effect creating a longitudinal dataset for folks to look at later).

There is an enormous interest in disentangling lifestyle choices from genetics for all sorts of health and nutrition questions.


Thank you for considering this, I think this could be very valuable.

Replies from: ChristianKl, MaryCh
comment by ChristianKl · 2017-10-17T15:01:16.660Z · LW(p) · GW(p)

Thank you for considering this, I think this could be very valuable.

Do you think that having one pair of twins is enough to get valuable data from it?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-10-17T16:10:57.242Z · LW(p) · GW(p)

In the hierarchy of evidence, this would be a "case study." So the value is not as high as a proper study, but non-zero.

comment by MaryCh · 2017-10-16T06:18:37.849Z · LW(p) · GW(p)

I think she will be open to it. Here's hoping. People usually don't get how having a twin makes you feel you're living in an experiment - same clothes or different clothes (but people say different things to you when they see you in them - "why?"), same favourite poems and different ones (so weird, really). I always thought it a shame to have so much material go to waste.

comment by mako yass (MakoYass) · 2017-10-15T00:20:37.211Z · LW(p) · GW(p)

I have a patent law question.

Summary/main question: Should patents ever be granted for a common, unoriginal idea, before any original work has been done, to protect the claimant's future work in the area of the claim? If we are not allowed to grant patents like that, what sort of schemes do we favor for bringing the incentives to make progress in competitive arenas of research closer to the societal value of the expected findings?

Companies often seem to need a promise that if they can make an idea work and find an audience, all of the unprotected advancements they must make (market research, product development, and building awareness in the audience (marketing)) won't just be stolen by some competitor the moment people start buying the thing.

It seems like a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation they can't protect it, and you'll find it on AliExpress for like $3.50. They aren't compensated in proportion to the value they produced. If it can't be produced for $3.50, it will be produced by their largest, most complacent competitors to safeguard their stranglehold on the market. The incumbents will go completely unpunished for having sat on their hands long enough to let these new innovators threaten them; the idea will threaten them, then it will serve them, and it will serve as an example to anyone who tries to threaten them in the future, and innovation will generally be discouraged.

The expected rewards for solving a problem that takes a long time to solve are generally much lower than the societal value of the solution, because there's a high chance that another team will solve it first, and most of the resources invested in development will have been in vain. If a working group had exclusive rights to the solutions to some problem, whatever they turn out to be, the amount they ought to invest would be much closer to the solutions' actual value.
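A toy expected-value calculation makes the gap concrete (every number below is invented for illustration):

    # Toy model of the incentive gap described above.
    # Every number is invented for illustration.
    societal_value = 100_000_000  # value of the solution to society, in $
    capture_rate = 0.05           # fraction of that value the winner captures
    p_first = 0.25                # chance this team solves it before rivals do
    dev_cost = 3_000_000          # cost of attempting the research, in $

    expected_reward = societal_value * capture_rate * p_first
    print(f"expected reward ${expected_reward:,.0f} vs cost ${dev_cost:,.0f}")
    # 1.25M < 3M: the attempt is unprofitable even though the solution's
    # societal value dwarfs the cost. Exclusive rights to the solutions
    # would push p_first (and the capture rate) toward 1, closing the gap.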

It's a way of limiting the inefficiencies of competition. It sort of reminds me of Bitcoin-NG: if I've understood it correctly, that protocol periodically elects a single working group to process the bulk of the transactions, to prevent costly duplication of effort.

So, to reiterate, should patents ever be granted before any original work has been done, to protect the claimant's future work in the area of the claim, and if not, what should we do instead, or what do we do instead, to bring the incentive to make progress in competitive arenas of research closer to the actual societal value of the expected findings?

Replies from: Dagon, ChristianKl, satt, Lumifer
comment by Dagon · 2017-10-16T15:01:13.137Z · LW(p) · GW(p)

That's not a patent law question, that's a social theory question using a bizarre form of patents as the mechanism.

And my answer is "absolutely not". I have no interest in preventing people from working on what they want, nor in protecting someone's unproven idea when there's no evidence that they're the right person to solve it or that there will be any success. Ideas are cheap; working systems are valuable.

Also, I'll take "the inefficiencies of competition" over the inefficiencies of monopoly any day, especially in public pursuits where governments have any say.

comment by ChristianKl · 2017-10-15T21:17:22.880Z · LW(p) · GW(p)

If you want to understand how companies can have incentives to produce new products, I think it's worth reading startup literature like Eric Ries's The Lean Startup.

It seems like a common situation: someone puts a lot of money into popularizing some innovation, but because it's an obvious innovation they can't protect it, and you'll find it on AliExpress for like $3.50.

A small startup is unlikely to successfully run a patent battle in China. Having a patent won't protect the company from getting copied.

Let's look at an example. In the Quantified Self field, it would be nice to have a toilet that regularly does urine analysis and gives me data. In 1988 someone filed a patent for a toilet with that directly built in. That doesn't mean any such product hit the market. Did the original company produce a product for the European or US market? No, there's no toilet that you can buy from them. On the other hand, if another person had tried to put something on the market, they could have been sued. The result is that no company produced a product that can be easily bought.

Most startups fail, and when startups that filed patents fail, the patents are often bought by other parties who then use them to sue and do patent trolling.

China provides interesting opportunities. It's cheaper for someone to ship an item from China to me via AliExpress than it is to ship the same item to me from an Amazon Fulfillment Center. I can buy a $0.70 item with free shipping from AliExpress, while I can't buy that from Amazon.

It's cheap to run a Kickstarter campaign and let a Chinese company produce your product. Doing this usually means that employees from the factory will pass your design around, and your product will get sold in an unbranded version on AliExpress.

This means that the dream Kickstarter promised, where anybody can produce their idea and bring it to market, comes with the side problem of copycat products, but that's still much better than it was in the past. It's also worth noting that you could in theory build your product in the US and not have factory employees pass the design around, but the Chinese factories are so efficient that Kickstarter inventors still let a Chinese company produce their products.

That's a bit sad, but 10 years ago the same person had no way to bring their product to market at all.

comment by satt · 2017-10-15T22:01:54.412Z · LW(p) · GW(p)

Upvoted for asking an interesting question, but my answer would be "probably not". Whether patents are a good idea even as-is is debatable — see Michele Boldrin and David Levine's Against Intellectual Monopoly — and I expect beefing them up to be bad on the margin.

I'm unclear on whether the proposed super-patents would

  1. be the same as normal patents except fileable before the work of sketching a plausible design has been done, or

  2. would be even more powerful, by also allowing the filer to monopolize a market in which they carry out e.g. "market research, product development and building awareness", even if that involves no original design work,

but in any case the potential downsides hit me as more obvious than the potential upsides.

Item 1 would likely lead to more patents being filed "just in case", even without a real intention of bringing a product to market. This would then discourage other profit-seeking people/organizations from investigating the product area, just as existing patents do.

Item 2 seems to take us beyond the realm of patents and intellectual work; it's about compensating a seller for expenses which produce positive spillovers for other sellers. As far as I know, that's not usually considered a serious enough issue to warrant state intervention, like granting a seller a monopoly. I suspect that when The Coca-Cola Company runs an advert across the US, Wal-Mart sells more of its own knockoff colas, but the US government doesn't subsidize Coca-Cola or its advertising on those grounds!

comment by Lumifer · 2017-10-15T02:53:18.188Z · LW(p) · GW(p)

No.

comment by Erfeyah · 2017-10-03T20:07:22.249Z · LW(p) · GW(p)

A few days ago I asked for LW articles regarding the Chinese Room argument and got into a conversation with the user hairyfigment. As I am certainly not convinced of the validity of the Chinese Room argument myself, I tried to understand the Chinese gym extension of the argument and whether/why it matters to the original point. In particular, I pointed to the relevance of the brain not evidently being a digital computer. I went back to the 2014 book The Future of the Brain: Essays by the World's Leading Neuroscientists, which is a recent exposition of our current (quite poor) understanding of the brain, and in particular to the chapter The Computational Brain by Gary Marcus. Here are some quotes that I believe are relevant. Unfortunately I cannot provide the full chapter for copyright reasons, but I do recommend the book.

[...] we still haven't even resolved the basic question of whether brains are analog, digital, or (as I suspect but certainly can't prove) a hybrid of the two.

and

Going hand in hand with the neural network community's odd presumption of initial randomness was a needless commitment to extreme simplicity, exemplified by models that almost invariably included a single neuronal type, abstracted from the details of biology. We now know that there are hundreds of different kinds of neurons, and the exact details—of where synapses are placed, of what kinds of neurons are interconnected where—make an enormous difference. Just in the retina (itself a part of the brain), there are roughly twenty different types of ganglion cells; there, the idea that you could adequately capture what's going on with a single kind of neuron is absurd. Across the brain as a whole, there are hundreds of different types of neurons, perhaps more than a thousand, and it is doubtful that evolution would sustain such diversity if each type of neuron were essentially doing the same type of thing.

Is the non or partially digital nature of the brain relevant to certain arguments based on neural networks presented in the sequences?

Does it open the possibility that Searle's argument on syntactic symbol manipulation might be relevant?

Apart from the digital/analog point, what about the neural complexity and variety? What, if anything, does it show about the current state of AI research?

Replies from: Manfred
comment by Manfred · 2017-10-04T01:09:04.014Z · LW(p) · GW(p)

Ah, you mean to ask if the brain is special in a way that evades our ability to construct an analogy of the Chinese Room argument for it? E.g. "our neurons don't individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry, therefore there is nothing in my body that understands English."

I think such an argument is a totally valid imitation. It doesn't necessarily bear on the Chinese Room itself, which is a more artificial case, but it certainly applies to AI in general.

Replies from: entirelyuseless, Erfeyah
comment by entirelyuseless · 2017-10-04T14:28:48.762Z · LW(p) · GW(p)

"our neurons don't indiviually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry"

The question is what the word "just" means in that sentence. Ordinarily it means to limit yourself to what is said there. The implication is that your behavior is explained by those simple laws, and not by anything else. But as I pointed out recently, having one explanation does not exclude others. So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals, or in other ways. In other words, the argument is false because the word "just" here implies something false.

Replies from: Manfred, Dagon
comment by Manfred · 2017-10-04T18:32:45.213Z · LW(p) · GW(p)

Yeah, whenever you see a modifier like "just" or "merely" in a philosophical argument, that word is probably doing a lot of undeserved work.

comment by Dagon · 2017-10-04T17:44:23.075Z · LW(p) · GW(p)

The implication is that your behavior is explained by those simple laws

I don't think the laws of physics (chemistry) are actually simple in the case of large systems. Note that this understanding applies to the Chinese Room idea too - the contents of the rules/slips of paper are not "simple" by any means.

But I'm more concerned about a confusion in interpreting

and not by anything else

Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors (instead of trying to understand the lower-level physics/chemistry)? Or are you saying that the physics is insufficient and you must supplement it with something else in order to identify all causes of behavior?

I agree with the first, and disagree with the second.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-10-05T01:41:32.592Z · LW(p) · GW(p)

Are you merely claiming that there are other models which can alternatively be used to explain some or all of the behaviors

There's that word, "merely," there, like your other word "just," which makes me say no to this. You could describe the situation as "there are many models," but you are likely to be misled by this. In particular, you will likely be misled into thinking there is a highly accurate model, which is that someone did what he did because of chemicals, and a vague and inaccurate model, which says for example that someone went to the store to buy milk. So rather than talking about models, it is better simply to say that we are talking about two facts about the world:

Fact 1: the person went to the store because of the behavior of chemicals etc.

Fact 2: the person went to the store to buy milk.

These are not "merely" two different models: they are two different facts about the world.

Or are you saying that the physics is insufficient

I said in my comment, "So your behavior can be explained by those simple laws, and at the same time by the fact that you were seeking certain goals." If the first were insufficient, it would not be an explanation. Both are sufficient, and both are correct.

you must supplement it with something else in order to identify all causes of behavior?

Yes. If by "cause" we mean "explanation," as is normally meant, then you have to mention both to mention all causes, i.e. all explanations, since both are explanations and both are causes.

Replies from: Dagon
comment by Dagon · 2017-10-05T02:52:14.347Z · LW(p) · GW(p)

Fact 1: the person went to the store because of the behavior of chemicals etc. Fact 2: the person went to the store to buy milk. These are not "merely" two different models: they are two different facts about the world.

Not independent facts, surely. The person went to the store to buy milk because of the behavior of chemicals, right? Even longer chains ... because they were thirsty and they like milk because it reminds them of childhood because their parents thought it was important for bone growth because ... because ... end eventually with the quantum configuration of the universe at some point. And you can correctly shortcut to there at any point in between.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-10-05T14:22:55.977Z · LW(p) · GW(p)

I said they were two different facts, not two independent facts. So dependent or not (and this question itself is also more confused and complicated than you realize), if you do not mention them both, you are not mentioning everything that is there.

Replies from: Dagon
comment by Dagon · 2017-10-05T16:59:47.995Z · LW(p) · GW(p)

if you do not mention them both, you are not mentioning everything that is there.

Hmm. I don't think "mention everything that is there" is on my list of goals for such discussions. I was thinking more along the lines of "mention the minimum necessary". I'm still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior, even while acknowledging that there are higher-level models which are way easier to understand.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-10-06T02:32:16.751Z · LW(p) · GW(p)

I'm still unclear whether you agree that physics is sufficient to describe all events in the universe including human behavior

It is sufficient to describe them in the way that it does describe them, which certainly includes (among other things) all physical motions. But it is obvious that physics does not make statements like "the person went to the store to buy milk," even though that is a true fact about the world, and in that way it does not describe everything.

Replies from: Dagon
comment by Dagon · 2017-10-06T14:02:36.946Z · LW(p) · GW(p)

Ok, one more attempt. Which part of "the person went to the store to buy milk" is not described by the quantum configuration of the local space? The person certainly is. Movement toward and in the store certainly is. The neural impulses that correspond to desire for milk very probably are.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-10-07T01:16:32.485Z · LW(p) · GW(p)

Which part of "the person went to the store to buy milk" is not described by the quantum configuration of the local space?

All of it.

The person certainly is.

The person certainly is not; this is why you have arguments about whether a fetus is a person. There would be no such arguments if the question were settled by physics.

Movement toward and in the store certainly is.

Movement is, but stores are not; physics has nothing to say about stores.

The neural impulses that correspond to desire for milk very probably are.

Indeed, physics contains neural impulses that correspond to the desire for milk, but it does not contain desire, nor does it contain milk.

comment by Erfeyah · 2017-10-05T19:09:39.240Z · LW(p) · GW(p)

Hmm.. I do not think that is what I mean, no. I lean towards agreeing with Searle's conclusion but I am examining my thought process for errors.

Searle's argument is not that consciousness is not created in the brain. It is that it is not based on syntactic symbol manipulation in the way a computer is and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates etc.) as the AI community thought (and thinks). He does not deny that we might discover the architecture of the brain in the future. All he does is demonstrate through analogy how syntactic operations work.

In the Chinese gym rebuttal the issue is not really addressed. There is no denial by Searle that the brain is a system with subcomponents, through whose structure consciousness emerges. That is a different discussion. He is arguing that the system must be doing something different from, or in addition to, syntactic symbol manipulation.

Since the neuroscience does not support the digital information processing view, where is the certainty of the community coming from? Am I missing something fundamental here?

Replies from: Manfred, MrMind
comment by Manfred · 2017-10-05T21:58:26.090Z · LW(p) · GW(p)

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It's perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if it matched the behavior of a flesh-and-blood human almost perfectly, and could talk to you via a text channel, outputting things like "well, I sure feel conscious"?

The reason LWers are so confident that this simulation is conscious is that we think of concepts like "consciousness," to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It's just like how the concept of "apples" exists because apples exist, and when I correctly think I see an apple, it's because there's an apple. Talking about "consciousness" is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label "consciousness" are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism; it's a misunderstanding of what we have access to when we encounter consciousness.

Replies from: Erfeyah
comment by Erfeyah · 2017-10-06T19:55:25.547Z · LW(p) · GW(p)

I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated.

The working of a computer is not unimaginably complicated. Its basis is quite straightforward, really. As I said in my answer to MrMind below: "As Searle points out, the meaning of zeros, ones, logic gates etc. is observer-relative in the same way that money (not the paper, the meaning) is observer-relative and thus ontologically subjective. The electrons are indeed ontologically objective, but that is not true of the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12)."

Talking about "consciousness" is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label "consciousness" are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogies in the simulation.

In our debate I am holding the position that there cannot be a simulation of consciousness using the current architectural basis of a computer. Searle has provided a logical argument. In my quotes above I show that the state of neuroscience does not point towards a purely digital brain. What is your evidence?

comment by MrMind · 2017-10-06T10:21:49.288Z · LW(p) · GW(p)

It is that it is not based on syntactic symbol manipulation in the way a computer is and for that reason it is not going to be simulated by a computer with our current architecture (binary, logic gates etc.) as the AI community thought (and thinks).

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same. Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

Replies from: Erfeyah
comment by Erfeyah · 2017-10-06T19:44:18.052Z · LW(p) · GW(p)

Well, that would run counter to the Church-Turing thesis. Either the brain is capable of doing things that would require infinite resources for a computer to perform, or the power of the brain and the computer is the same.

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis? If that is the case, I would refer you to this article's section Misunderstandings of the Thesis. If I have understood wrong, I would be grateful if you could offer some more details on your point.

Indeed, not even computers are based on symbolic manipulation: at the deepest level, it's all electrons flowing back and forth.

We can demonstrate the erroneous logic of this statement by saying something like: "Indeed, not even language is based on symbolic manipulation: at the deepest level, it's all sound waves pushing air particles back and forth."

As Searle points out, the meaning of zeros, ones, logic gates etc. is observer-relative in the same way that money (not the paper, the meaning) is observer-relative and thus ontologically subjective. The electrons are indeed ontologically objective, but that is not true of the syntactic structures of which they are elements in a computer. Watch this video of Searle explaining this (from 9:12).

Replies from: MrMind
comment by MrMind · 2017-10-09T12:16:14.467Z · LW(p) · GW(p)

Am I right to think that this statement is based on the assumption that the brain (and all computation machines) have been proven to have Turing machine equivalents based on the Church-Turing thesis?

No, otherwise we would have the certainty that the brain is Turing-equivalent, and I wouldn't have prefaced with "Either the brain is capable of doing things that would require infinite resources for a computer to perform". We do not have proof that everything not calculable by a Turing machine requires infinite resources, otherwise Church-Turing would be a theorem and not a thesis, but we have strong hints: every hypercomputation model is based on accessing some infinite resource (whether it's infinite time or infinite energy or infinite precision). Plus recently we had this theorem: any function on the naturals is computable by some machine in some non-standard time.
So either the brain can compute things that a computer would take infinite resources to do, or the brain is at most as powerful as a Turing machine.

As per the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

Replies from: Erfeyah
comment by Erfeyah · 2017-10-17T20:25:08.953Z · LW(p) · GW(p)

As per the electron thing, there's a level where there is symbolic manipulation and a level where there isn't. I don't understand why it's symbolic manipulation for electronics but not for neurons. At the right abstraction level, neurons too manipulate symbols.

It is not the symbols that are the problem. It is that the semantic content of the symbols used in a digital computer is observer-relative. The circuits depend on someone understanding their meaning: the meaning provided by the human engineer who, since he possesses the semantic content, understands the method of implementation and the calculation results at each level of abstraction. This is clearly not the case in the human brain, in which the symbols arise in a manner that allows for intrinsic semantic content.

comment by Osho · 2017-10-03T18:48:30.469Z · LW(p) · GW(p)

Is anyone interested in starting a small team (2-3 people) to work on this Kaggle dataset?

https://www.kaggle.com/c/porto-seguro-safe-driver-prediction

comment by JohnGreer · 2017-10-03T13:31:40.472Z · LW(p) · GW(p)

Has Eliezer written anything outlining why he's working on AI rather than directly on life extension? I could guess (we need AI to speed up research, we need to make sure we don't die from AI first, etc.) but I'd prefer to read it explicitly. Posts not from Eliezer but answering the same question would also be welcome.

comment by alanforr_duplicate0.6027038989367575 · 2017-10-29T15:17:37.144Z · LW(p) · GW(p)

This article has substantive advice on how to be open-minded:

http://fallibleideas.com/paths-forward

Replies from: Elo
comment by Elo · 2017-10-29T22:15:31.494Z · LW(p) · GW(p)

Good article. Seems like a friend.

comment by alanforr_duplicate0.6027038989367575 · 2017-10-16T18:35:23.686Z · LW(p) · GW(p)

This link proposes a new improvement on epistemology:

http://fallibleideas.com/essays/yes-no-argument

Replies from: ChristianKl
comment by ChristianKl · 2017-10-17T14:57:18.819Z · LW(p) · GW(p)

Newton's theory of gravity has flaws but it's still a good idea and can be used in plenty of cases.

The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.

There's nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to get valuable.

All solutions are equal because they all solve the problem.

If my problem is that I want to have a successful job interview, then I don't have a binary outcome. I want to get the job earning as much money as possible and modeling the salary with a scalar makes much more sense than having binary judgments.

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

Replies from: alanforr_duplicate0.6027038989367575, curi
comment by alanforr_duplicate0.6027038989367575 · 2017-10-21T21:49:33.272Z · LW(p) · GW(p)

Newton's theory of gravity has flaws but it's still a good idea and can be used in plenty of cases.

No, it can't. It can only be used in situations where it happens to agree with reality. That's not the same as the theory being correct.

The amount of goodness approach has no objective way to determine the sizes of the amounts, so it leads to subjective bias instead of objective knowledge, and it creates unresolvable disagreements between people.

There's nothing bad about two people with different priors coming to different conclusions. It creates an intellectual climate where a lot of different ideas get explored. Most breakthrough ideas have plenty of flaws at their birth and need to go through a lot of refinement to get valuable.

You have misunderstood the problem. The problem is not that people come to different conclusions. Rather, the problem is that people are completely arbitrarily assigning scores to ideas. Since there is no objective reality underlying their scoring, there is no rational way for any two people to come to agreement on scores.

All solutions are equal because they all solve the problem.

If my problem is that I want to have a successful job interview, then I don't have a binary outcome. I want to get the job earning as much money as possible and modeling the salary with a scalar makes much more sense than having binary judgments.

Making a judgement about whether to take a job is a yes-or-no judgement. Making a decision about whether to say X during a job interview is a yes-or-no judgement. That doesn't prevent you from modelling salary with a scalar. If you judge that you should always take the job that earns you as much money as possible, then if job A money > job B money, you will say yes to A and no to B.

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

An idea either solves a problem or it doesn't.

There is no way to assign probabilities to ideas. Theories such as quantum mechanics assign probabilities to events, e.g. the radioactive decay of an atom. Assigning a probability to a theory makes no sense, since there is no rule for assigning probabilities in the absence of an explanatory theory.

comment by curi · 2017-10-30T23:43:48.384Z · LW(p) · GW(p)

Newton's theory of gravity has flaws but it's still a good idea and can be used in plenty of cases.

Is this intended to contradict something in the article?

There's nothing bad about two people with different priors coming to different conclusions.

People often disagree, no problem, but if there's no possible way to agree – if everything is just arbitrary – then you have a problem.

If my problem is that I want to have a successful job interview

That's not a well-defined problem.

Furthermore anytime I want to maximize the probability of an outcome I also care about a scalar. Why do you think that probabilities shouldn't be central in epistemology?

Maximizing a single metric has a binary outcome: either you did the thing which maximizes it or you didn't.

comment by Manfred · 2017-10-04T19:54:06.048Z · LW(p) · GW(p)

Neat paper about the difficulties of specifying satisfactory values for a strong AI. h/t Kaj Sotala.

The design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. [...] Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results.

I think it's slightly lacking in sophistication about aggregation of numerical preferences, and in how revealed preferences indicate that we don't actually have incommensurable or infinitely-strong preferences, but is overall pretty great.
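As a toy illustration of why aggregation is the hard part (all preference numbers below are invented), different aggregation rules over the very same individual views can endorse different actions:

    # Toy illustration of the aggregation problem: the same individual views,
    # aggregated by different rules, endorse different actions.
    # All preference numbers are invented.
    views = {
        "alice": {"A": 10, "B": 0},  # cares intensely about A
        "bob":   {"A": 4,  "B": 5},  # mildly prefers B
        "carol": {"A": 4,  "B": 5},  # mildly prefers B
    }

    # Rule 1: utilitarian sum of utilities.
    totals = {o: sum(v[o] for v in views.values()) for o in ("A", "B")}
    # Rule 2: one person, one vote for their preferred option.
    votes = {"A": 0, "B": 0}
    for v in views.values():
        votes[max(v, key=v.get)] += 1

    print("utilitarian sum picks:", max(totals, key=totals.get))  # A (18 vs 10)
    print("majority vote picks:", max(votes, key=votes.get))      # B (2 to 1)
    # Which rule is right turns on whether intensities are comparable across
    # people - exactly the measurement and aggregation questions above.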

On the subject of the problem, I don't think we should program in values that are ad-hoc on the object level (what values to use - trying to program this by hand is destined for failure), or even the meta level (whose values to use). But I do think it's okay to use an ad-hoc process to try to learn the answers to the meta-level questions. After all, what's the worst that could happen? (irony). Of course, the ability to do this assumes the solution of other, probably more difficult philosophical/AI problems, like how to refer to people's values in the first place.

Replies from: Dagon
comment by Dagon · 2017-10-04T23:16:31.409Z · LW(p) · GW(p)

Note that these three things (standing, measurement, and aggregation) are unsolved for human moral decisionmaking as well.