Beware of identifying with schools of thought

post by ChristianKl · 2016-12-05T00:30:52.748Z · score: 10 (11 votes) · LW · GW · Legacy · 28 comments

As a child I decided to take a philosophy course as an extracurricular activity. In it the teacher explained to us the notion of schools of philosophical thought. According to him, classifying philosophers as adhering to either school A or school B is typical of Anglo thought.


It deeply annoys me when Americans talk about Democrat and Republican political thought and suggest that you are either a Democrat or a Republican. The notion that allegiance to one political camp is supposed to dictate your political beliefs feels deeply wrong.


A lot of Anglo high schools do policy debating. The British do it a bit differently than the Americans, but in both cases it boils down to students having to defend a certain side.

Traditionally there's nearly no debating at German high schools.


When writing political essays in German schools, there's a section where it's important to present your own view. Your own view isn't supposed to be one that you simply copy from another person. Good thinking is supposed to provide a sophisticated perspective on the topic that is a synthesis of arguments from different sources instead of following a single source.


That's part of why German intellectual thought has the ideal of 'Bildung'. In Imprisoned in English, Anna Wierzbicka tells me that 'Bildung' is a particularly German construct and the word isn't easily translatable into other languages. The nearest English word is 'education'. 'Bildung' can also be translated as 'creation'. It's about creating a sophisticated person who is more developed than the average person on the street who doesn't have 'Bildung'. Having 'Bildung' signals high status.


According to this ideal you learn about different viewpoints and then you develop a sophisticated opinion. Not having a sophisticated opinion is low class. In liberal social circles in the US, a person who agrees with what the Democratic party does at every point in time would have a respectable political opinion. In German intellectual life that person would be seen as a credulous, low-status idiot who failed to develop a sophisticated opinion. A low-status person isn't supposed to be able to fake being high status by memorizing the teacher's password.


If you ask me the political question "Do you support A or B?", my response is: "Well, I want neither A nor B. There are these reasons for A, there are those reasons for B. My opinion is that we should do C, which solves those problems better and takes more concerns into account." There is no high-status option A such that I can signal status simply by saying I'm in favour of A.


How does this relate to non-political opinions? In Anglo thought, philosophical positions belong to different schools of thought. Members of one school are supposed to fight for their school being right and better than the other schools.


If we take the perspective of hardcore materialism, a statement like "One of the functions of the heart is to pump blood" wouldn't be a statement that can be objectively true, because it's teleology. The notion of function isn't made up of atoms.


From my perspective as a German, there's little to be gained in subscribing to the hardcore materialist perspective. It makes a lot of practical sense to say that such a statement can be objectively true. I get the more sophisticated view of the world that I want to have: not only statements about arrangements of atoms can be objectively true, but also statements about the functions of organs. That move is high status in German intellectual discourse, but it might be low status in Anglo discourse because it can be seen as being a traitor to the school of materialism.


Of course that doesn't mean that no Anglo accepts that the above statement can be objectively true. But on the margin, German intellectual norms make it easier to accept the statement as objectively true. Following Hegel, you might say that thesis and antithesis come together in a synthesis instead of thesis or antithesis winning the argument.


The German Wikipedia page for "continental philosophy" tells me that the term is commonly used in English philosophy, and that it's mostly used derogatorily. From the German perspective, the battle between "analytic philosophy" and "continental philosophy" is not a focus of debate. The goal isn't to decide which school is right but to develop sophisticated positions that describe the truth better than answers you could get by memorizing the teacher's password.


One classic example of an unsophisticated position that's common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In Is semiotics bullshit?, PhilGoetz stumbles upon a professor of semiotics who claims: "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis."


That's seen as a strong violation of how reasoning based on logical positivism is supposed to work. It violates the memorized teacher's password. But is it true? To answer that, we have to ask what 'logical basis' means. David Chapman analyzes the notion of logic in Probability theory does not extend logic. In it he claims that in academic philosophical discourse the word 'logic' means predicate logic.


Predicate logic can make claims such as:

(a) All men are mortal.

(b) Socrates is a man.

Therefore:

(c) Socrates is mortal.


According to Chapman, the key trick of predicate logic is logical quantification. That means every claim has to be evaluable as true or false without looking at the context.
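Chapman's point can be sketched in a few lines of Python (the domain and the predicate names here are made up for illustration, not taken from his article): a universally quantified claim is checked against the whole domain, and every claim comes out as a bare True or False with no context attached.

```python
# Illustrative sketch of context-free evaluation in predicate logic.
# Predicates are just membership tests; a universally quantified claim
# is evaluated over the entire (toy) domain.

domain = {"socrates", "plato", "zeus"}
men = {"socrates", "plato"}      # Zeus is a god, not a man
mortals = {"socrates", "plato"}  # assumed facts for this sketch

def man(x):
    return x in men

def mortal(x):
    return x in mortals

# (a) All men are mortal: for every x in the domain, man(x) implies mortal(x)
premise_a = all((not man(x)) or mortal(x) for x in domain)

# (b) Socrates is a man.
premise_b = man("socrates")

# (c) Therefore: Socrates is mortal.
conclusion = mortal("socrates")

# Whenever (a) and (b) are both true, (c) must be true as well.
assert not (premise_a and premise_b) or conclusion
```

Each claim here has a definite truth value on its own; nothing about the evaluation depends on what we intend to do with the conclusion.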


Suppose we want to know whether a chemical substance is safe for human use. Unfortunately, our ethical review board doesn't let us test the substance on humans. Fortunately, they allow us to test it on rats. Hurray, the rats survive.


(a) The substance is safe for rats.

(b) Rats are like humans

Therefore:

(c) The substance is safe for humans.


The problem with `Rats are like humans` is that it isn’t a claim that’s simply true or false.

The truth value of the claim depends on what conclusions you want to draw from it. Propositional calculus can only evaluate the statement as true or false; it can't judge whether it's an appropriate analogy, because that requires looking at the deeper meaning of `Rats are like humans` to decide whether rats are like humans in the context we care about.
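To see what gets lost, here is a hedged sketch (the function and variable names are invented for illustration) of the rat-to-human inference in propositional terms. Once `Rats are like humans` is reduced to an atomic truth value, the calculus combines it mechanically and can no longer ask in what respect the analogy holds.

```python
# In propositional calculus the analogy premise is just an atomic
# variable. The calculus can combine truth values, but it cannot
# represent *in what respect* rats are like humans.

def infer_safe_for_humans(safe_for_rats: bool, rats_like_humans: bool) -> bool:
    # (a) and (b) together license (c); nothing else is visible
    # to the formalism.
    return safe_for_rats and rats_like_humans

# If we commit to the premise being simply True, the inference goes through:
print(infer_safe_for_humans(True, True))   # True

# But rats metabolize some substances very differently from humans, so in
# another context the same atomic premise would have to be False -- a
# distinction the formalism has no way to express:
print(infer_safe_for_humans(True, False))  # False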


Do humans sometimes make mistakes when they try to reason by analogy? Yes, they do. At the same time, they also come to true conclusions by reasoning through analogy. Saying "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis." sounds fancy, but if we reasonably define the term 'logical basis' as being about propositional calculus, it's true.


Does that mean that you should switch from the analytic school to the school of semiotics? No, that's not what I'm arguing. I argue that, just as you shouldn't let tribalism influence you in politics and identify as Democrat or Republican, you should keep in mind that philosophical debates, like policy debates, are seldom one-sided.


Daring to slay another sacred cow, maybe we also shouldn't go around thinking of ourselves as Bayesians. If you are on the fence about that question, I encourage you to read David Chapman's splendid article referenced above:

 

Probability theory does not extend logic

28 comments


comment by moridinamael · 2016-12-05T16:16:10.965Z · score: 2 (2 votes) · LW · GW

I'm not sure what work the word "sophisticated" is doing here.

Let's say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it "sophisticated" to say, "Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don't possess, then the Greens have some good points!"? That doesn't seem sophisticated. That just indicates a pathological inability to "join a side" even when one side is the one you should join by your own ethical compunctions, you want to join, you would benefit from joining, and you would cause others to benefit by joining.

Also, what if you arrive at the party partway through, and the Green and Blue have already spoken, and also another Sophisticate has spoken and indicated that "both sides have some good points, perhaps the answer is in the middle". Are you allowed to just say, "I agree with the Sophisticate!" or does that make you a "sophisticate partisan" meaning you are obligated by the laws of being/appearing "sophisticated" to say, "Well, actually, the answer can't be in the middle, a 50-50 split just seems improbable, the Greens are probably 25% right and the Blues are probably 75% right."?

What I'm getting at is that I'm not sure what the difference is between your usage of "sophisticated" and just being a contrarian.

You mention the attitudes implicit in certain styles of debate. I've written before about the dangers of certain styles of policy debate taught in American schools. I've always seen it as damaging that the point of US policy debate is to be able to argue from any position and against any position. It implicitly teaches the young mind that you can "win" an argument through cleverness and rule-lawyering without regard to whether your position is actually superior. The whole framework actively undermines the truthseeking mindset, because in a policy debate you're not truthseeking: you're trying to obfuscate the opponents' inconvenient truths and distort facts that support your own argument to appear more important than they are. In short, I think there's definitely such a thing as "too much sophistication", and I blame this type of sophistication for the fact that many of my former high school friends are now effectively insane.

Obviously I agree that it's dangerous to identify with a school of thought. Political parties in particular are coalitions of disparate interest groups, so the odds that a group of people who are only aligned for historically contingent reasons will come up with uniformly correct conclusions are near zero. That doesn't mean you can never be confident that you're right about something.

Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc. None of us are really Bayesians anyway, because a thorough and correct Bayesian network for computing something as simple as whether you should buy Hershey's or Snickers would be computationally intractable.
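The "Bayesian toolkit" move described here, holding a probability rather than a hard belief and updating it incrementally on evidence, can be sketched in a few lines (the numbers are purely illustrative):

```python
# Minimal sketch of incremental Bayesian updating: keep a probability
# for a hypothesis and revise it with each piece of evidence.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start 50/50 on a hypothesis, then observe three pieces of evidence
# that are each twice as likely if the hypothesis is true.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.4)

print(round(belief, 3))  # 0.889
```

Each update nudges the belief rather than flipping it, which is the contrast with "exclusive and inviolable belief-statements" drawn above.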

comment by ChristianKl · 2016-12-05T18:41:38.032Z · score: 1 (1 votes) · LW · GW

Let's say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it "sophisticated" to say, "Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don't possess, then the Greens have some good points!"? That doesn't seem sophisticated.

Both of those positions are expressible in a single sentence. Sophisticated positions on topics are generally complex enough that they aren't expressible in a single sentence.

Saying: "Here's the 300 page bill about how our policy in regards to using neurotoxins on life on earth should look like." is more sophisticated.

Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc

There are cases where it's useful to use probability when faced with uncertainty: when you can define a specific test of how the world looks when the belief is true and when it isn't.

Many beliefs are too vague for such a test to exist. It doesn't make sense to put a probability on "The function of the heart is to pump blood". That belief doesn't make a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.

At the same time it's useful to have beliefs like "The function of the heart is to pump blood".

comment by Lumifer · 2016-12-05T19:33:56.451Z · score: 2 (2 votes) · LW · GW

Saying: "Here's the 300 page bill about how our policy in regards to using neurotoxins on life on earth should look like." is more sophisticated.

ROFL...

tl;dr: KILL THEM ALL! ...but if you want sophistication, here is a 300 page paper about how and why we should KILL THEM ALL!

comment by ChristianKl · 2016-12-06T17:56:57.005Z · score: 0 (0 votes) · LW · GW

It's probably not the best example, but I stayed with the original example.

comment by NatashaRostova · 2016-12-06T02:18:18.214Z · score: 0 (0 votes) · LW · GW

Many beliefs are too vague for such a test to exist. It doesn't make sense to put a probability on "The function of the heart is to pump blood". That belief doesn't make a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.

Words are an imperfect information transfer system humans have evolved to develop. To interact with reality we have to use highly imperfect information-terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions/classifications.

Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.

Again, this is sort of trivial, because all it's saying is that 'past information is probabilistically useful to the future.' I think the fact that modern machine learning algos are able to implement Bayesian learning parameters should lead us to the conclusion that Bayesian reasoning is often intractable, but in its purest form it's simply the way to interpret reality.

comment by ChristianKl · 2016-12-06T07:42:01.237Z · score: 1 (1 votes) · LW · GW

Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.

David Chapman brings up the example of an algorithm that he wrote to solve a previously unsolved AI problem, which worked without probability but with logic.

In biology people who build knowledge bases find it useful to allow storing knowledge like "The function of the heart is to pump blood". If I'm having a discussion on Wikidata with another person whether X is a subclass or an instance of Y, probability matters little.

comment by moridinamael · 2016-12-08T21:20:42.549Z · score: 1 (1 votes) · LW · GW

I'm still having trouble with this.

A human mind is built out of nonlinear logic gates of various kinds. So even a belief like "the function of the heart is to pump blood" is actually composed of some network of neural connections that could be construed as interdependent probabilistic classification and reasoning via probabilistic logic. Or, at least, the human brain looks a lot more like "probabilistic classification and probabilistic reasoning" than it looks like "a clean algorithm for some kind of abstract formal logic". (Assume all the appropriate caveats that we don't actually compute probabilities; the human mind works correctly to the degree that it accidentally approximates Bayesian reasoning.)

Heck, any human you find actually using predicate calculus is using these neural networks of probabilistic logic to "virtualize" it.

Maybe probability matters little at the object level of your discussion, but that's completely ignoring the fact that your brain's assessment that X has quality Z which makes it qualify as a member of category Y is a probability assessment whether or not you choose to call it that.

I think Chapman is talking past the position that Jaynes is trying to take. You obviously can build logic out of interlinked probabilistic nodes, because that's what we are.

comment by moridinamael · 2016-12-05T21:02:02.726Z · score: 0 (0 votes) · LW · GW

If "sophisticated" in this usage just means "complex", I'm not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.

There may be a tendency for more complex and complicated positions to end up being better, because complexity is a signal that somebody spent a lot of time and effort on something, but Timecube is a pretty complex theory and I don't count that as being a plus.

Complexity or "sophistication" can cut the other way just as easily, as somebody adds spandrels to a model to cover up its fundamental insufficiency.

At the same time it's useful to have beliefs like "The function of the heart is to pump blood".

I don't know. I try to root out beliefs that follow that general form and replace them, e.g. "the heart pumps blood" is a testable factual statement, and a basic observation, which semantically carries all the same useful information without relying on the word "function" which implies some kind of designed intent.

comment by ChristianKl · 2016-12-06T17:56:20.459Z · score: 1 (1 votes) · LW · GW

If "sophisticated" in this usage just means "complex", I'm not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.

I haven't argued that A is just better than B.

I try to root out beliefs that follow that general form

Yes, and I see that as a flaw that's the result of thinking of everything in Bayesian terms.

"the heart pumps blood" is a testable factual statement, and a basic observation, which semantically carries all the same useful information

When the lungs expand that process also leads to pumping of blood. Most processes that change the pressure somewhere in the body automatically pump blood as a result. The fact that the function of the heart is to pump blood has more meaning than just that it pumps blood.

comment by Lumifer · 2016-12-05T17:39:08.336Z · score: 1 (1 votes) · LW · GW

You seem to fail Bildung as you contrast Anglo-American and German ways of doing things. A true sophisticate would know the truth is not in picking one side and sticking to it :-P

I am also not sure why should I care about what's high-status in Germany (other than being Dr. Dr. Dr. Lumifer, of course).

comment by sen · 2016-12-06T09:23:08.202Z · score: 0 (0 votes) · LW · GW

Or is it that a true sophisticate would consider where and where not to apply sophistry?

comment by Lumifer · 2016-12-06T15:39:41.943Z · score: 1 (1 votes) · LW · GW

A true sophisticate would apply sophistry everywhere but modulate it to make it appear that she possesses σοφία where she needs to show it and that she is a simpleton where it suits her :-P

comment by ChristianKl · 2016-12-05T20:20:05.930Z · score: 0 (0 votes) · LW · GW

There are frequently arguments that presume that tribalism is universal in a sense that it isn't.

comment by John_Maxwell (John_Maxwell_IV) · 2016-12-05T09:23:14.131Z · score: 1 (1 votes) · LW · GW

This seems like a desirable feature of German culture. Have Germans always been like this? Do you know where this aspect of German culture might have originated? The closest thing I can think of in American culture is the satirical TV show South Park, which is known for lampooning both sides in political debates. Unfortunately, the show is not strongly associated with intellectual sophistication.

comment by ChristianKl · 2016-12-05T13:55:18.904Z · score: 0 (0 votes) · LW · GW

Since Germany has proportional elections, it has a system that allows multiple political parties to exist. The US first-past-the-post system shapes the political landscape in a way that moves it toward having two political parties.

In 19th-century Germany there were three class tiers, and the upper class signaled that they were upper class by having a better education. Germany never moved to get rid of its upper class.

comment by WalterL · 2016-12-09T03:25:52.857Z · score: 0 (0 votes) · LW · GW

I dunno, man. What are you gonna do if a school of thought is right? It seems dumb to precommit to being too cool to agree with them, but surely you would lose your GERMANWORD if you credulously accepted their conclusions.

comment by Douglas_Knight · 2016-12-06T20:50:10.406Z · score: 0 (0 votes) · LW · GW

To sum up: invent a single sentence to summarize your opponent's position so that you condemn them as naive. For example, what you did to Phil Goetz.

comment by Viliam · 2016-12-07T09:43:59.697Z · score: 4 (4 votes) · LW · GW

General algorithm:

  • make a strawman version of your opponent's ideas, and call it "thesis";
  • make a strawman version of 'what is wrong with the strawman of my opponent's ideas', and call it "antithesis";
  • write a long text explaining how both "thesis" and "antithesis" are right about some partial aspects, optionally add your own ideas, and call it "synthesis";
  • collect your Hegel points ;-)

tl;dr -- any system of debate can be gamed

comment by sarahconstantin · 2016-12-05T19:20:44.584Z · score: 0 (0 votes) · LW · GW

I don't know German, but it sounds like the thing you mean by "Bildung" is something like "self-development".

Let me know if this sounds like the right idea:

If you want to be an excellent person, strong in various capacities, mature and able, then you want to think for yourself, and always keep looking for deeper/subtler/more powerful insights. There's a mental move of "yes, okay, but it could be even better" or "this is too easy, it's boring" that makes it seem unappealing to remain stuck in dogma. You don't want to flatten yourself or reduce yourself to a stereotype; you want to broaden and deepen your capacities.

The opposite mindset would be something like "I want to be Done, with all the thinking, forever, please don't make me get up from where I've plopped down. I'm on the side of the angels, and That is That."

Does that seem like the thing you're pointing at?

comment by ChristianKl · 2016-12-06T17:59:57.512Z · score: 0 (0 votes) · LW · GW

I don't know German, but it sounds like the thing you mean by "Bildung" is something like "self-development".

As far as I understand, especially in the US the term self-development is bound up with the American dream. It's about developing capabilities to turn the dream into reality.

On the other hand, 'Bildung' can be for its own sake and include art and literature that have no practical use.

comment by Viliam · 2016-12-05T10:31:21.444Z · score: 0 (0 votes) · LW · GW

I suspect there is a trade-off between partisanship and "deep wisdom". You can make status moves of both kinds, you just have to choose the move that fits your audience.

Displaying sophistication by saying things like "there are also some interesting arguments against the statement 2+2=4" at every opportunity can perfectly kill any momentum. (Even if those arguments are technically valid, for example "there is no such thing as '4' in base 3; it's called '11' instead", as long as they don't contribute to solving the problem, only to signal the scholarship of the speaker.)

comment by NatashaRostova · 2016-12-05T02:57:28.534Z · score: 0 (0 votes) · LW · GW

To the extent that using prior information about the world is useful in understanding the future, it's sort of nonsensical to say someone shouldn't think of themselves as Bayesian. To the extent someone is perhaps ignoring a correct methodological approach because of a misguided/misunderstood appeal to Bayesianism, that's fine.

For example, back in the academic world I worked on research forecasting the U.S. yield curve. We did this using a series of non-linear equations to fit each day (cross-section), then used filtering dynamics to jointly capture the time-series. Figuring out a way to make this already insanely complex model work within a Bayesian framework wouldn't only be too hard, it wouldn't be particularly useful. There is no nice quantifiable information that would fit in a prior, given the constraints we have on data, math, and computational ability, that would make the model formally Bayesian.

Having said that, to the extent that we tweaked the model using our prior information on less structured scientific theories (e.g. Efficient market hypothesis) -- it certainly was Bayesian. Sometimes the model worked perfectly, computed perfectly, but something didn't match up with how we wanted to map it to reality. In that sense we had our own neural-prior and found the posterior less likely.

It's really hard for me to see under what model of the world (correct) Bayesian analysis could be misleading.

One classic example of an unsophisticated position that's common in analytic philosophy is the idea that all intellectual discourse is supposed to be based on logic. In Is semiotics bullshit? PhilGoetz stumbles about a professor of semiotics who claims: "People have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis."

I think the claim that people can make correct judgements at better-than random probability that have no logical basis is nonsensical. Lots of this sort of writing and theorizing of the world is from a time, and from people, who existed before modern computational powers and machine learning. In the past the view was that the human mind was this almost-mystical device for processing reality, and we had to ground our beliefs in some sort of formal logic for them to follow. At least in my experience from working with stuff like neural-nets, I only see vast amounts of information, which our brains can filter out to predict the future. To reason from analogy, sometimes when doing some sort of ML problem you'll add a bunch of data and you have no logical clue why it would improve your model... But then your predictions/classification score increase.

In this context what does it even mean to call this logical or non-logical? It's nothing more than using past observed information patterns to classify and predict future information patterns. It's strictly empirical. I can't think of any logical decomposition of that which would add meaning.

comment by ChristianKl · 2016-12-05T13:06:32.286Z · score: 0 (0 votes) · LW · GW

To the extent someone is perhaps ignoring a correct methodological approach because of a misguided/misunderstood appeal to Bayesianism, that's fine.

If someone sees themselves as a Bayesian with a capital "B", that person is likely to prefer Bayesian methods of modeling a problem.

If I have a personal problem and do Gendlin's Focusing, I come up with an intuitive solution. There's little logic involved. There are real life cases where it makes more sense to follow the intuitive solution.

comment by sen · 2016-12-05T06:09:18.239Z · score: 0 (0 votes) · LW · GW

Is there ever a case where priors are irrelevant to a distinction or justification? That's the difference between pure Bayesian reasoning and alternatives.

OP gave the example of the function of organs for a different purpose, but it works well here. To a pure Bayesian reasoner, there is no difference between saying that the heart has a function and saying that the heart is correlated with certain behaviors, because priors alone are not sufficient to distinguish the two. Priors alone are not sufficient to distinguish the two because the distinction has to do with ideals and definitions, not with correlations and experience.

If a person has issues with erratic blood flow leading to some hospital visit, why should we look at the heart for problems? Suppose there were a problem found with the heart. Why should we address the problem at that level as opposed to fixing the blood flow issue in some more direct way? What if there was no reason for believing that the heart problem would lead to anything but the blood flow problem? What's the basis for addressing the underlying cause as opposed to addressing solely the issue that more directly led to a hospital visit?

There is no basis unless you recognize that addressing underlying causes tends to resolve issues more cleanly, more reliably, more thoroughly, and more persistently than addressing symptoms, and that the underlying cause can only be identified by distinguishing erroneous functioning from other abnormalities. Pure Bayesian reasoners can't make the distinction because the distinction has to do with ideals and definitions, not with correlations and experience.

It's really hard for me to see under what model of the world (correct) Bayesian analysis could be misleading.

If you wanted a model that was never misleading, you might as well use first-order logic to explain everything. Or go straight for the vacuous case and don't try to explain anything. The problem is that that doesn't generalize well, and it's too restrictive. It's about broadening your notion of reasoning so that you consider alternative justifications and more applications.

comment by NatashaRostova · 2016-12-05T16:36:12.356Z · score: 0 (0 votes) · LW · GW

I don't understand what you mean by 'ideals and definitions,' and how these are not influenced by past empirical observations and observed correlations. Any definition can simply be reduced to past observed correlations. The function of a heart is based strictly on past observations, and our mapping them to a functional model of how the heart behaves.

My argument seems trivial to me, because the idea that there is some non-empirical or correlated knowledge not based on past information seems nonsensical.

comment by sen · 2016-12-06T05:31:20.912Z · score: 1 (1 votes) · LW · GW

The distinction between "ideal" and "definition" is fuzzy the way I'm using it, so you can think of them as the same thing for simplicity.

Symmetry is an example of an ideal. It's not a thing you directly observe. You can observe a symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you've never seen. You can construct a symmetry that you've never seen, and you can do it algorithmically based on your idea of what symmetries are given a bit of time to think about the problem. You can even construct symmetries that, at first glance, would not look like a symmetry to someone else, and you can convince that someone else that what you've constructed is a symmetry.

The set of natural numbers is an example of something that's defined, not observed. Each natural number is defined sequentially, starting from 1.

Addition is an example of something that's defined, not observed. The general notion of a bottle is an ideal.

In terms of philosophy, an ideal is the Platonic Form of a thing. In terms of category theory, an ideal is an initial or terminal object. In terms of category theory, a definition is a commutative diagram.

I didn't say these things weren't influenced by past observations and correlations. I said past observations and correlations were irrelevant for distinguishing them. Meaning, for example, you can distinguish between more natural numbers than your past experiences should allow.

comment by NatashaRostova · 2016-12-06T05:44:22.900Z · score: 0 (0 votes) · LW · GW

I'm going to risk going down a meaningless rabbit hole here of semantic nothingness --

But I still disagree with your distinction, though I do appreciate the point you're making. I view the human brain, and think the correct way to view it is, as simply a special case of any other computer. You're correct that we have, as a collective species, proven and defined these abstract patterns. Yet even all these patterns are based on observations and on rules of reasoning between our minds and empirical reality. We can use our neurons to generate more sequences in a pattern, but the idea of an infinite set of numbers is only an abstraction, an appeal to something that could exist.

Similarly, a silicon computer can hold functions and mappings, but it can never create an array of all numbers. Those functions, no matter how complex, reduce down to electrical on-off switches.
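A small illustration of that point (my own, not from the thread): a computer cannot hold an array of all natural numbers, but it can hold the rule that generates them and produce any finite prefix on demand.

```python
from itertools import count, islice

naturals = count(1)                      # a rule, not a completed infinite set
first_five = list(islice(naturals, 5))   # materialize only a finite prefix
assert first_five == [1, 2, 3, 4, 5]
```

Whether holding the rule counts as "having" the infinite set is exactly the disagreement in this thread.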

There is also no rule that says natural numbers, or any category, can't change tomorrow. Or that just outside the farthest information set in the horizon of space available to humans, the laws of gravity and mathematics don't all shift by 0.1. It sounds sort of nonsensical, but it's part of the view that what makes things feel real and inherently distinguishable is only our perception of how certain they are to continue, based on prior information.

In my experience talking about this with people before, it's not the type of thing people change their minds on (not implying your view is necessarily wrong). It's a view of reality that we develop pretty foundationally, but I figured I'd write out my thoughts anyway for fun. It's also a sort of self-indulgent argument about how we perceive reality. But, hey, it's late and I'm relaxing.

comment by sen · 2016-12-06T07:12:37.138Z · score: 0 (0 votes) · LW · GW

I don't understand what point you're making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can't have them or their equivalent. It's obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It's obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them because... well, that's what neurons do, and neurons seem to be doing the bulk of the heavy lifting as far as thinking goes.

Where we disagree is in saying that all concepts our neurons recognize are equivalent and should be reasoned about in the same way. There are clearly some notions that we recognize as valid only after seeing sufficient evidence. For these notions, I think Bayesian reasoning is perfectly well-suited. There are also clearly notions we recognize as valid for which no evidence is required. For these, I think we need something else; only usefulness is required, and sometimes not even that. Bayesian reasoning cannot deal with this second kind because their acceptability has nothing to do with evidence.
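For concreteness, the first kind of notion, the evidence-driven kind, is the one Bayes' rule handles directly (a standard textbook update, sketched here with hypothetical numbers of my own choosing):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Evidence twice as likely under H shifts a 50% prior to about 67%.
posterior = bayes_update(0.5, 0.8, 0.4)
assert abs(posterior - 2 / 3) < 1e-9
```

The second kind of notion offers nothing for the likelihood terms to grip: there is no observation whose probability differs depending on whether the definition "holds."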

You argue that this second kind is irrelevant because these things exist solely in people's minds. The problem is that the same concepts recur again and again in many people's minds. I think I would agree with you if we only ever had to deal with a physical world in which people's minds did not matter all that much, but that's not the world we live in. If you want to be able to reliably convey your ideas to others, if you want to understand how people think at a more fundamental level, if you want your models to be useful to someone other than yourself, if you want to develop ideas that people will recognize as valid, if you want to generalize ideas that other people have, if you want your thoughts to be integrated with those of a community for mutual benefit, then you cannot ignore these abstract patterns, because they constitute such a vast amount of how people think.

These patterns also, incidentally, have a tremendous impact on how your own brain thinks and on the kinds of patterns your brain lets you consciously recognize. If you want to do better at generalizing your own ideas in reliable and useful ways, then you need to understand how they work.

For what it's worth, I do think there are physically-grounded reasons for why this is so.