Posts

"Dialectics"? 2014-07-12T06:34:00.244Z · score: 1 (11 votes)
Am I Understanding Bayes Right? 2013-11-13T20:40:43.402Z · score: 3 (6 votes)
Meetup : New Meetup: Milwaukee 2013-11-13T19:36:40.270Z · score: 1 (2 votes)

Comments

Comment by cyrildan on "Dialectics"? · 2014-07-12T09:08:10.720Z · score: 7 (7 votes) · LW · GW

Right, my mistake about the mistaken communism statistic; you're correct that I confused the two in my memory.

And that was a very thorough explanation. It matches what I could glean from my own searches, but it was nice having it all in one place and in more straightforward terminology. So thank you.

Comment by cyrildan on Am I Understanding Bayes Right? · 2013-11-15T19:13:20.605Z · score: 0 (0 votes) · LW · GW

Unfortunately no, but from your description it sounds a lot like General Semantics' theory of mind.

I think it's similar, but Lakoff focuses more on how things are abstracted away. For example, because in childhood affection is usually associated with warmth (e.g. through hugs), the different areas of your brain that code for those things become linked ("neurons that fire together, wire together"). This then becomes the basis of a cognitive metaphor, Affection Is Warmth, such that we can also say "She has a warm smile" or "He gave me the cold shoulder" even though we're not talking literally about body temperature.

Similarly, in Where Mathematics Comes From: How The Embodied Mind Brings Mathematics Into Being, he summarises his chapter "Boole's Metaphor: Classes and Symbolic Logic" thusly:

  • There is evidence ... that Container schemas are grounded in the sensory-motor system of the brain, and that they have inferential structures like those just discussed. These include Container schema versions of the four inferential laws of classical logic.
  • We know ... that conceptual metaphors are cognitive cross-domain mappings that preserve inferential structure.
  • ... [W]e know that there is a Classes are Containers metaphor. This grounds our understanding of classes, by mapping the inferential structure of embodied Container schemas to classes as we understand them.
  • Boole's metaphor and the Propositional Logic metaphor have been carefully crafted by mathematicians to mathematicize classes and map them onto propositional structures.
  • The symbolic-logic mapping was also crafted by mathematicians, so that propositional logic could be made into a symbolic calculus governed by "blind" rules of symbol manipulation.
  • Thus, our understanding of symbolic logic traces back via metaphorical and symbolic mappings to the inferential structure of embodied Container schemas.

That's what I was getting at above, but I'm not sure I explained it very well. I'm less eloquent than Mr. Lakoff is, I think.

Yes. Since a group of maps can be seen as just a set of things in itself, it can be treated as a valid territory. In logic there are also map/territory loops, where the formulas themselves become the territory mapped by those same formulas (akin to talking in English about the English language). This trick is used, for example, in Goedel's and Tarski's theorems.

Hmm, interesting. I should become more familiar with those.

Yes. Basically the Bayesian definition is more inclusive: e.g. there is no definition of the probability of a single coin toss in the frequency interpretation, but there is in the Bayesian one. Also, in the Bayesian take on probability the frequentist definition emerges as a natural by-product. Plus, the Bayesian framework has untangled a lot of frequentist statistics and introduced more powerful methods.

Oh right, for sure; another historical example would be "What's the probability of a nuclear reactor melting down?" before any nuclear reactor had melted down. But even if the Bayesian definition covers more than the frequentist definition (which it definitely does), why not just use both definitions and understand that one's application is a subset of the other's?
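The single-coin-toss point above can be made concrete with a small sketch (my own illustration, not from the thread, using the standard Beta-Bernoulli model): a Bayesian assigns a well-defined probability to the very next toss even with zero data, and as tosses accumulate the estimate converges toward the frequentist ratio.

```python
from fractions import Fraction

def predictive_heads(a, b, heads, tails):
    """Probability the next toss is heads, under a Beta(a, b) prior.

    The posterior after observing `heads` and `tails` is
    Beta(a + heads, b + tails), whose mean is the predictive probability.
    """
    return Fraction(a + heads, a + b + heads + tails)

# With a uniform Beta(1, 1) prior and no data at all, the probability of a
# single toss is still defined:
print(predictive_heads(1, 1, 0, 0))    # 1/2

# As data accumulates, the estimate approaches the observed frequency:
print(predictive_heads(1, 1, 70, 30))  # 71/102, close to 70/100
```

Under this view the frequentist answer appears as the large-sample limit of the Bayesian one, which is one way of reading the "natural by-product" remark above.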

The first two chapters of Jaynes' book, a pre-print version of which is available online for free, do a great job in explaining and using Cox to derive Bayesian probability. I urge you to read them to fully grasp this point of view.

Right, I think I found the whole thing online, actually. The first chapter I understood pretty much without difficulty, but the second chapter gave me brain-hurt, so I put it down for a while. I think it might be because I never took calculus in school (something I now regret, oddly enough relative to the general population). So I'm trying to become stronger before I go back to it. Do you think that getting acquainted with Cox's Theorem in general would make Jaynes's particular presentation of it easier to digest?

Yes...

Yes...

Yes...

Hooray, I understand some things!

Comment by cyrildan on Am I Understanding Bayes Right? · 2013-11-15T07:59:20.967Z · score: 0 (0 votes) · LW · GW

I believe you've defined an equivalent if unusual form (or rather, your definition can be extended to an equivalent form).

Yeah, that's what MrMind said too. Thanks!

The only laws of probability measure I know are that the measure of the whole set is 1, and the measure of a union of disjoint subsets is the sum of their measures. I'm finding it hard to imagine how I could hold beliefs that wouldn't conform to them. I mean, I guess it's conceivable that I could believe that A has probability 0.1, and B has probability 0.1, and A OR B has probability 0.3, but that just seems crazy.

Yeah, and I fully grasp the "measure of the whole set is 1" thing. (After all, if you're 100% certain something is true, then that's the only thing you think is possible). The additivity axiom is harder for me to grasp, though. It seems like it should be true intuitively, but teaching myself the formal form has been more difficult. Thinking and Deciding tries to derive it from having different bets depending on how things are worded (for example, on whether a coin comes up heads or tails versus whether the sun is up and the coin comes up heads or tails) which I grasp intellectually, but I'm having a hard time grokking it intuitively.
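The bet-based derivation of additivity mentioned above can be seen in miniature with the exact numbers from the parent comment (P(A) = 0.1, P(B) = 0.1, P(A or B) = 0.3). This is a toy Dutch-book sketch of my own, under the usual assumption that you will buy or sell a bet paying $1 on an event at your stated probability for that event:

```python
# Incoherent beliefs about disjoint events A and B:
p_a, p_b, p_a_or_b = 0.1, 0.1, 0.3

# A bookie buys the bets on A and on B from you (paying p_a + p_b), and
# sells you the bet on (A or B) (receiving p_a_or_b). Cash pocketed now:
upfront = p_a_or_b - (p_a + p_b)

# Later payouts cancel exactly in every case: if A occurs, the bookie
# collects $1 on the A bet and pays $1 on the (A or B) bet, and so on.
outcomes = {
    "A only": 1 + 0 - 1,
    "B only": 0 + 1 - 1,
    "neither": 0 + 0 - 0,
}
assert all(net == 0 for net in outcomes.values())

print(f"Bookie's guaranteed profit: ${upfront:.2f}")  # $0.10 no matter what
```

Any violation of additivity in either direction opens up a sure-loss book like this, which is the intuition the betting derivation in Thinking and Deciding is formalizing.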

I think you're trying to be too formal too fast (or else your title isn't what you're really interested in). Try getting a solid practical handle on Bayes in finite contexts before worrying about extending it to infinite possibilities and the real world.

I do have a subjective feeling of success when I use Bayes (or Bayes-derived heuristics, more commonly) in my everyday life, but I really want to be sure I understand the nitty-gritty of it. Even if most of my use of it is just in justifying heuristics, I still want to be sure that I can formulate and apply them properly, you know?

Comment by cyrildan on Am I Understanding Bayes Right? · 2013-11-15T07:35:40.128Z · score: 0 (0 votes) · LW · GW

First of all, let me thank you so much, MrMind, for your post. It was really helpful, and I greatly appreciate how much work you put into it!

I'll try to give you the formalist perspective, which is a sort of 'minimal' take on the whole matter.

Much obliged.

Everything starts with a set of symbols, usually finite, that can be combined to form strings called formulas.

Question: I'm making my way through George Lakoff's works on metaphor and embodied thought; are you familiar with the theory at all? (I know lukeprog did a blog post about them, but that's not nearly everything there is to know.) Basically, the theory is that our most basic understandings are linked to our direct sensory experience, and that we then abstract away from that metaphorically in various fields, a very bottom-up approach. Whereas you're starting with symbols, which I think would be the reverse of what he's saying? It's probably just a difference of perspective, but as a starting point it gives the concepts less ballast for me. That said, I'm not entirely lost - I think I mentioned that I've studied symbolic logic, so I'll press ahead!

Then there's the concept of truth: when you have a logic, you notice that sometimes formulas refer to entities or states of some environment, and that syntactic rules somehow reflect processes happening between those entities. Specifying which environment, which processes and which entities you are considering is the purpose of ontology, while the task of relating ontology and morphology/syntax is the purpose of semantics.

As you can probably imagine, there are a myriad of logics and myriads of ontologies (often called models).

How does this connect to the map-territory distinction? Generally as I've understood it, logic is a form of map, but so too would be a model. Would a model be a map and logic be a map of a map? Am I getting that right?

All three of them, frequentists, subjectivists and Bayesians, believe that the structure of probability is correctly described by the mathematical concept of a measure, as formalized by the Kolmogorov axioms.

This is something that has always confused me, the probability definition wars. Is there really something to argue about here? Maybe I'm missing something, but it seems like an "if a tree falls in the woods..." kind of question that should just be taboo'd. But when you taboo frequency-probability apart from epistemic-probability, it's not immediately obvious why the same axioms should apply to both of them (which doesn't mean that they don't; thank you to everyone for pointing me to Cox's Theorem again - I know I've seen it before, but I think it's starting to click a little more on this pass). And Richard Carrier's new book says that they're actually the same thing, which is just confusing (that epistemic probability is the frequency at which beliefs with the same amount of evidence will turn out true/false, or something like that). (EDIT: Another possibility would be that the frequentist and Bayesian definitions could both count as "probability" and both conform to the axioms, but that would just make it more perplexing for people to argue about it.)

As you can see, you are just using one ontology (possible worlds) to justify one interpretation (Kolmogorov measure), but there are many more.

Thanks for the terminology. I don't really understand what they are given so brief a description, but knowing the names at least spurs further research. Also, am I doing it right for the one ontology and one interpretation that I've stumbled across, regardless of the others?

Fuzzy logic resembles PTEL in the expansions of the set of truth values, but uses different rules than CL, so the resemblance is only superficial: PTEL and fuzzy logics are two very different beasts.

Right, because in fuzzy logics the spectrum is the truth value (because being hot/cold, near/far, gay/straight, sexual/asexual, etc. is not an either/or), whereas with PTEL the spectrum is the level of certainty in a more staunch true/false dichotomy, right? I don't actually know fuzzy logic, I just know the premise of it.

The other question I forgot to ask in the first post was how Bayes' Theorem interacts with group identity not being a matter of necessary and sufficient conditions, or for other fuzzy concepts like I mentioned earlier (near/far, &c.). For this would you just pick a mostly-arbitrary concept boundary so that you have a binary truth value to work with?
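One way my own question can be made concrete (a toy sketch of my idea, not anyone's official answer, with hypothetical numbers throughout): fix an explicit, admittedly arbitrary cutoff so the fuzzy predicate becomes binary, then apply Bayes' theorem as usual.

```python
def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H and evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Sharpen the fuzzy predicate "near" with an arbitrary 5 km cutoff,
# so statements about nearness get a binary truth value:
is_near = lambda distance_km: distance_km < 5.0
print(is_near(3.2), is_near(7.0))  # True False

# Made-up numbers: P(friend) = 0.3, P(near | friend) = 0.8,
# P(near | not friend) = 0.2; then condition on observing "near":
posterior = bayes(p_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.632
```

Whether that cutoff-picking move is principled, or whether fuzzy membership degrees need different machinery entirely, is exactly the part I'm unsure about.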

Comment by cyrildan on Am I Understanding Bayes Right? · 2013-11-15T07:11:43.143Z · score: 0 (0 votes) · LW · GW

Okay, that clears it up a lot.

Comment by cyrildan on Good movies for rationalists? · 2013-11-13T19:26:33.758Z · score: 3 (3 votes) · LW · GW

In The Princess Bride, Vizzini loses his contest of wits with Westley because he doesn't question his assumptions, which leads to him dying (and falling over comically). It's not super high-level, but it could be a useful talking point, given that it's a popular movie and relatively kid-friendly.

Comment by cyrildan on Having Useful Conversations · 2013-11-13T19:14:37.899Z · score: 1 (1 votes) · LW · GW

As a variation on the explicit "I'm ending my talking turn", maybe use Aesop-style summaries as the marker, using some kind of agreed-upon lead-in, like "and that's why..." That could be useful for a number of reasons:

• It would definitively mark that the speaker was done talking.

• It would encourage one's thinking to be more focused. If you can't briefly sum up what you've been saying, then maybe you've been trying to say too much at a time. This isn't always a problem, but it could lead to your conversation partner missing something.

• It would signal what the speaker themself thinks is important versus tangential, which may differ from the ranking the listener inferred. This could become a talking point in itself, either for clarification or for a more meta analysis.

Thoughts?

Comment by cyrildan on Welcome to Less Wrong! (6th thread, July 2013) · 2013-11-12T23:57:54.724Z · score: 2 (2 votes) · LW · GW

My name is Dan; I'm a 25-year-old white male.

It's unclear when my path to rationalism began. I was pretty smart and studious even as a home-schooled creationist in a very Christian family. Things started changing when I hit high school and left home school for private school. Dealing with people of other denominations and (Christian) theologies meant that I had to know where my own beliefs were coming from, and then my domain of beliefs-needing-justification expanded again when I was anticipating going to (and evangelising at) a public university. I took the Outsider Test For Faith and it took me from Christianity to atheism. The same process has continued in a feedback loop of known unknowns and "tsuyoku naritai"-style curiosity.

I discovered LessWrong through HPMOR and/or Common Sense Atheism (Luke Muehlhauser's late, great blog); I can't remember which. I've been lurking here for years and nowadays I check it on a regular basis, but I never really felt the need to create an account, since most people here seem wicked-smart enough that I wasn't sure I had anything to contribute. But I've changed my mind, the last cent tipping the scale (plus my realisation that there was probably a selection bias going into that estimation).

So yeah, pleasure to meet you all :)