Comments

Comment by MendelSchmiedekamp on Frequentist Magic vs. Bayesian Magic · 2010-04-09T20:42:57.152Z · LW · GW

What I keep coming back to here is: doesn't the entire point of this post come down to situations where the parameters in question, the biases of the coins, are not independent? And doesn't this contradict:

estimate 100 independent unknown parameters

Which leads me to read the latter half of this post as: we can (in principle, perhaps not computably) estimate 1 complex parameter from 100 data sets better than 100 independent unknown parameters from individual data sets. This shouldn't be surprising. I certainly don't find it so.
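
To make the contrast concrete, here is a minimal sketch (my own toy example, not from the post): 100 coin biases drawn from a shared distribution, where an estimator that exploits the shared structure beats 100 independent per-coin estimates.

```python
# Toy comparison: per-coin MLE (treats the 100 biases as independent)
# vs. empirical-Bayes shrinkage toward a shared Beta prior.
import numpy as np

rng = np.random.default_rng(0)
true_p = rng.beta(4, 4, size=100)       # the biases are related, not arbitrary
flips = 10
heads = rng.binomial(flips, true_p)

mle = heads / flips                     # 100 independent estimates

# Crude method-of-moments fit of a shared Beta(a, b) prior to the MLEs.
m, v = mle.mean(), mle.var()
s = m * (1 - m) / v - 1                 # assumes v < m(1-m), true here
a, b = m * s, (1 - m) * s
shrunk = (a + heads) / (a + b + flips)  # posterior mean under the shared prior

print("MSE of independent MLEs:   ", np.mean((mle - true_p) ** 2))
print("MSE with shared structure: ", np.mean((shrunk - true_p) ** 2))
```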

The first half just points out that in the independent case of this particular example, Bayesian and Frequentist perform equivalently for relatively similar assumptions. But cousin_it made a general claim about the Frequentist approach, so this isn't worth much weight on its own.

Comment by MendelSchmiedekamp on Levels of communication · 2010-03-24T20:15:46.561Z · LW · GW

This post is a decent first approximation. But it is important to remember that even successful communication is almost always occurring on more than just one of these levels at once.

Personally I find it useful to think of communication as having spontaneous layers of information which may include things like asserting social context, acquiring knowledge, reinforcing beliefs, practicing skills, indicating and detecting levels of sexual interest, and even play. And by spontaneous layers, I mean that we each contribute to the scope of a conversation, and then those contributions become discerned as patterns (whether intended or not).

Then iterate this process a few times, with me attempting to perceive and affect your patterns and you attempting to perceive and affect mine. Add some habitual or built-in (it's extremely hard to tell the difference) models in the mind to start from, and it seems simple (to me) how something as complex and variable as human communication can arise.

Comment by MendelSchmiedekamp on Open Thread: March 2010, part 3 · 2010-03-19T20:08:36.434Z · LW · GW

In retrospect, spelling words out loud, something I do tend to do with a moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill to the task, as I tend to error correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.

As for my language skills, they are at least adequate. However, I have devoted special attention to improving them, so I can't say that I don't share some underlying bias away from being especially capable.

Comment by MendelSchmiedekamp on Mental Crystallography · 2010-03-01T14:31:42.531Z · LW · GW

When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.

This is something of an oversimplification. Categories are one possible first step, but eventually you will need more nuance than that. I suggest forming estimates based on the communication also serving as a sequence of experiments. And being very strict about not ruling things out, especially if you have not managed to beat down your typical mind fallacy.

And that's just for a simple dialogue. Communication in a public forum, with other audiences and even other participants - well, that is even more complex.

Comment by MendelSchmiedekamp on Open Thread: February 2010, part 2 · 2010-02-17T14:38:08.734Z · LW · GW

Given how seminal the sequences are treated as being, why are the "newbies" the only ones who should be (re)reading them?

Comment by MendelSchmiedekamp on Outline of a lower bound for consciousness · 2010-01-13T15:06:30.979Z · LW · GW

The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge.

Especially given that these are likely quite loose lower bounds, and don't account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.

What I find intriguing about this result is that essentially it is one of the few I've seen that has a limit description of consciousness: you have on one hand a rating of complexity of your "conscious" cognitive system and on the other you have world adherence based on the population of your assertions. Consciousness is maintained if, as you increase your complexity, you maintain the variety of the assertion population.

It is possible that the convergence rates for humans and prospective GAI will simply be different, however. Which makes a certain amount of sense. Ideal consciousness in this model is unachievable, and approaching it faster is more costly, so there are good evolutionary reasons for our brains to be as meagerly conscious as possible - even to fake consciousness when the resources would not otherwise be missed.

Comment by MendelSchmiedekamp on High Status and Stupidity: Why? · 2010-01-12T19:00:08.811Z · LW · GW

This should not be underestimated as an issue. Status as we use it here and at Overcoming Bias tends to be simplified into something not unlike a monetary model.

It is possible to try to treat things like status reductively, but in the current discussion it will hopefully suffice to characterize it with more nuance than "social wealth".

Comment by MendelSchmiedekamp on The Correct Contrarian Cluster · 2009-12-22T18:17:34.801Z · LW · GW

If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.

Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine their degrees of empirical correctness. Presupposing a particular structure will introduce biases in the discoveries you can make.
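
As a sketch of what I mean (my own illustration; the belief matrix is synthetic), let the data select the number of clusters rather than presupposing one:

```python
# Cluster synthetic contrarian belief vectors and pick the number of
# clusters by silhouette score instead of assuming a single cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
prototypes = rng.integers(0, 2, size=(3, 20))     # 3 latent belief clusters
members = rng.integers(0, 3, size=300)            # 300 contrarians
noise = rng.integers(0, 2, size=(300, 20))
beliefs = np.where(rng.random((300, 20)) < 0.85,  # 15% idiosyncratic beliefs
                   prototypes[members], noise)

scores = {}
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(beliefs)
    scores[k] = silhouette_score(beliefs, km.labels_)
print(max(scores, key=scores.get), scores)  # typically recovers k = 3 here
```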

Comment by MendelSchmiedekamp on The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom · 2009-12-13T06:37:33.591Z · LW · GW

there's really no reason those numbers should be too much higher than they are for a random inhabitant of the city

Actually, simply being in the local social network of the victim should increase the probability of involvement by a significant amount. How much would of course depend on population, murder rates, and so on, and likely also on estimates from criminology models for the crime in question.

Comment by MendelSchmiedekamp on 11 core rationalist skills · 2009-12-04T17:23:13.678Z · LW · GW

Proof of how dangerous this sort of list can be.

I entirely forgot about:

  • act effectively

After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?

Comment by MendelSchmiedekamp on 11 core rationalist skills · 2009-12-04T16:35:33.378Z · LW · GW

Or more succinctly and broadly, learn to:

  • pay attention

  • correct bias

  • anticipate bias

  • estimate well

With a single specific enumeration of means to accomplish these competencies you risk ignoring other possible curricula. And you encourage the same blind spots for the entire community of aspiring rationalists so educated.

Comment by MendelSchmiedekamp on The Moral Status of Independent Identical Copies · 2009-12-01T18:05:02.053Z · LW · GW

This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don't want to use a classical information measure - otherwise Kolmogorov random outcomes will be preferred to all others.
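
A quick illustration of that concern (my own toy example, using compressed length as a crude stand-in for Kolmogorov complexity):

```python
# Under a plain information measure, incompressible random noise scores
# higher than any structured outcome of the same length.
import os
import zlib

structured = b"abcdefgh" * 4096          # highly regular outcome
noise = os.urandom(len(structured))      # Kolmogorov-random outcome

print("structured:", len(zlib.compress(structured)))  # small
print("noise:     ", len(zlib.compress(noise)))       # near full length
# A utility built directly on this measure would prefer the noise.
```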

Comment by MendelSchmiedekamp on Rational lies · 2009-11-24T19:23:33.355Z · LW · GW

Lies, truth, and radical honesty are all concepts that get in the way of understanding what is going on here.

You are communicating with someone; several of the many constantly changing layers of this communication (in addition to status signaling, empathy broadcasting, and performatives) are the transfer of information from you to that someone. The effectiveness of the communication of this information, and its accuracy when received, are things we can talk about fairly easily in terms of both instrumental (effectiveness) and epistemic (accuracy) rationality.

To classify that communication as a lie, as truth, or as honest (from your own perspective) involves unpacking social signals and conscious and unconscious intent, and is entirely irrelevant to any rational goal.

Considering that our societies place value on the signals conveyed by these terms, it may matter how our signals are received. This is an instrumental rationality question about increasing the likelihood of being seen as honest or as telling a lie.

It is essential not to confuse these two very different things. One of the first clues is to realize that when we talk about truth in rationality we mean something closely related to accuracy; in communication it may be the same word, but it means something entirely different. This means that we should ban ourselves from using the word until we are quite sure we know what we mean by it.

Comment by MendelSchmiedekamp on Request For Article: Many-Worlds Quantum Computing · 2009-11-20T21:58:23.810Z · LW · GW

My post does describe a distinct model based on a Many Worlds interpretation where the probabilities are computed differently based on whether entanglement occurs or not - i.e. whether the universes influence each other. It is distinct from the typical model of decoherence.

As for photosynthesis, it ought to behave in much the same way: as a network of states propagating through entangled universes, with the interactions of the states in those branches causing the highest probabilities to be assigned to the branches which have the lowest energy barriers.

Of note, there are other, more esoteric models based on even more unusual interpretations of quantum mechanics, but I suspect that's not something we need to get into here.

Comment by MendelSchmiedekamp on A Less Wrong singularity article? · 2009-11-20T21:41:45.173Z · LW · GW

It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "Anyone's free to edit." is equivalent to killing this thing.

Also, this is a philosophy, psychology, and technology journal, which means that despite the list of references for Singularity research, you will also need to link this with the philosophical and/or public policy issues that the journal wants you to address (take a look at the two guest editors).

Another worry of mine is that in all the back issues of this journal I looked over, the papers were almost always monographs (and barring that, had 2 authors). I suspect that having many authors might kill the chances for this paper.

Comment by MendelSchmiedekamp on Request For Article: Many-Worlds Quantum Computing · 2009-11-20T20:03:42.296Z · LW · GW

First of all, consider that a computer is incomplete without a program, so let's just think of a programmed computer - whether in hardware or software doesn't matter for our purposes.

This gives us a system that goes from some known start state to some outcome state through a series of intermediate steps. If each of these steps is deterministic, then the entire system reaches the same outcome in all universes where it had the same starting point.

If those steps were stochastic, perhaps because there is a chance of memory corruption in our computer or because of a random guess, then in some universes the system arrives at a different outcome, based on the probability of that branch of the intermediate states. This can produce many branches, but because each of these branches cannot affect the others, the result is a tree of intermediate states, leading to the outcomes of our computer and its program.

Now, both of these are classical computers, but it helps to know what a classical computer looks like in a many worlds interpretation before mapping a quantum computer there. This is because all computers, classical or quantum, share a property - they are computers. This means there must be a path from the starting state to the outcome state. We can influence that path in many ways, but the path is part of how we define and build a computer and a program.

In quantum mechanics, there is a phenomenon called entanglement, which loosely means that the events in very similar worlds can affect the probabilities of events in all of those worlds. You can think of this as the boundaries between the many worlds smoothing out as you get to a small scale.

This means that unlike our stochastic tree of states, the quantum computer can have a more complex structure. It is even possible for two branches to converge back into one, and for branches to cancel out.* In practice, these are more approximate than precise, so you will find a dominant combination of two branches or a near cancellation. Using this interaction, a skilled quantum algorithm designer can use a variety of tricks to make correct answers more likely, by canceling wrong answers and by increasing the probability of correct ones.
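
A minimal numerical sketch of that cancellation (my own toy example, not a real quantum computer): a classical random bit flipped twice stays 50/50, while a qubit put through a Hadamard gate twice returns to |0> with certainty, because the amplitudes of the two intermediate branches cancel.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])                    # start in state |0>

amp = H @ (H @ ket0)                           # two quantum "coin flips"
print("quantum   P(0), P(1):", np.abs(amp) ** 2)                # [1, 0]

P = np.array([[0.5, 0.5], [0.5, 0.5]])         # classical random bit flip
print("classical P(0), P(1):", P @ (P @ np.array([1.0, 0.0])))  # [0.5, 0.5]
```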

There is no uniform solution to this problem. For example, the best known quantum algorithm, Shor's algorithm for prime factorization, exploits the frequency of a possible prime factorization using number theory. This works well on quantum computers because that frequency difference is also a critical value for determining the probability of the combination of two quantum variables.

In each case, a computer and its program produce a path of execution, but by exploiting the features of (or needing to deal with the problems of) non-determinism and quantum mechanics, the nature of that computation becomes more complex and difficult to see. Any one world's view is not sufficient, especially in the case of quantum computing, where the probabilities which govern a world's execution are not derived solely from within that world.

* I'm fairly sure on this, but I'm a little rusty, so I could be wrong.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-17T16:52:36.089Z · LW · GW

I meant that setting the limit to "no preference" for a given C doesn't equate to a globally continuous function. But when you adjust your preference function to approximate the discontinuous function by a continuous one, the result will contain (at least one) no-preference point between any two A < B.

Now, perhaps there is a result which says that if you take the limit as you set all discontinuous C to no preference, then the resulting function is complete, consistent, transitive, and continuous, but I wouldn't take that to be automatic.

Consider, for example, a step discontinuity, where an entire swath of pA + (1-p)B is stuck on the same set of < and = mappings, and then there is a sharp jump to a very large set of < and = mappings at a critical p'. If you map the ordinals to the real line, this is analogous to a y-coordinate jump. To remove this discontinuity you would need to do more than split the preferences at p' around no preference, because all that does is add a single point to the mix. To fully resolve it, you need to add an entire continuous curve, which means a process of selecting new A, B, and C, and showing that the transfinite limit always converges to a valid result.
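
In symbols (my notation, not from the thread): writing L_p = pA + (1-p)B, the step discontinuity described above looks like

```latex
V(L_p) =
\begin{cases}
  v_0, & p < p' \\
  v_1, & p \ge p'
\end{cases}
\qquad \text{with } v_1 > v_0 .
```

Declaring indifference at the single point p' inserts only one value between v_0 and v_1; a continuous V must take every intermediate value on any neighborhood of p', which is exactly the "entire continuous curve" above.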

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-16T15:58:59.328Z · LW · GW

We are talking about the same thing here just at different levels of generality. The function you describe is the same as the one I'm describing, except on a much narrower domain (only a single binary lottery between A and B). Then you project the range to just a question about C.

In the specific function you are talking about, you must hold that this is true for all A, B, and C to get continuity. In the function I describe, the A, B, and C are generalized out, so the continuity property is equivalent to the continuity of the function.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-11T18:09:26.297Z · LW · GW

I was talking about utility functions, but I can see your point about generalizing the result to the mapping from arbitrary dilemmas to preferences. Realize, though, that preference space isn't discrete.

You can describe it as the function from a mixed dilemma to the joint relation space for < and =, which you can treat as a somewhat more complex version of the ordinals (certainly you can construct a map to a dense version of the ordinals if you have at least 2 dilemmas and a dense probability space). That gives you a notion of the preference space where a calculus concept of continuity does apply (as the continuity axiom is a variation on the intermediate value theorem for this space, which implies typical continuity).

From this perspective, the point I'm making about continuous approximations should make more sense.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-10T17:24:09.132Z · LW · GW

That is my reading of it too. I know Stuart is putting forward analytic results here; I was concerned that this one was not correctly represented.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-10T16:38:19.005Z · LW · GW

Note: Independence II does not imply Independence without using at least the consistency axiom.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-09T21:55:26.576Z · LW · GW

If we're using Independence II as an axiom, you should be a little more precise: when you introduced it above, you referred to the base four axioms, including continuity.

Now, I only noticed that consistency is needed to convert between the two Independence formulations, which would make your statement correct. But on the face of things, it looks like you are trying to show a money pump theorem under discontinuous preferences by calling upon the continuity axiom.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-09T19:19:03.473Z · LW · GW

Correct: by definition, if you have a dense set (which by default we treat the probability space as) and we map it into another space, then either that space is also dense, in which case the converging sequences will have limits, or it will not be dense (in which case continuity fails). In the former case, continuity reduces to point-wise continuity.

Note that setting the limit to "no preference" does not resolve the discontinuity. But by the intermediate value theorem, there will exist at least one such point in any continuous approximation of the discontinuous function.

Comment by MendelSchmiedekamp on All hail the Lisbon Treaty! Or is that "hate"? Or just "huh"? · 2009-11-06T18:24:09.169Z · LW · GW

Nice to see Europe catching up with, say, India in this regard.

Does that answer your question?

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-06T15:36:04.324Z · LW · GW

This has been helpful. I'm much more familiar with the mathematics than the economics. Presently, I'm more worried about the mathematical chicanery involved in approximating a consistent continuous utility function out of things.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-06T15:18:34.848Z · LW · GW

But doesn't the money pump result for non-independence rely on continuity? Perhaps I missed something there.

(Of note, this is what happens when I try to pull out a few details which are easy to relate and don't send entirely the wrong intuition - I can't vouch for accuracy, but at least it seems we can talk about it.)

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-05T18:27:38.677Z · LW · GW

Sorry, I left this out. It's a huge simplification, but treat the set of p as a discrete subset in the standard topology.

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-05T16:18:39.219Z · LW · GW

I'm very busy at the moment, but the short version is that one of my good candidates for a utility component function, c, has c(A) < c(B) < c(pA + (1-p)B) for a subset of possible outcomes A and B and choices of p.

This is only a piece of the puzzle, but if continuity in the von Neumann-Morgenstern sense falls out of it, I'll be surprised. Some other bounds are possible, I suspect.
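
For context (a standard vNM fact, my own addition): under all four axioms, mixtures are evaluated linearly,

```latex
u\bigl(pA + (1-p)B\bigr) = p\,u(A) + (1-p)\,u(B)
\;\in\; \bigl[\min\{u(A), u(B)\},\ \max\{u(A), u(B)\}\bigr],
```

so c(A) < c(B) < c(pA + (1-p)B) is incompatible with a full expected-utility representation; at least one axiom (independence or continuity) has to fail for such a c.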

Comment by MendelSchmiedekamp on Money pumping: the axiomatic approach · 2009-11-05T15:53:25.112Z · LW · GW

Of note, you don't explain why discontinuous preferences necessarily cause vulnerability to money pumping.

I'm concerned about this largely because the von Neumann-Morgenstern continuity axiom is problematic for constructing a functional utility theory from "fun theory".
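
For reference, the standard statement of that axiom (my addition, not from the post) is:

```latex
A \preceq B \preceq C
\;\Longrightarrow\;
\exists\, p \in [0,1] \ \text{such that}\ B \sim pA + (1-p)C .
```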

Comment by MendelSchmiedekamp on Open Thread: November 2009 · 2009-11-04T13:31:50.672Z · LW · GW

Glad to see something like this.

Comment by MendelSchmiedekamp on Arrow's Theorem is a Lie · 2009-10-27T14:54:14.127Z · LW · GW

Fair enough. Although in considering the implications of more than two options for the other conditions, I noticed something else worrisome.

The solution you present weakens the social welfare function: after all, if I have two voters and they vote (10,0,5) and (0,10,5), the result is an ambiguous ordering, not a strict ordering as required by Arrow's theorem (which is really a property of very particular endomorphisms on permutation groups).
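
Concretely, tallying the example ballots:

```python
# Summing the two range-voting ballots: every option ties, so the
# aggregate is complete indifference rather than a strict ordering.
ballots = [(10, 0, 5), (0, 10, 5)]
totals = [sum(scores) for scores in zip(*ballots)]
print(totals)  # [10, 10, 10]
```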

It seems like a classic algorithmic sacrifice of completeness for power. Was that your intent?

Comment by MendelSchmiedekamp on Arrow's Theorem is a Lie · 2009-10-26T18:43:36.626Z · LW · GW

Note that according to the Wikipedia article listed, Arrow's theorem is valid "if the decision-making body has at least two members and at least three options to decide among". This makes me suspicious of the Pareto-efficiency counter-example, as it assumes we have only 2 options.

Comment by MendelSchmiedekamp on How to get that Friendly Singularity: a minority view · 2009-10-10T19:47:30.931Z · LW · GW

What worries me about this tack is that I'm sufficiently clever to realize that in conducting a vast and complex research program to empirically test humanity to determine a global reflectively consistent utility function, I will be changing the utility trade-offs of humanity.

So I might as well make sure that I conduct my mass studies in such a way as to ensure that the outcome is both correct and easier for me during the second, much longer (essentially infinitely longer) phase of my functioning.

So said AI would determine and then forever follow exactly what humanity's hidden utility function is. But there is no guarantee that this is a particularly friendly scenario.

Comment by MendelSchmiedekamp on When Willpower Attacks · 2009-10-05T16:19:06.372Z · LW · GW

I have a similar result, except that since I've never experienced stimulant effects from anything other than blood sugar, I'm not certain I can discount sleepiness. Also, I suffer from a migraine condition which has a much more severe effect on my mental faculties on a day-to-day basis.

And since improper sleeping is one of my triggers - "Happiness is getting enough sleep." Not too much, not too little.

Comment by MendelSchmiedekamp on Intuitive differences: when to agree to disagree · 2009-09-30T20:22:54.623Z · LW · GW

This seems like the conflict between two deep-seated heuristics; hence it would be difficult at best to argue for the right one.

Instead, I suggest a synthetic approach. Stop treating the two intuitions as a false dichotomy, and consider the continuum between them (or even beyond them).

Comment by MendelSchmiedekamp on Privileging the Hypothesis · 2009-09-29T19:49:40.014Z · LW · GW

This is essentially an instance of availability bias. But in the most interesting case, rather than being just a declarative hypothesis elevated among the other inhabitants of the hypothesis space for that particular question, models have other effects that go far beyond mere availability.

This is because our initial model won't just be the first thing we think of when we examine the question, but part of the very structures we use when we formulate the question. Indeed, how we handle our models is easily responsible for the majority of the biases that have been discussed here and at Overcoming Bias.

In the case of the models mentioned in this post about quantum mechanics - we can look at the first quantum mechanics interpretation as having its own version of hypothesis privilege. This means we should downgrade it. Of course we should also do the same with its immediate successor, Many Worlds (although perhaps not as much). After all, it is the interpretations we haven't thought of which are being penalized the most by the effect of privileging the hypothesis.

But to adequately apply this discounting beyond the direct route, we need to understand the ways in which these models affect our thinking. How do these models encourage other models to be developed, and how do they blind our cognitive architecture to different avenues?

This becomes even more pronounced when more is at stake than epistemic rationality. Since these two models are also sides (even if fairly polite ones) in cultural and political conflicts, there are further biases that arise as parts of these ideas become further tied to status, self-esteem, social behaviors, and habits.

Comment by MendelSchmiedekamp on Boredom vs. Scope Insensitivity · 2009-09-24T14:35:42.082Z · LW · GW

I expect that one source of the problem is in equating these two situations. On one hand you have 100 copies of the same movie. On the other hand, you have 100 distinct humans you could pay to save. To draw a direct comparison you would need to treat these as 100 copies of some idealized stranger, in which case the scope insensitivity might (depending on how you aggregate the utility of a copy's life) make more sense as a heuristic.

And this sort of simplification is likely one part of what is happening when we naively consider the questions:

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

I wonder how this experiment would change if you presented lists of names - if you encouraged a different model for the 100 strangers.

Comment by MendelSchmiedekamp on Open Thread: September 2009 · 2009-09-04T20:36:44.421Z · LW · GW

It's just that with two distinctly different conclusions drawn from the results by two different sources - the article authors (in the abstract) and Gerald Weissmann, M.D., Editor-in-Chief (in the news article) - I place a much lower confidence in the latter being a reasonable reading of the research paper.

But of course we could quite safely argue about readings and interpretations indefinitely. I'd point you to Derrida and Hermeneutics if you want to go that route.

In any case, I'll update my estimate of the likelihood of the research paper having an errant conclusion based on Weissmann's quote, and I suggest you do the same based on the evidence in the abstract - and then I suspect we have little more to discuss on the subject.

Comment by MendelSchmiedekamp on Open Thread: September 2009 · 2009-09-03T15:08:22.897Z · LW · GW

So, perhaps the news article was based on press release that was based on the journal article. My point was that it was not produced solely from the abstract.

I don't see why this is your point. At the very least, it doesn't present counter-evidence to my claim that the abstract contains information, not present in the news article, which mitigates or negates the concerns of the original comment.

Comment by MendelSchmiedekamp on Open Thread: September 2009 · 2009-09-01T20:07:35.295Z · LW · GW

But the abstract does not make any "just right" claims, unlike the summary on Science Daily - which is what you were complaining about.

The abstract reads: we did an incremental test, and even at the lowest dosage we found an effect. This suggests that low dosages could be effective. I don't see anything wrong with that reasoning.

The Science Daily summary is simply misrepresenting it. So the original commenter isn't missing something in the science news; it is Science Daily who made the error.

Comment by MendelSchmiedekamp on Open Thread: September 2009 · 2009-09-01T19:34:30.585Z · LW · GW

The following sounds like a control measurement was taken:

"Blood and urine samples were collected before and after each dose of DHA and at 8 wk after arrest of supplementation."

Also note that the abstract doesn't say that 200mg is ideal, as the Science Daily description does; it says:

"It is concluded that low consumption of DHA could be an effective and nonpharmacological way to protect healthy men from platelet-related cardiovascular events."

Comment by MendelSchmiedekamp on Open Thread: September 2009 · 2009-09-01T17:39:19.455Z · LW · GW

Well, the article abstract isn't consistent with the description you linked to. One of the dangers of paraphrasing science.

Comment by MendelSchmiedekamp on Pittsburgh Meetup: Survey of Interest · 2009-08-27T17:54:10.298Z · LW · GW

I'm interested, especially since this will likely be the closest such meet-up to State College, PA. I'm not the only one here, so I can ask around. Although, obviously, our transportation logistics will be more complicated.

Comment by MendelSchmiedekamp on Mathematical simplicity bias and exponential functions · 2009-08-27T17:44:42.622Z · LW · GW

No. The Medawar zone is more about scientific discoveries as marketable products to the scientific community, not the cultural and cognitive pressures of those communities which affect how those products are used as they become adopted.

Different phenomena, although there are almost certainly common causes.

Comment by MendelSchmiedekamp on Mathematical simplicity bias and exponential functions · 2009-08-27T04:44:28.840Z · LW · GW

Oh yes, but it's not just a predilection for simple models in the first place, but also a tendency to culturally and cognitively simplify the model we access to use - even if the original model had extensions to handle this case, and even to the tune of orders of magnitude of error.

Of course, sometimes it may be worth computing, in a very short amount of time, an estimate that is (unknown to you) orders of magnitude off. Certainly if the impact of the estimate is delayed and subtle, less conscious trade-offs may factor in between the cognitive effort to access and use a more detailed model and the consequences of error. Yet another form of akrasia.

Comment by MendelSchmiedekamp on Mathematical simplicity bias and exponential functions · 2009-08-26T20:40:12.665Z · LW · GW

Generally (and therefore somewhat inaccurately) speaking, one way that our brains seem to handle the sheer complexity of computing in the real world is a tendency to simplify the information we gather.

In many cases these sorts of extremely simple models didn't start that way. They may have started with more parameters and complexity. But as they are repeated, explained, and applied, the model becomes, in effect, simpler. The example begins to represent the entire model, rather than serving to show only a piece of it.

Technically, the exponential radioactive decay model for the radioactivity of a mixture has most of the pieces you describe fairly directly. But this hardly means they will be appropriately applied, or that they will be available when we are thinking of how to use the model. We need to fight the simplification effect to make our models more nuanced and detailed - even though they are still almost certainly lossy compressions of the facts, observations, and phenomena they were built from.
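
As a toy illustration of the cost (my own example, with made-up decay constants): a two-isotope mixture decays as a sum of exponentials, and the simplified single-exponential model, fitted to early measurements, is off by many orders of magnitude at late times.

```python
import numpy as np

t = np.linspace(0, 100, 400)
# Mixture: a fast-decaying isotope dominating early, plus a slow one.
activity = 100 * np.exp(-1.0 * t) + 1.0 * np.exp(-0.01 * t)

early = t < 3                                            # fit early data only
slope, intercept = np.polyfit(t[early], np.log(activity[early]), 1)
single_exp = np.exp(intercept + slope * t)               # simplified model

print("true activity at t=100:     ", activity[-1])      # ~0.37
print("single-exponential estimate:", single_exp[-1])    # astronomically small
```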

On the other hand, the simplification serves its purpose too: if we could devote unlimited cognitive resources to a model, we would risk never actually reaching a decision from the model.

Comment by MendelSchmiedekamp on Working Mantras · 2009-08-25T18:24:38.795Z · LW · GW

I don't have many mantras, although I stress the importance of understanding before trying to solve.

One that does stand out is more of a question:

"What am I not thinking here?" or "What are we forgetting here?" - Followed by estimations based on meta-biases and human error tendencies to make some hypotheses where cognitive, social, or cultural blind spots might be. And then comes the testing, followed by more hypotheses. And so on.

After all, every field of thought is developed by humans. It's a common point of failure.

Comment by MendelSchmiedekamp on Fighting Akrasia: Finding the Source · 2009-08-08T20:19:15.725Z · LW · GW

Procrastination and laziness may be kinds of akrasia, but simply because they are the types most talked about here does not mean that they are an exhaustive description of "weaknesses of will". One example I find easy to bring up is trying to move while in pain. There are definite moments where a crisis of will occurs: if you have a sharp shooting pain in your leg while walking, you will either change your movement against your intended direction or overcome that moment and escape the akrasia for a time.

I do, however, suspect that this community would do a better job at fighting akrasia if we did not confound it solely with procrastination and "laziness".

Comment by MendelSchmiedekamp on Suffering · 2009-08-04T22:19:11.647Z · LW · GW

You're right that this, among other topics, is one I owe a top-level post on.

Although one worry I have with trying to lay out inferential steps is that some of these ideas (this one included) seem to encounter a sort of Zeno's paradox of full comprehension. It stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side.

Which means that until I find a way to map people around that phenomenon, I'm hesitant to give a large-scale treatment. Just because it was the route I took doesn't mean it's a good way to explain things generally, a la the Typical Mind Fallacy borne out by evidence.

But in any case I will return to it when I have the time.

Comment by MendelSchmiedekamp on Suffering · 2009-08-04T04:28:02.622Z · LW · GW

Building on some of the more non-trivial theories of fun - specifically, cognitive science research focusing on the human response to learning - there is a direct relationship between human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human.

Admittedly extending this concept of suffering beyond humanity is a bit questionable. But it's better than a tautological or innately subjective definition, because with this model it is possible to estimate and compare with more intuitive expectations.

One nice effect of having suffering defined as the sapping of complexity is that it deals fairly elegantly with the question of which pain is suffering: "subjectively" interesting pain is not suffering, but "subjectively" uninteresting pain is.

Of course, that is only a small part of the process of making these distinctions. It's important to estimate both the subject of the qualia and the structure of the sequence of qualia, as it relates to the current state of the entity in question, before you can estimate whether the stream of qualia will induce suffering or not.

It is a very powerful approach, but it is by no means simple. So I don't begrudge some karma loss in trying to explain it to folks here - at least it's some feedback on unclear explanations.