## Posts

## Comments

**redding** on Linguistic mechanisms for less wrong cognition · 2015-11-29T16:46:46.041Z · score: 4 (4 votes) · LW · GW

Not sure if this is what KevinGrant was referring to, but this article discusses the same phenomenon:

http://rosettaproject.org/blog/02012/mar/1/language-speed-vs-density/

**redding** on Against Expected Utility · 2015-09-24T16:29:34.772Z · score: 0 (2 votes) · LW · GW

You say you are rejecting Von Neumann utility theory. Which axiom are you rejecting?

https://en.wikipedia.org/wiki/Von_Neumann–Morgenstern_utility_theorem#The_axioms

**redding** on [LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours · 2015-09-14T19:52:17.090Z · score: 7 (7 votes) · LW · GW

I think this is pretty cool and interesting, but I feel compelled to point out that all is not as it seems:

It's worth noting, though, that only the evaluation function is a neural network. The search, while no longer iteratively deepening, is still recursive. Also, the evaluation function is not a pure neural network: it includes a static exchange evaluation.

It's also worth noting that doubling the amount of computing time usually increases a chess engine's rating by about 60 Elo points. International masters usually have a rating below 2500. Though this figure is sketchy, the top chess engines are rated at around 3300. Thus, you could make a top-notch engine approximately 10,000 times slower and achieve the same performance.

Now, that 3300 figure is probably fairly inaccurate. Also, it's quite possible that if the developer tweaked their recursive search algorithm, they could improve it. Thus, the 10,000 figure I arrived at above is probably fairly inaccurate. Regardless, it is not clear to me that the neural network itself is proving terribly useful.
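As a sanity check on that estimate, the arithmetic can be sketched directly (the 60-Elo-per-doubling rule and the 2500/3300 ratings are the rough figures cited above):

```python
# Back-of-the-envelope arithmetic behind the "10,000 times slower" claim.
# Assumptions: ~60 Elo gained per doubling of think time, engines at ~3300,
# international masters below ~2500 (all rough figures from the comment above).
elo_gap = 3300 - 2500
doublings = elo_gap / 60       # speed halvings the engine could "give back"
slowdown = 2 ** doublings
print(round(slowdown))         # on the order of 10,000
```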

**redding** on The Heuristic About Representativeness Heuristic · 2015-09-14T00:49:20.558Z · score: 0 (0 votes) · LW · GW

Just to clarify, I feel that what you're basically saying is that what is often called the base-rate fallacy is actually the result of P(E|!H) being too high.

I believe this is why Bayesians usually talk not in terms of P(H|E) but instead use Bayes Factors.

Basically, to determine how strongly ufo-sightings imply ufos, don't look at P(ufos | ufo-sightings). Instead, look at P(ufo-sightings | ufos) / P(ufo-sightings | no-ufos).

This ratio is the Bayes factor.
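A minimal numeric sketch, with made-up probabilities (none of these figures come from the discussion), of how the Bayes factor measures the evidence and turns prior odds into posterior odds:

```python
# Hypothetical numbers, purely for illustration.
p_e_given_h = 0.9       # P(ufo-sightings | ufos exist)
p_e_given_not_h = 0.3   # P(ufo-sightings | no ufos), e.g. misidentifications
prior_odds = 1e-6       # assumed prior odds in favor of ufos

bayes_factor = p_e_given_h / p_e_given_not_h   # strength of the evidence alone
posterior_odds = prior_odds * bayes_factor
print(bayes_factor)      # ~3: the sightings are only weak evidence for ufos
print(posterior_odds)    # still tiny, because the prior dominates
```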

**redding** on Flowsheet Logic and Notecard Logic · 2015-09-10T21:00:09.752Z · score: 2 (2 votes) · LW · GW

I'm currently in debate, and this is one of the (minor) things that annoy me about it. The reason I can still enjoy debate (as a competitive endeavor) is that I treat it more like a game than an actual pursuit of truth.

I am curious, though, whether you think this actively harms people's ability to reason or whether it just provides more numerous examples of how most people reason - i.e., is this primarily a sampling problem?

**redding** on Stupid Questions September 2015 · 2015-09-03T13:03:18.264Z · score: 2 (2 votes) · LW · GW

Could we ever get evidence of a "read-only" soul? I'm imagining something that translates biochemical reactions associated with emotions into "actual" emotions. Don't get me wrong, I still consider myself an atheist, but it seems to me that how strongly one believes in a soul that is only affected by physical reality is based purely on their prior probability.

**redding** on My future posts; a table of contents. · 2015-08-31T14:03:40.944Z · score: 2 (2 votes) · LW · GW

Thanks for taking the time to contribute!

I'm particularly interested in "Goals interrogation + Goal levels".

Out of curiosity, could you go a little more in-depth regarding what "How to human" would entail? Is it about social functioning? First aid? Psychology?

I'd also be interested in "Memory and Notepads", as I don't really take notes outside of classes.

With "List of Effective Behaviors", would that be behaviors that have scientific evidence for achieving certain outcomes (happiness, longevity, money, etc.), or would it primarily be anecdotal?

That last one, "Strike to the heart of question", reminds me very much of the "void" from the 12 virtues, which always struck me as very important but frustratingly vaguely described. I think you really hit the nail on the head with "am I giving the best answer to the best question I can give". I'm not really sure where you could go with this, but I'm eager to see.

**redding** on Open Thread - Aug 24 - Aug 30 · 2015-08-24T16:37:41.999Z · score: 1 (1 votes) · LW · GW

Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind that is useful enough to allow an AI to usefully modify itself? For instance, if you gave me complete access to my source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-30T11:53:31.342Z · score: 0 (0 votes) · LW · GW

What I mean by "essentially ignore" is that if you are (for instance) offered the following bet you would probably accept: "If you are in the first 100 rooms, I kill you. Otherwise, I give you a penny."

I see your point regarding the fact that updating using Bayes' theorem implies your prior wasn't 0 to begin with.

I guess my question is now whether there are any extended versions of probability theory. For instance, Kolmogorov probability reverts to Aristotelian logic at the extremes P=1 and P=0. Is there a system of thought that reverts to probability theory for finite worlds but is able to handle infinite worlds without privileging certain (small) numbers?

I will admit that I'm not even sure guessing "not a multiple of 10" follows the art of winning, as you can't sample from an infinite set of rooms in traditional probability/statistics without some kind of sampling function that biases certain numbers. At best we can say that for whatever finite integer N of rooms you choose, the best strategy is to guess "not a multiple of 10", and this remains the best strategy as N goes to infinity. By induction we can prove that guessing "not a multiple of 10" is the better bet for any finite number of rooms, but alas, infinity remains beyond its reach.
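The finite-N claim is easy to verify directly; a small sketch at stakes of +1 dollar for a correct guess and -1 for an incorrect one:

```python
# Expected payoff of each guess when there are finitely many rooms,
# at stakes of +1 for a correct guess and -1 for an incorrect one.
def expected_payoffs(n_rooms):
    multiples = n_rooms // 10
    non_multiples = n_rooms - multiples
    ev_multiple = (multiples - non_multiples) / n_rooms
    ev_non_multiple = (non_multiples - multiples) / n_rooms
    return ev_multiple, ev_non_multiple

for n in (10, 1000, 10**6):
    print(n, expected_payoffs(n))
# "not a multiple of 10" earns +0.8 per bet at every finite n shown
```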

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-29T22:29:16.265Z · score: 0 (0 votes) · LW · GW

Could you point me to some solutions?

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-29T21:27:26.669Z · score: 0 (0 votes) · LW · GW

From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?

Similarly, suppose I'm born in a universe with infinitely many such rooms and someone tells me to guess whether my room number is a multiple of 10 or not. If I guess correctly, I get a dollar; otherwise I lose a dollar.

Theoretically there are as many multiples of 10 as non-multiples (both being equinumerous to the integers), but if we define rationality as the "art of winning", shouldn't I guess "not a multiple of 10"? I admit that my intuition may be broken here - maybe it truly doesn't matter which you guess - after all, it's not like we can sample a bunch of people born into this world without some sampling function. However, the question still remains: what would a rational being do?

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-29T21:17:31.288Z · score: 0 (0 votes) · LW · GW

Could you recommend a good source from which to learn measure theory?

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-29T12:52:48.308Z · score: 0 (0 votes) · LW · GW

I (now) understand the problem with using a uniform probability distribution over a countably infinite event space. However, I'm kind of confused when you say that the example doesn't exist. Surely it's not logically impossible for such an infinite universe to exist. Do you mean that probability theory isn't expressive enough to describe it?

**redding** on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T12:20:02.981Z · score: 1 (3 votes) · LW · GW

There are different levels of impossible.

Imagine a universe with an infinite number of identical rooms, each of which contains a single human. Each room is numbered outside: 1, 2, 3, ...

The probability of you being in the first 100 rooms is 0 - if you ever have to make an expected utility calculation, you shouldn't even consider that chance. On the other hand, it is definitely possible in the sense that some people are in those first 100 rooms.

If you consider the probability of you being in room Q, this probability is also 0. However, it (intuitively) feels "more" impossible.
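The underlying problem is that no constant per-room probability can be uniform over countably many rooms; a quick check in exact arithmetic (the candidate value of p here is arbitrary - any positive value fails the same way):

```python
from fractions import Fraction

# Candidate uniform per-room probability; exact rational arithmetic
# avoids any floating-point rounding in the partial sums.
p = Fraction(1, 10**9)

n = 1
while p * n <= 1:   # total probability assigned to the first n rooms
    n *= 10
print(n)  # the partial sums already exceed 1 after finitely many rooms
# ...while p = 0 makes every partial sum 0, so the total can never reach 1.
```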

I don't really think this line of thought leads anywhere interesting, but it definitely violated my intuitions.

**redding** on Looking to restart Madison LW meetups, in need of regulars · 2015-05-10T20:28:35.932Z · score: 1 (1 votes) · LW · GW

I'm tentatively interested. I live about an hour east of Madison, but as a college student this is really only relevant during the summer. I'll take a look at potential (cheap) transportation.

**redding** on Stupid Questions May 2015 · 2015-05-02T15:21:44.644Z · score: 0 (0 votes) · LW · GW

Interesting. Do you have any idea why this results in a paradox but the corrigibility problem in general doesn't?

**redding** on Stupid Questions May 2015 · 2015-05-01T22:32:19.696Z · score: 3 (3 votes) · LW · GW

One common way to think about utilitarianism is to say that each person has a utility function and whatever utilitarian theory you subscribe to somehow aggregates these utility functions. My question, more-or-less, is whether an aggregating function exists that says that (assuming no impact on other sentient beings) the birth of a sentient being is neutral. My other question is whether such a function exists where the birth of the being in question is neutral if and only if that sentient being would have positive utility.

EDIT: I do recall a similar-seeming post: http://lesswrong.com/lw/l53/introducing_corrigibility_an_fai_research_subfield/

**redding** on Probability of coming into existence again ? · 2015-02-28T17:24:32.474Z · score: 1 (1 votes) · LW · GW

I think the probability of you popping into existence again is (1) very small and (2) dependent on how you define your "self." Would you consider an atom-for-atom copy of you to be "you"? How about an uploaded copy? Etc. The simple fact is that physicists have constructed a very simple model of the universe that hasn't been wrong yet and so is very likely to be correct in the vast majority of situations - your existence should be one of them. Faith in the accepted model of the universe constructed by modern physicists can be justified by any reasonable prior coupled with Bayes' theorem. Thus, you can be extremely confident (99.999%+) that you won't pop into existence with infinite suffering (technically, 0 and 1 aren't commonly accepted as probabilities on LessWrong).

Moving on, you will almost certainly not live forever (suffering or otherwise), because, quite simply, the universe will experience heat-death at some point. Justification for this belief is, similarly, based on Bayesian updating.

As a side-note. You say

> Similar to what happens if there is no free will and thus nothing matters since no change is possible?

I'm not sure free will is a meaningful mental category when used in philosophy. If we lived in a deterministic universe, I, personally, would still believe that life had value. Ultimately, our universe is either deterministic or it isn't, but I fail to see why this would have any important philosophical implications. Why would it be good if our universe contained randomness?

You might consider reading "Possibility and Could-ness", if you haven't done so, for an alternative perspective on what free will actually is.

**redding** on Justifying (Improper) Priors · 2015-02-03T00:40:38.687Z · score: 6 (6 votes) · LW · GW

I had typed up an eloquent reply to address these issues, but instead wrote a program that scored uniform priors vs 1/x^2 priors for this problem. (Un)fortunately, my idea does consistently (slightly) worse under the p*log(p) metric. So, you are correct in your skepticism. Thank you for the feedback!

**redding** on Is there a rationalist skill tree yet? · 2015-01-31T00:38:21.896Z · score: 3 (3 votes) · LW · GW

I think such a tree would depend in large part on what approach one wants to take. Do you want to learn probability to get a formal foundation for probabilistic reasoning? As far as I know, no other rationality skill is required to do this, but a good grasp of mathematics is. On the other hand, very few of the posts in the main sequences (http://wiki.lesswrong.com/wiki/Sequences#Major_Sequences) require probability theory to understand. So, in a sense, there is very little cross-dependency between a mathematical understanding of probability and the rationality taught here. On the other hand, so many of the ideas are founded on probability theory that it seems odd that it wouldn't be required. Thoughts?

**redding** on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? · 2014-12-13T18:50:50.061Z · score: 0 (0 votes) · LW · GW

As others have stated, obligation isn't really part of utilitarianism. However, if you really wanted to use the term, one possible way to incorporate it is to ask what the xth percentile of people would do in this situation (where people are ranked in terms of expected utility), given that everyone has the same information, and use that as a boundary for the label "obligation."

As an aside, there is a thought experiment called the "veil of ignorance." Although it is not, strictly speaking, called utilitarianism, you can view it that way. It goes something like this: when deciding how a society should be set up, the designer should set it up as if they had no idea who they would become in that society. In this case, "obligation" would probably loosely correspond to "what rules should that society have?" A utilitarian's obligated giving rate would then be something like

k * (Income - Poverty Line), where k is some number between 0 and 1, chosen so that utility is maximized if everyone gave at that rate.
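A sketch of that rule with placeholder numbers (the poverty line and k below are invented; the comment only fixes the formula's shape):

```python
POVERTY_LINE = 15_000   # hypothetical figure
K = 0.25                # hypothetical rate in (0, 1)

def obligated_giving(income):
    # Nothing is "obligated" from those at or below the poverty line.
    return max(0.0, K * (income - POVERTY_LINE))

print(obligated_giving(50_000))  # 8750.0
print(obligated_giving(10_000))  # 0.0
```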