Comment by truepath on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T23:19:19.535Z · score: 2 (2 votes) · LW · GW

Stimulants are an excellent short-term solution. If you absolutely need to get work done tonight and can't sleep, amphetamine (e.g. Adderall) is a great option. Indeed, there are a number of studies/experiments (including those the air force relies on to justify giving pilots amphetamines) backing up the fact that it improves the ability to get tasks done while sleep deprived.

Of course, if you are having long term sleep problems it will likely increase those problems.

Comment by truepath on Can a Bayesian agent be infinitely confused? · 2019-03-27T08:14:49.194Z · score: 6 (4 votes) · LW · GW

There is a lot of philosophical work on this issue, some of which recommends taking conditional probability as the fundamental unit (in which case Bayes' theorem only applies for non-extremal values). For instance, see this paper.
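
A minimal sketch of the extremal-value issue (my gloss, not necessarily the linked paper's framing): with the ratio definition

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

the update is undefined whenever $P(E) = 0$, and once $P(H)$ is $0$ or $1$ no evidence can ever move it. Taking $P(\cdot \mid \cdot)$ as primitive (e.g. via Popper functions) keeps conditional probabilities well defined even on probability-zero events.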

Comment by truepath on And My Axiom! Insights from 'Computability and Logic' · 2019-01-22T06:51:22.207Z · score: 1 (1 votes) · LW · GW

Computability is just $\Delta^0_1$ definability. There are plenty of other notions of definability you could try to cash out this paradox in terms of. Why pick $\Delta^0_1$ definability?
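
For reference, these are the standard definitions (nothing specific to the post under discussion):

$$\text{computable} = \Delta^0_1 = \Sigma^0_1 \cap \Pi^0_1, \qquad \text{arithmetically definable} = \bigcup_n \Sigma^0_n,$$

and one can keep going (hyperarithmetic, i.e. $\Delta^1_1$, and beyond), so "definable set" on its own doesn't pick out a single notion.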

If the argument worked for any particular definability notion (e.g. arithmetical definability) it would be a problem. Thus, the solution needs to explain why, for each concrete notion of definable set, the argument fails to go through.

Comment by truepath on And My Axiom! Insights from 'Computability and Logic' · 2019-01-17T17:48:42.842Z · score: 1 (1 votes) · LW · GW

But that's not what the puzzle is about. There is nothing about computability in it. It is supposed to be a paradox along the lines of Russell's set of all sets that don't contain themselves.

The response about formalizing exactly what counts as a set defined by an English sentence is exactly correct.

Comment by truepath on And My Axiom! Insights from 'Computability and Logic' · 2019-01-17T17:27:31.160Z · score: 1 (1 votes) · LW · GW

Yeah, enumerable means something different from computably enumerable.

Comment by truepath on Anthropics: Full Non-indexical Conditioning (FNC) is inconsistent · 2019-01-15T02:20:30.155Z · score: -1 (2 votes) · LW · GW

This is just the standard Sleeping Beauty paradox, and I'd suggest that the issue isn't unique to FNC.

However, you are a bit quick in concluding it is time inconsistent, as it's not altogether clear that one is truly referring to the same event before and after the observation. The hint here is that in the standard Sleeping Beauty paradox the supposed update involves only information you were already certain you would get.

I'd argue that what's actually going on is that you are evaluating slightly different questions in the two cases.

Comment by truepath on Who's welcome to our LessWrong meetups? · 2018-12-11T10:01:12.719Z · score: 9 (8 votes) · LW · GW

Don't. At least outside of Silicon Valley, where oversubscription may actually be a problem. It's a good intention, but it will inevitably make people worry they aren't welcome or aren't the right sort of people. Instead, describe what one does or what one talks about in a way that will appeal to the kind of people who would enjoy coming.

Comment by truepath on [Insert clever intro here] · 2018-11-20T13:48:28.718Z · score: 7 (6 votes) · LW · GW

Given that you just wrote a whole post to say hi and share your background with everyone, I'm pretty confident you'll fit right in and won't have any problems with being too shy. Writing a post like this rather than just commenting is such a LessWrong kind of thing to do, so I think you'll be right at home.

Comment by truepath on The "semiosis reply" to the Chinese Room Argument · 2018-11-16T04:50:09.452Z · score: 1 (1 votes) · LW · GW

Searle can be any X?? WTF? That's a bit confusingly written.

The intuition Searle is pumping is that since he, as a component of the total system, doesn't understand Chinese, it seems counterintuitive to conclude that the whole system understands Chinese. When Searle says he is the system he is pointing to the fact that he is doing all the actual interpretation of instructions, and it seems weird to think that the whole system has some extra experiences that let it understand Chinese even though he does not. When Searle uses the word 'understand' he does not mean demonstrating the appropriate input/output behavior; he is presuming the system has that behavior and asking about the system's experiences.

Searle's view, from his philosophy of language, is that our understanding and meaning are grounded in our experiences, and what makes a person count as understanding (as opposed to merely dumbly parroting) Chinese is that they have certain kinds of experiences while manipulating the words. When Searle asserts the room doesn't understand Chinese he is asserting that it doesn't have the requisite experiences (because it's not having any experiences) that someone would need to have to count as understanding Chinese.

Look, I've listened to Searle explain this himself multiple times during the 2 years of graduate seminars on philosophy of mind I took with him and have discussed this very argument with him at some length. I'm sorry but you are interpreting him incorrectly.

I know I'm not making the confusion you suggest because I've personally talked with him at some length about his argument.

Comment by truepath on Sam Harris and the Is–Ought Gap · 2018-11-16T04:35:21.693Z · score: 22 (12 votes) · LW · GW

I essentially agree with you that science can't bridge the is-ought gap (see caveats below), but it's a good deal more complicated than the arguments you give here allow for (they are a good intro, but I felt it's worth pointing out the complexities).

  1. When someone claims to have bridged the is-ought gap they aren't usually claiming to have analytically identified (i.e. identified as a matter of definition) ought with some is statements. That's a crazily high bar, and modern philosophers (and Sam Harris was trained as a philosopher) tend to feel true analytic identities are rare but are not the only kind of necessary truths. For instance, the fact that "water is H2O" is widely regarded as a necessary truth that isn't analytic (do a search if you want an explanation), and there are any number of other philosophical arguments that are seen as establishing necessary truths which don't amount to the definitional relationship you demand.

I think the standard Harris is using is even weaker than that.

  2. You insist that to be an ought it must be motivating for the subject. This is a matter of some debate. Some moral realists would endorse this, while others would insist that it need only motivate certain kinds of agents who aren't too screwed up in some way. But I tend to agree with your conclusion; I just suggest it be qualified by saying we are presuming the standard sense of moral realism here.

  3. One has to be really careful with what you mean by 'science' here. One way people have snuck around the is-ought gap before is by using terms like 'cruel' which are kinda 'is' facts that bake in an ought (to be cruel requires that you immorally inflict suffering, etc.).

  4. It's not that Harris is purely embedded in some kind of dialectical tradition. He was trained as an analytic philosopher, and analytic philosophers invented the is-ought gap and are no strangers to the former mode of argumentation. It's more that Carroll is a physicist and doesn't know the terminology that would let him pin Harris down in terms he would understand and keep him from squirming off the point.

However, I'm pretty sure (based on my interaction with Harris, emailing him over what sounded like a similarly wrongheaded view in the philosophy of mind) that Harris would admit that he hasn't bridged Hume's is-ought gap as philosophers understand it, but would instead explain that he means to address the general public's sense that science has no moral insight to offer.

In that sense I think he is right. Most people don't realize how much science can inform our moral discussions...he's just being hyperbolic to sell it.

Comment by truepath on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:15:10.652Z · score: -3 (4 votes) · LW · GW

I agree with your general thrust except for your statement that "you longtermists can simply forgo your own pleasure wireheading and instead work very hard on the whole growth and reproduction agenda": if we are able to wirehead in an effective manner, it might be morally obligatory to force them into wireheading to maximize utility.

Comment by truepath on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:13:09.446Z · score: 1 (1 votes) · LW · GW

Also, your concern about some kind of disaster caused by wireheading addiction and resulting deaths and damage is pretty absurd.

Yes, people are more likely to do drugs when they are more available, but even if the government can't keep the devices that enable wireheading off the legal market, it will still require a greater effort to put together your wireheading setup than it currently does to drive to the right part of the nearest city (discoverable via Google) and purchase some heroin. Even if it did become very easy to access, it's still not true that most people who have been given the option to shoot up heroin do so, and the biggest factor which deters them is the perceived danger or harm. If wireheading is more addictive/harmful, that will itself discourage use.

Moreover, for wireheading to pose a greater danger than just going to buy heroin it would have to give greater control over brain stimulation (i.e. create more pleasure, etc.), and the greater our control over brain stimulation, the greater the chance we can exercise it in a way that doesn't create damage.

Indeed, any non-chemical means of brain stimulation is almost certain to be crazily safe, because once monitoring equipment detects a problem you can simply shut off the intervention, without the concern of long-half-life drugs remaining in the system and continuing the effect.

Comment by truepath on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:12:54.167Z · score: 1 (1 votes) · LW · GW

You make a lot of claims here that seem unsupported and based on nothing but vague analogy with existing primitive means of altering our brain chemistry. For instance, a key claim that pretty much all of your consequences seem to depend on is this: "It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels “too good”, one will be able only to perform “trainspotting”, that is mindless staring at objects."

Why should this be true at all? The reason heroin abusers aren't very productive (and, imo, heroin isn't the most pleasurable existing drug) is the effects opiates have as depressants, making users nod off, etc. The more control we achieve over brain stimulation, the less likely it is that wireheading will have the kind of side effects which limit functioning. Now, one might have a more subtle argument that the ability of even a directly stimulated brain to feel pleasure will be limited, and thus that if we directly stimulate too much pleasure we will no longer have the appropriate rewards to incentivize work; but it seems equally plausible that we will be able to separate pleasure from motivation/effort and actually enhance our inclination to work while instilling great pleasure.

Comment by truepath on Is Copenhagen or Many Worlds true? An experiment. What? Yes. · 2018-11-10T10:50:25.062Z · score: 3 (2 votes) · LW · GW

Skimming the paper I'm not at all impressed. In particular, they make frequent and incautious use of all sorts of approximations that are only valid up to a point under certain assumptions but make no attempt to bound the errors introduced or justify the assumptions.

This is particularly dangerous to do in the context of trying to demonstrate a failure of the 2nd law of thermodynamics, as the very thought experiments that might be useful will generally break those heuristics and approximations. Worse, the 2nd law is only a statistical regularity, not a true exceptionless regularity, so what one actually needs to show is that any apparent violation amounts to more than an expected statistical fluctuation.

Even worse, this piece seems to be trying to use a suspicious mixture of quantum and classical notions, e.g., using classical notions to define a closed system and then analyzing it as a quantum system.

Comment by truepath on In favor of tabooing the word “values” and using only “priorities” instead · 2018-10-26T23:42:49.529Z · score: 2 (2 votes) · LW · GW

Not everyone believes that everything is commensurable, and people often wish to be able to talk about these issues without implicitly presuming that fact.

Moreover, 'values' suggests something that is desirable because it is a moral good, whereas a priority can be something I just happen to selfishly want. For instance, I might hold diminishing suffering as a value, yet my highest current priority might be torturing someone to death because they killed a loved one of mine (having that priority is a moral failing on my part, but that doesn't make it impossible).

Comment by truepath on Nyoom · 2018-10-15T11:36:48.838Z · score: 2 (2 votes) · LW · GW

Two thoughts.

First, as a relatively in-shape person who walks a ton (no car, living in the Midwest), I can attest that I often wish I had a golf cart/scooter solution. They don't need to be a replacement for walking (though it's good that they can be); they might also appeal to those of us who like to walk a lot but need a replacement for a car when it gets really hot or when we need to carry groceries (motorcycle-style scooters require licenses and can't always be driven on campuses or in parks). It would be great if these became less socially disapproved of for the non-disabled.

Second, aren't there stable, fast scooters with decent torque and larger battery packs? Why do you think the crappy scooter will become super popular? Is it that much cheaper? Or are you just saying that even a crappy scooter provides these advantages?

Decision Theory Anti-realism

2018-10-10T00:08:33.242Z · score: 8 (6 votes)
Comment by truepath on Formal vs. Effective Pre-Commitment · 2018-09-19T11:56:26.970Z · score: 1 (1 votes) · LW · GW

Except, if you actually go try to do the work, people's pre-theoretic understanding of rationality doesn't correspond to a single precise concept.

Once you step into Newcomb-type problems it's no longer clear how decision theory is supposed to correspond to the world. You might be tempted to say that decision theory tells you the best way to act... but it no longer does that, since it's not that the two-boxer should have picked one box. The two-boxer was incapable of so picking, and what EDT is telling you is something more like: you should have been the sort of being who would have been a one-boxer, not that *you* should have one-boxed.

Different people will disagree over whether their pre-theoretic notion of rationality is one in which it is correct to say that it is rational to be a one-boxer or a two-boxer. Classic example of working with an imprecisely defined concept.

Comment by truepath on Realism about rationality · 2018-09-19T03:04:43.211Z · score: 7 (5 votes) · LW · GW

First, let me say I 100% agree with the idea that there is a problem in the rationality community of viewing rationality as something like momentum or gold (I named my blog rejectingrationality after this phenomenon and tried to deal with it in my first post).

However, I'm not totally sure everything you say falls under that concept. In particular, I'd say that rationality realism is something like the belief that there is a fact of the matter about how best to form beliefs or take actions in response to a particular set of experiences, and that there are many such facts (going far beyond 'don't be Dutch booked'), with the frequent additional belief that what is rational to do in response to various kinds of experiences can be inferred from a priori considerations, e.g., thinking about all the ways that rule X might lead you wrong in certain possible situations, so X can't be rational.

When I've raised this issue in the past, the response I've gotten from both Yudkowsky and Hanson is: "But of course we can try to be less wrong," i.e., have fewer false beliefs. And of course that is true, but that's a very different notion than the one used by rationality realists, and it misses the way that much of the rationality community's talk about rationality isn't about literally being less wrong but about classifying rules for reaching beliefs into rational and irrational even when they don't disagree in the actual world.

In particular, if all I'm doing is analyzing how to be less wrong, I can't criticize people who dogmatically believe things that happen to be true. After all, if God does exist, then dogmatically believing he does makes the people who do so less wrong. Similarly, the various critiques of human psychological dispositions as leading us to make wrong choices in some kinds of cases aren't sufficient if those cases are rare and cases where those dispositions yield better results are common. However, those who are rationality realists suggest that there is some fact of the matter which makes these belief-forming strategies irrational and thus appropriate to eschew and criticize. But, ultimately, aside from merely avoiding getting Dutch booked, no rule for belief forming can be assured of being less wrong than another in all possible worlds.

Comment by truepath on The "semiosis reply" to the Chinese Room Argument · 2018-09-19T02:30:36.864Z · score: 1 (1 votes) · LW · GW

If you want to argue against that piece of reasoning, give it a different name, because it's not the Chinese room argument. I took multiple graduate classes with Professor Searle and, while there are a number of details one could quibble over, Said definitely gets the overall outline correct, and the argument you advanced is not his Chinese room argument.

That doesn't mean we can't talk about your argument; just don't insist it is Searle's Chinese room argument.

Comment by truepath on Formal vs. Effective Pre-Commitment · 2018-09-19T02:28:13.390Z · score: 1 (1 votes) · LW · GW

To the extent they define a particular idealization, it's one which isn't interesting/compelling. What one would need, in order to say there was a well-defined question here, is a single definition of what a rational agent is that everyone agreed on, which one could then show favors such-and-such decision theory.

To put the point differently, you and I can agree on absolutely every fact about the world and mathematics and yet disagree about which is the best decision theory, because we simply mean slightly different things by 'rational agent.' Moreover, there is no clear practical difference which presses us to use one definition or another, unlike the practical usefulness of those aspects of the definition of a rational agent which yield the outcomes that all the theories agree on.

Comment by truepath on Formal vs. Effective Pre-Commitment · 2018-09-19T02:14:34.789Z · score: 1 (1 votes) · LW · GW

Obviously you can, and if you call that NEW idealization an X-agent (or, more likely, redefine the word 'rationality' for that situation), then there may be a fact of the matter about how an X-agent will behave in such situations. What we can't do is assume that there is a fact of the matter about what a rational agent will do that outstrips the definition.

As such, it doesn't make sense to say CDT or TDT or whatever is right before introducing a specific idealization relative to which we can prove it gives the correct answer. But that idealization has to come first, and has to convince the reader that it is a good idealization.

But the rhetoric around these decision theories misleadingly tries to convince us that there is some kind of pre-existing notion of rational agent and that they have discovered that XDT gives the correct answer for that notion. That's what makes people view these claims as interesting. If the claim were nothing more than 'here is one way you can make decisions corresponding to the following assumptions' it would be much more obscure and less interesting.

Comment by truepath on Four kinds of problems · 2018-09-19T02:07:34.987Z · score: 0 (2 votes) · LW · GW

Your criticism of the philosophy/philosophers is misguided on a number of counts.

1. You're basing those criticisms on the presentation in a video designed to present philosophy to the masses. That's like reading some phys.org article claiming that electrons can be in two locations at once and using that to criticize the theory of quantum mechanics.

2. The problem philosophers are interested in addressing may not be the one you are thinking of. Philosophers would never suggest that the assumption of logical omniscience prevents one from using Bayesianism as a practical guide to reasoning, or that it's not often a good idealization to treat degrees of belief as probabilities. However, I believe the question this discussion relates to is that of giving a theory that explains the fundamental nature of probability claims, and here the fact that we really aren't logically omniscient prevents us from identifying probabilities with something like rational degrees of belief (though that proposal has other problems too).

3. It's not like philosophers haven't put in plenty of effort looking for probability-like systems that don't presume logical omniscience. They have developed any number of them, but none seem particularly useful, and I'm not convinced that the paper you link about this will be much different (not that they are wrong, just not that useful).

Comment by truepath on Formal vs. Effective Pre-Commitment · 2018-08-27T21:53:35.605Z · score: 4 (3 votes) · LW · GW

In particular, I'd argue that the paradoxical aspects of Newcomb's problem result from exactly this kind of confusion between the usual agent idealization and the fact that actual actors (human beings) are physical beings subject to the laws of physics. The apparent paradoxical aspects result because we are used to idealizing individual behavior in terms of agents, where that formalism requires we specify the situation as a tree of possibilities, with each path corresponding to an outcome and with the payoff computed by looking at the path specified by all agents' choices (e.g. there is a node where the demon player chooses what money to put in the boxes, and then there is a node where the human player, without knowledge of the demon player's choices, decides to take both boxes or just one). The agent formalization (where one- or two-boxing is modeled as a subsequent choice) simply doesn't allow the content of the boxes to depend on whether the human agent chooses one or two boxes.
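
To make that concrete, here is a minimal sketch of the idealization being described (my own illustration, using the usual $1,000,000 / $1,000 Newcomb payoffs; nothing here is from the post being replied to):

```python
# Hypothetical sketch of the standard game-tree/normal-form idealization of
# Newcomb's problem: the demon moves first (filling or not filling the opaque
# box), the human moves second without seeing that move.

PAYOFFS = {
    # (demon_fills_opaque_box, human_choice): human's payoff in dollars
    (True, "one_box"): 1_000_000,
    (True, "two_box"): 1_001_000,
    (False, "one_box"): 0,
    (False, "two_box"): 1_000,
}

def payoff(demon_fills: bool, human_choice: str) -> int:
    """Payoff depends only on the pair of independent choices.

    Nothing in this formalism lets the box contents depend on which choice
    the human is going to make, which is exactly the dependency the Newcomb
    story asks for.
    """
    return PAYOFFS[(demon_fills, human_choice)]

# Within this idealization, two-boxing dominates for either fixed demon move:
assert payoff(True, "two_box") > payoff(True, "one_box")
assert payoff(False, "two_box") > payoff(False, "one_box")
```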

Of course, since actual people aren't ideal agents, one can argue that something like the Newcomb demon is physically possible, but that's just a way of specifying that we are in a situation where the agent idealization breaks down.

This means there is simply no fact of the matter about how a rational agent (or whatever) should behave in Newcomb-type situations, because the (usual) rational agent idealization is incompatible with the Newcomb situation (ok, more technically you can model it that way, but the choice of how to model it just unsatisfactorily builds in the answer by specifying how the payoff depends on one- vs. two-boxing).

To sum up, what the answer to the Newcomb problem is depends heavily on how you precisify the question. Are you asking whether humans who are disposed to decide in way A end up better off than humans disposed to behave in way B? In that case it's easy. But things like CDT, TDT, etc. don't claim to be producing facts of that kind but rather to be saying something about ideal rational agents of some kind, which then just boringly depends on ambiguities in what we mean by 'ideal rational agent.'

Comment by truepath on Formal vs. Effective Pre-Commitment · 2018-08-27T21:28:29.928Z · score: 1 (1 votes) · LW · GW

It doesn't really make sense to talk about the agent idealization at the same time as talking about effective precommitment (i.e. deterministic/probabilistic determination of actions).

The notion of an agent is an idealization of actual actors in terms of free choices, e.g., idealizing individuals in terms of choices of functions on game-theoretic trees. This idealization is incompatible with thinking of such actors as being deterministically or probabilistically committed to actions for those same 'choices.'

Of course, ultimately, actual actors (e.g. people) are only approximated by talk of agents, but if you try to simultaneously use the agent idealization while regarding those *same* choices as being effectively precommitted, you risk contradiction and model absurdity (of course you can decide to reduce the set of actions you regard as free choices in the agent idealization, but that doesn't seem to be the way you are talking about things here).

Comment by truepath on The "semiosis reply" to the Chinese Room Argument · 2018-08-19T10:49:50.876Z · score: 13 (4 votes) · LW · GW

You are getting the statement of the Chinese room wrong. The claim isn't that the human inside the room will learn Chinese. Indeed, it's a key feature of the argument that the person *doesn't* ever count as knowing Chinese. It is only the system consisting of the person plus all the rules written down in the room, etc., which knows Chinese. This is what's supposed to (but not convincingly, IMO) be an unpalatable conclusion.

Secondly, no one is suggesting that there isn't an algorithm that can be followed which makes it appear as if the room understands Chinese. The question is whether or not there is some conscious entity, corresponding to the system of the guy plus all the rules, which has the qualitative experience of understanding the Chinese words submitted, etc. As such, the points you raise don't really address the main issue.

Comment by truepath on Is there a practitioner's guide for rationality? · 2018-08-16T09:13:15.373Z · score: 0 (5 votes) · LW · GW

We can identify places we know (inductively) tend to lead us astray and even identify tricks that help us avoid being affected by common fallacies which often afflict humans. However, it's not at all clear if this actually makes us more rational in any sense.

If you mean act-rationality, we'd have to study whether this was a good life choice. If you mean belief rationality, you'd have to specify some kind of measure/notion of importance to decide when it really matters that you believed the true thing. After all, if it's just maximizing the number of times you believe the truth, the best way to be rational is just to memorize giant tables of dice rolls. If it's minimizing false beliefs, you might want to avoid forming any at all. Even if you find some more appropriate function to maximize, some beliefs obviously should count more than others. I mean, you don't want to spend your time memorizing lists of dice rolls and forget the facts about being killed by buses if you walk into the street.

But once you realize this point, then who knows. It could be the most rational thing in the world to have a totally dogmatic, evidence-unresponsive belief in the existence of some beardy dude in the sky, because it's the belief that matters the most and the rule "always believe in God Beard" will thus maximize getting important beliefs right.

I know what you mean. You mean something like avoiding the kind of fallacies that people who always talk about fallacies care about avoiding. But why should those be the most important fallacies to combat, etc.?

Comment by truepath on Cause Awareness as a Factor against Cause Neutrality · 2018-08-16T09:02:28.230Z · score: 4 (4 votes) · LW · GW

The inversely proportional thing is a bad move. Sorting through potential charitable causes is itself charitable work, and it's just crazy inefficient to do that by everyone voting on what tugs at their heartstrings rather than by paying someone smart to consider all the various pet causes and evaluate them. Worse, the causes that are least well known are often unknown for very good reason but will now get special attention.

The reason you are right about cases like the doctor example is that when you are actually in a location that then gets hit you *are* leveraging your superior knowledge of how to get things done there, or even just of understanding what's happened. Though, truthfully, the real reason it makes sense is the easier psychological motivation.

Comment by truepath on Cause Awareness as a Factor against Cause Neutrality · 2018-08-16T08:56:12.678Z · score: 2 (2 votes) · LW · GW

Note that your whole delegation argument rests on the idea that you have (and know you have) some kind of superior knowledge (or virtue) about what needs to get done and you're just searching for the best way to get it done. The reason it made sense to stay involved in the local campaign was because you had the special advantage of being the person who knew the right way to improve the city, so you could offer something more than any other equally virtuous person you might hand the money to instead.

In contrast, in the village case you *don't* have any special knowledge. If we just assigned everyone randomly to someone else who got to spend their charitable giving on the causes they favored (absent fraud), we would expect the world to be no better or worse. Just picking *any* reputable (easy to find online) EA charity or cause or even person and sending them all your money won't make things worse, and by amassing money from many people they avoid all the transaction costs of everyone trying to do the calculations.

Comment by truepath on Evidence For Simulation · 2018-08-15T15:18:09.519Z · score: 4 (2 votes) · LW · GW

Why assume whatever beings simulated us evolved?

Now I'm sure you're going to say, well, a universe where intelligent beings just pop into existence fully formed is surely less simple than one where they evolve. However, when you give it some more thought, that's not true, and it's doubtful whether Occam's razor even applies to initial conditions.

I mean, suppose for a moment the universe is perfectly deterministic (Newtonian or a no-collapse interpretation). In that case the Kolmogorov complexity of a world starting with a big bang that gives rise to intelligent creatures can't be much less than, and probably is much more than, that of one with intelligent creatures simply popping into existence fully formed. After all, I can always just augment the description of the big bang initial conditions with 'and then run the laws of physics for x years' when measuring the complexity.
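
Making that last step explicit (my own formalization of the argument; $K$ denotes Kolmogorov complexity and $x$ is the run time being specified):

$$K(\text{fully formed world}) \;\le\; K(\text{big bang initial conditions}) + K(x) + O(1),$$

since one valid description of the fully formed state is the big-bang description plus the instruction "run the laws of physics for $x$ years." So the big-bang world can be at most slightly simpler than the fully formed one, and may well be more complex.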

Comment by truepath on Evidence For Simulation · 2018-08-15T15:09:51.151Z · score: 1 (1 votes) · LW · GW

No, because we want the probability of being a simulation conditional on having complex surroundings not the probability of having complex surroundings conditional on a simulation. The fact that a very great number of simulated beings are created in simple universes doesn't mean that none is ever simulated in a complex one or tell us anything about whether being such a simulation is more likely than being in a physical universe.

Comment by truepath on Evidence For Simulation · 2018-08-15T15:01:42.159Z · score: 1 (1 votes) · LW · GW

Ok, this is a good point. I should have added a requirement that the true solution is $C^\infty$ on the part of the manifold that isn't in our temporal past. The backwards heat equation is ill-posed for this reason... it can't be propagated arbitrarily far forward (i.e. back).

Comment by truepath on Evidence For Simulation · 2018-08-15T14:54:09.749Z · score: 1 (1 votes) · LW · GW

Which way you think this goes probably depends on just how strongly you think Occam's razor should be applied. We are all compelled to let the probability of a theory's truth go to zero as its Kolmogorov complexity goes to infinity, but there is no prima facie reason to think it drops off particularly fast or slow. If you think, as I do, that there is only relatively weak favoring of simpler scientific laws, while intelligent creatures would quite strongly favor simplicity as a cognitive technique for managing complexity, you get my conclusion. But I'll admit the other direction isn't implausible.

Comment by truepath on Logical Counterfactuals & the Cooperation Game · 2018-08-15T14:46:06.547Z · score: 4 (2 votes) · LW · GW

The problem with this kind of analysis is that one is using the intuition of a physical scenario to leverage an ambiguity in what we mean by agent and decision.

Ultimately, the notions of decision and agent are idealizations. Any actual person or AI only acts as the laws of physics dictate, and agents, decisions, or choices don't appear in any description in terms of fundamental physics. Since people (and programs) are complex systems that often make relatively sophisticated choices about their actions, we introduce the idealization of agents and decisions.

That idealization is basically what one sees in the standard formulation of game theory in terms of trees, visibility conditions, and payoffs, with decisions simply being nodes on the tree and agents being a certain kind of function from visible outcomes and nodes to children of those nodes. The math is all perfectly clear and there is nothing paradoxical or troubling.

What makes it seem like there is a problem is when we redescribe the situation in terms of guarantees that the other player will have predicted your choice in a certain way, or the like. Formally, that doesn't really make sense... or at least it corresponds to a radically different game, e.g., restricting the tree so that only those outcomes are allowed. However, because we have this other non-formal notion of choice and agent stuck in our heads (a choice is something like picking what socks to wear, an agent is something like a person), we don't realize that our idealization just changed drastically even though in common language we are still playing the same game.

In other words there are no extra facts to be found about which decision theory is best. There are facts about what actual physical systems will do and there are mathematical facts about trees and functions on them but there isn't any room for further facts about what kind of decision theory is the true one.

Comment by truepath on I Want To Live In A Baugruppe · 2017-03-29T17:09:29.954Z · score: 0 (0 votes) · LW · GW

I worry such a plan will face significant legal hurdles. As suggested, the building would probably not fall into the exceptions to the federal Fair Housing Act (is that right?) for choosing roommates (it's not a single-family dwelling but a group of apartments in some sense).

But you EXACTLY want to choose who lives there based on political/religious beliefs (almost by definition it's impossible to be a rationalist and a dogmatic, unquestioning, conservative Christian). Also, by aspects of family makeup, in that you don't want people living in this community to import a large extended family to live with them if that family doesn't share the values/concerns of the other people living in the community.

Basically, I think the degree of control you want over who lives in the building may be incompatible with various non-discrimination laws. However, one could probably find 20 families that could jointly purchase the building as condos to avoid this problem.

But I don't see any way around these problems in the long run. As the original group breaks up it will be hard to replace them without legally problematic screening.

Comment by truepath on Say It Loud · 2016-02-14T19:50:57.779Z · score: 1 (1 votes) · LW · GW

Sorry, but you can't get around the fact that humans are not well equipped to compute probabilities. We can't even state what our priors are in any reasonable sense much less compute exact probabilities.

As a result using probabilities has come to be associated with having some kind of model. If you've never studied the question and are asked how likely you think it is there are intelligent aliens you say something like "I think it's quite likely". You only answer with a number if you've broken it down into a model (chance life evolves average time to evolve intelligencechance of disaster*..).

Thus, saying something like "70% chance" indicates to most people that you are claiming your knowledge is the result of some kind of detailed computation and can thus be seen as an attempt to claim authority. You can't change this rule on your own.

Thankfully, there are easy verbal alternatives. "Ehh, I guess I would give 3:1 odds on it" and many others. But use of chance/probability language isn't it.

Comment by truepath on The Temptation to Bubble · 2015-10-11T21:56:34.516Z · score: 1 (3 votes) · LW · GW

Uhh, why not just accept that you aren't and can never be perfectly rational, and use those facts in positive ways?

Bubbles are psychologically comforting and help generate communities. Rationalist bubbling (which ironically includes the idea that they don't bubble) probably does more to build the community and correct other wrong beliefs than almost anything else.

Until and unless rationalists take over society, the best strategy is probably just to push for a bubble that actively encourages breaking other (non-rationalist) bubbles.

Comment by truepath on A Cost- Benefit Analysis of Immunizing Healthy Adults Against Influenza · 2014-11-14T22:35:23.668Z · score: 0 (0 votes) · LW · GW

So the equations should be (using the definition of vaccine efficacy from Wikipedia):

0.6 * p(sick2) = p(sick2) - p(sick1), i.e. p(sick1) - 0.4 * p(sick2) = 0

since efficacy is the difference between the unvaccinated and vaccinated rates of infection divided by the unvaccinated rate. You have to assume there is no selective pressure in terms of who gets the vaccine (they have the same risk pool as the normal population for flu, which is surely untrue) to get your assumption that

0.42 * p(sick1) + 0.58 * p(sick2) = 0.1, i.e. p(sick1) + 1.38 * p(sick2) = 0.238

so, substituting p(sick1) = 0.4 * p(sick2),

1.78 * p(sick2) = 0.238

p(sick2) = 0.13 (weird, I'm getting a different result) and p(sick1) = 0.05

Did I solve wrong or did you? I do math, so I can't actually manipulate numbers very well, but I'm not seeing the mistake.
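
For what it's worth, here is a quick numerical sanity check of the algebra above (a minimal sketch, using only the numbers already assumed in this thread: 60% efficacy, 42% of people vaccinated, 10% overall infection rate):

```python
import numpy as np

# Unknowns: p1 = P(sick | vaccinated), p2 = P(sick | unvaccinated).
# Efficacy equation:      p1 - 0.4 * p2 = 0
# Overall-rate equation:  0.42 * p1 + 0.58 * p2 = 0.1
A = np.array([[1.00, -0.40],
              [0.42, 0.58]])
b = np.array([0.0, 0.1])

p1, p2 = np.linalg.solve(A, b)
print(f"p(sick | vaccinated)   = {p1:.3f}")  # roughly 0.053
print(f"p(sick | unvaccinated) = {p2:.3f}")  # roughly 0.134
```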

Comment by truepath on A Cost- Benefit Analysis of Immunizing Healthy Adults Against Influenza · 2014-11-14T22:12:34.178Z · score: 1 (1 votes) · LW · GW

Not with respect to their revealed preferences for working in high-risk jobs, I understand. There are a bunch of economics papers on this, but it was a surprisingly low number.

Comment by truepath on The Truth and Instrumental Rationality · 2014-11-14T20:01:20.231Z · score: 0 (0 votes) · LW · GW

Well, it can't still be instrumental rationality anymore. I mean, suppose the value being minimized is overall suffering and you are offered a one-time threat (with non-zero probability, and you know there are no other possible infinitary outcomes) that if you don't believe some false claim X, God will create an infinite amount of suffering. You know, before the choice to believe the false claim, that no effect of believing it will increase expected suffering enough to overwhelm the harm of not believing it.


But the real rub is: what do you say about the situation where the rocks turn out to be rocks cleverly disguised as people? You still have every indication that your behavior in convincing yourself is an attempt to believe a false statement, but it is actually true.

Does the decision procedure which says whatever you want it to normally say, but makes a special exception that you can deceive yourself if (description of this situation which happens to identify it uniquely in the world), count as better?

In other words, is it a relation to truth that you demand? In which case the rule gets better whenever you make exceptions that happen (no matter how unlikely it is) in the actual world to generate true and instrumentally useful beliefs. Or is it some notion of approaching the evidence?

If the latter, you seem to be committed to the existence of something like Carnap's logical probability, i.e., something deducible from pure reason that assigns priors to all possible theories of the world. This is a notoriously unsolvable problem (in the sense that it doesn't have a solution).

At the very least, can you state some formal conditions constraining a rule for deciding between actions (or however you want to model it) that capture the constraint you want?

Comment by truepath on 9/26 is Petrov Day · 2014-11-14T12:29:42.494Z · score: 1 (1 votes) · LW · GW

Given that he would be dead otherwise (and the strong human survival drive) I don't see how the incentives are perverse.

I mean, to make the incentives positive for pushing the button requires some really strong conditioning or torture threats.

Utilitarianism and Relativity Realism

2014-06-23T19:12:05.211Z · score: -3 (27 votes)
Comment by truepath on Evidence For Simulation · 2014-06-22T01:00:37.844Z · score: 1 (1 votes) · LW · GW

No, since experientially we already know that we don't perceive the world as if all Everett branches are computed.

In other words, what is up for discovery is not 'not all Everett branches are fully realized'... that's something which, from our apparent standpoint as belonging to a single such branch, we could never actually know. All we could discover is that the collapse of the wavefunction is observable inside our world.

In other words, nothing stops the aliens from simply not computing plenty of Everett branches while leaving no trace in our observables to tell us that only one branch is actually real.

Comment by truepath on Evidence For Simulation · 2014-06-22T00:57:10.701Z · score: 0 (0 votes) · LW · GW

I tried to avoid assuming this in the above discussion. You are correct that I do assume that the physics of the simulating world has two properties.

1) Effective physical computation (for the purposes of simulation) is the result of repeated, essentially finite decisions. In other words, the simulating world does not have access to an oracle that vastly aids in the computation of the simulated world. That is, they aren't simulating us by merely measuring when atoms decay in their world, with that just happening to give them data about a coherent lawlike physical reality.

I don't think this is so much an assumption as a definition of what it means to be simulated. If the description of our universe is embedded in the natural laws of the simulating universe we aren't so much a simulation as just a tiny part of the simulating universe.

2) I do assume that serial computation is more difficult to perform than parallel computation, i.e., information can't be effectively transmitted infinitely fast in the simulating universe. 'Effectively' is an important caveat there, since even a world with an infinite speed of light would ultimately have to rely on signals from sufficiently far off to avoid detection problems.

This is something that is surely plausible. Maybe it isn't true. THAT IS WHY I DON'T CLAIM THESE CONSIDERATIONS CAN EVER GIVE US A STRONG REASON TO BELIEVE WE AREN'T A SIMULATION. I do think they could give us strong reasons to believe we are.

Comment by truepath on On Terminal Goals and Virtue Ethics · 2014-06-22T00:39:28.085Z · score: -6 (6 votes) · LW · GW

"Good people are consequentialists, but virtue ethics is what works,"

To nit-pick a little, I don't think consequentialism even allows one to coherently speak about good people, and it certainly doesn't show that consequentialists are such people (the standard example of the alien who tortures people when it finds consequentialists).

Moreover, I don't believe there is any sense in which one can show that people who aren't consequentialists are making some mistake, or even that people who value other consequences are doing so. You tacitly admit this with your examples of paper-clip-maximizing aliens, and I doubt you can coherently claim that those who assert that virtue ethics is objectively correct are any less rational than those who assert that consequentialism is correct.

You and I both judge non-consequentialists to be foolish but we have to be careful to distinguish between simply strongly disapproving of their views and actually accusing them of irrationality. Indeed, the actions prescribed by any non-consequentialist moral theory are identical to those prescribed by some consequentialist theory (every possible choice pattern results in a different total world state so you can always order them to give identical results to whatever moral theory you like).

Given this point, I think it is a little dangerous to speak at the meta-level. I mean, ideally one would simply say, 'I think hedonic (or whatever) consequentialism is objectively true regardless of what is pragmatically useful.' Unfortunately, it's very unclear what the 'truth' of consequentialism even consists of if those who follow a non-consequentialist moral theory aren't logically incorrect.

Pedantically speaking, it seems the best one can do is say that, when given the luxury of considering situations you aren't emotionally close to and have time to think about, you will apply consequentialist reasoning that values X to recommend actions to people, and that in such moods you do strive to bind your future behavior as that reasoning demands.

Of course that too is still not quite right. Even in a contemplative mood we rarely become totally selfless, and I doubt you (any more than I) actually strive to bind yourself so that, given the choice, you would torture and kill your loved ones to help n+1 strangers avoid the same fate (assuming those factors aren't relevant to the consequences you say you care about).

Overall it's all a big mess and I don't see any easy statements that are really correct.

Comment by truepath on The Problem with AIXI · 2014-03-19T11:52:04.179Z · score: 0 (0 votes) · LW · GW

I'd also like to point out the Cartesian barrier is actually probably a useful feature.

It's not objectively true in any sense, but the relation between external input, output, and effect is very, very different from that between internal input (changes to your memories, say), output, and effect. Indeed, I would suggest there was a very good reason that we took so long to understand the brain. It would be just too difficult (and perhaps impossible) to do so at a direct level, the way we understand receptors being activated in our eyes (yes, all that visual crap we do is part of our understanding).

Take your example of a sensor aimed at the computer's memory circuit. Unlike in almost every other situation, there are cases that it can't check its hypothesis against, because such a check would be logically incoherent. In other words, certain theories (or at least representations of them) will be diagonalized against, because the very experiments you wish to do can't be carried out: that 'intention' itself modifies the memory cells in such a way as to make the experiment impossible.

In short, the one thing we do know is that assuming we are free to choose from a wide range of actions independently of the theory we are trying to test, and that how we came to choose that action is irrelevant, is an effective strategy for understanding the world. It worked for us.

Once the logic of decision making is tightly coupled with the observations themselves the problem gets much harder and may be insoluble from the inside, i.e., we may need to experiment on others and assume we are similar.

Comment by truepath on The Problem with AIXI · 2014-03-19T11:42:02.920Z · score: -1 (3 votes) · LW · GW

This is a debate about nothing. Turing completeness tells us that, no matter how much it appears that a given Turing-complete representation can only usefully process data about certain kinds of things, in reality it can process data about anything any other language can.

Well, duh, but this (and the halting problem) have been taught and yet systematically ignored in programming language design, and this is exactly the same argument.

We are sitting around in the armchair trying to come up with a better means of logic/data representation (be it a programming language or the underlying AI structure) as if the debate were about mathematical elegance or some such objective notion. Until you prove otherwise, the likely scenario is that any system in AIXI can duplicate the behavior (modulo semantic changes as to what we call a punishment) that the other system can, and vice versa.

So what would make one model for AI better than another? These vague theoretical issues? No, no more than how fancy your type system is determines the productiveness of your programming language. Ultimately, the hurdle to overcome is that HUMANS need to build and reason about these systems, and we are more inclined to certain kinds of mistakes than others. For instance, I might write a great language using the full calculus of inductive constructions as a type system, and still do type inference almost everywhere, but if my language looks like line noise rather than human words all that math is irrelevant.

I mean, ask yourself why human programming and genetic programming are so different. Because what model you use to build up your system has a far greater impact on your ability to understand what is going on than any other effect. Sure, if you write in pure assembly, JMPs everywhere, with crazy code-packing tricks, it goes faster, but you still lose.

If I'm right, this case as well can only be decided by practical experiments where you have people try to reason in (simplified) versions of the systems and see what can and can't be easily fixed.

Comment by truepath on Best of Rationality Quotes, 2013 Edition · 2014-02-01T10:12:19.297Z · score: 0 (2 votes) · LW · GW

But it really should be from Carnap.

Comment by truepath on Results from MIRI's December workshop · 2014-01-30T04:44:46.564Z · score: 0 (0 votes) · LW · GW

I meant useful in the context of AI, since any such sequence would obviously have to be non-computable and thus not something the AI (or person) could make pragmatic use of.

Also, it is far from clear that T_0 is the union of all the theories (and this is the problem with the proof in the other writeup). It may well be that there is a sequence of theories like this, all true in the standard model of arithmetic, but that their construction requires that T_n add extra statements beyond the schema for the proof predicate of T_{n+1}.

Also, the claim that T_n must be stronger than T_{n+1} (prove a superset of it... to be computable we can't take all these theories to be complete) is far from obvious if you don't require that T_n be true in the standard model. If T_n is true in the standard model then, since it proves Pf(T_{n+1}, \phi) -> \phi, this implication is true; so if T_{n+1} |- \phi, then (as this is witnessed by a finite proof) there is a proof from T_n that Pf(T_{n+1}, \phi) holds, and thus a proof of \phi. However, without this assumption I don't even see how to prove the containment claim.
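
Spelling out that containment argument in one place (my reconstruction of the step being gestured at, assuming each $T_n$ contains enough arithmetic to verify concrete finite proofs):

$$T_{n+1} \vdash \phi \;\Longrightarrow\; T_n \vdash \mathrm{Pf}_{T_{n+1}}(\ulcorner\phi\urcorner) \;\Longrightarrow\; T_n \vdash \phi,$$

where the first implication is $\Sigma_1$-completeness applied to the concrete proof and the second uses the schema $\mathrm{Pf}_{T_{n+1}}(\ulcorner\phi\urcorner) \rightarrow \phi$ available in $T_n$.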

Comment by truepath on Results from MIRI's December workshop · 2014-01-29T15:56:05.935Z · score: 2 (2 votes) · LW · GW

Actually, the `proof' you gave that no true list of theories like this exists made the assumption (not listed in this paper) that the sequence of indices for the computable theories is definable over arithmetic. In general there is no reason this must be true, but of course for the purposes of an AI it must be.

Ultimately, you can always collapse any computable sequence of computable theories (which is what the AI would need in order to even manipulate them) into a single computable theory, so there was never any hope this kind of sequence could be useful.

Comment by truepath on Dangers of steelmanning / principle of charity · 2014-01-29T15:32:52.664Z · score: 0 (0 votes) · LW · GW

It seems to me there are two separate issues.

1) Do you act like other people actually SAID the better argument (or interpretation of that argument) that you can put in their mouth?

2) Do you suggest the better alternative in debates and discussions of the idea before arguing against it?


2 is certainly a good idea, while all the problems come from item 1. Indeed, I would suggest that both parties do best when everyone ACTS LIKE OTHER PEOPLE SAID WHATEVER YOU JUDGE TO BE MOST LIKELY THEY ACTUALLY INTENDED TO SAY. So you don't jump on them for misspeaking, nor do you pretend they argued for some straw-man position. However, everyone benefits the most when they learn why what they actually argued wasn't right (especially if you offer a patched version when available).

This way people actually learn when they make erroneous arguments but the best arguments on each side are still addressed.

Comment by truepath on Dangers of steelmanning / principle of charity · 2014-01-29T15:26:32.188Z · score: 0 (0 votes) · LW · GW

Indeed, I think a huge reason for the lack of useful progress in philosophy is too much charity.

People charitably assume that if they don't fully understand something (and aren't themselves an expert in the area) the person advancing the notion is likely contributing something of value that they just don't understand yet.

This is much of the reason for the continued existence of continental philosophy drivel like claims that set theory entails morality, or the deeply confused erudite crap in Being and Time. Anyone who isn't actually an expert in this kind of philosophy feels it would be uncharitable (or at least seem uncharitable) to get up and denounce it as the pseudo-philosophical mumbo-jumbo it is. It may seem harmless, but the existence of this kind of stuff within the boundaries of philosophy means that less extreme but still wrong views also aren't cut out.

Charity is more directly harmful within analytic philosophy (logic/math-based philosophy, as opposed to continental nonsense), where people frequently make the naive assumption that various theories, e.g., the definite description theory of reference and the baptismal naming theory of reference, are somehow either right or wrong, and argue for these positions just as they would argue for claims about the fundamental theory of physics. Yet more sophisticated philosophers have frequently realized this entire naive realism viewpoint is flawed. There isn't a real thing, meaning; there is just speech and writing, and thus these theories can only be taken as theoretical tools that help provide a useful framework for organizing patterns observed in speech acts, and despite their incompatible assumptions they can both be useful as approximations.

Unfortunately, I have observed time and time again that in situations like this the insight isn't passed on since it would be uncharitable to assume the philosophers who publish in this manner aren't really just debating which is a better approximation to help organize patterns in speech/writing.

Similarly charity stops people from being called out when they continue to wrestle in print with problems (surprise quiz etc..) that have a clear correct solution that was given decades ago since it would be uncharitable to assume (as it true) they simply don't have a good grip on the way mathematics can be applied or fails to apply to real world situations.

Evidence For Simulation

2012-01-27T23:07:42.694Z · score: 14 (17 votes)