Comment by j_thomas_moros on To first order, moral realism and moral anti-realism are the same thing · 2019-06-21T14:39:29.394Z · score: 3 (2 votes) · LW · GW

I can parse your comment a couple of different ways, so I will discuss multiple interpretations, but forgive me if I've misunderstood.

If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn't change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.

If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experiencing 3^^^3 dust specks), then my intuition considers that a different situation. A single individual experiencing that many dust specks seems to amount to torture. Indeed, it may be worse than 50 years of regular torture because it may consume many more years to experience them all. I don't think of that as "moral learning" because it doesn't alter my position on the former case.

If I have to try to explain what is going on here in a systematic framework, I'd say the following (a sketch of the second point follows this list):

  1. Splitting up harm among multiple people can be better than applying it all to one person. For example, one person stubbing a toe on two different occasions is marginally worse than two people each stubbing one toe.
  2. Harms/moral offenses may separate into different classes such that no amount of a lower class can rise to match a higher class. For example, there may be no number of rodent murders that is morally worse than a single human murder.
  3. Duration of harm can outweigh intensity. For example, imagine mild electric shocks that are painful but don't cause injury, and suppose that receiving one followed by another doesn't make the second any more physically painful. Some slightly more intense shocks over a short time may be better than many more mild shocks over a long time. This consideration comes in when weighing 50 years of torture against 3^^^3 dust specks experienced by one person, though the evaluation is much harder to make.
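Here is a minimal sketch of the second point, the idea that harms fall into classes that never trade off against each other. The representation (a class rank plus a within-class magnitude, compared lexicographically) is my own illustrative choice, not something from the original discussion.

```python
# Illustrative sketch: each harm is a (severity_class, magnitude) pair, and
# outcomes are compared lexicographically by class, so no quantity of a
# lower-class harm (dust specks) ever outweighs a higher-class harm (torture).

def total_harm(harms):
    """Sum magnitudes within each severity class; classes stay separate."""
    totals = {}
    for severity_class, magnitude in harms:
        totals[severity_class] = totals.get(severity_class, 0.0) + magnitude
    return totals

def worse_than(a, b):
    """Outcome a is worse than b if it has more harm in the highest
    severity class where the two outcomes differ."""
    for c in sorted(set(a) | set(b), reverse=True):
        if a.get(c, 0.0) != b.get(c, 0.0):
            return a.get(c, 0.0) > b.get(c, 0.0)
    return False

torture = total_harm([(2, 1.0)])          # a single class-2 harm
specks = total_harm([(1, 1.0)] * 10**6)   # a great many class-1 harms
print(worse_than(torture, specks))        # True, no matter how many specks
```

Under this comparison the number of specks can grow without bound and the torture outcome still ranks worse, which matches the intuition in the second point.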

Those explanations feel a little like confabulations and rationalizations. However, they don't seem to be any more so than a total utilitarian or average utilitarian explanation for some moral intuitions. They do, however, give some intuition for why a simple utilitarian approach may not be the "obviously correct" moral framework.

If I failed to address the "aggregation argument," please clarify what you are referring to.

Comment by j_thomas_moros on To first order, moral realism and moral anti-realism are the same thing · 2019-06-06T16:31:27.059Z · score: 1 (1 votes) · LW · GW

At least as applied to most people, I agree with your claim that "in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar." As a moral anti-realist myself, I think the likely explanation is that both groups are engaging in the kind of moral reasoning that evolution wired into them. The realist and the anti-realist are then offering post hoc explanations for their behavior.

With any broad claim about humans like this, there are bound to be exceptions; hence all the qualifications you put into your statement. I think I am one of those exceptions among moral anti-realists, though I don't believe that in any way invalidates your "Argument A." If you're interested in hearing about a different kind of moral anti-realist, read on.

I'm known in my friend circle for advocating that rationalists should completely eschew the use of moral language (except as necessary to communicate with or manipulate people who do use it). I often find it difficult to have discussions of morality with both moral realists and anti-realists, and I don't often find that I "can continue to have conversations and debates that are not immediately pointless." I often find people who claim to be moral anti-realists engaging in behavior and argument that seem antithetical to an anti-realist position. For example, they exhibit intense moral outrage and think it justified/proper (especially when they will never express that outrage to the offender, only to disinterested third parties). If someone engages in a behavior that you would prefer they not, the question is how you can modify their behavior. It doesn't make sense to get angry simply because others do what they want and what they want differs from what you want. Likewise, it doesn't make sense to get mad at others for not behaving according to your moral intuitions (except possibly in their presence, as a strategy for changing their behavior).

To a great extent, I have embraced the fact that my moral intuitions are an irrational set of preferences that don't have to be, and never will be, made consistent. Why should I expect my moral intuitions to be any more consistent than my preferences for food or for whom I find physically attractive? I won't claim I never engage in "moral learning," but it is significantly reduced and more often takes the form of learning that I had mistaken beliefs about the world than of changing moral categories. When debating the torture vs. dust specks problem with friends, I came to the following answer: I prefer dust specks. Why? Because my moral intuitions are fundamentally irrational, and I predict I would be happier with the dust specks outcome. I fully recognize that this is inconsistent with my other intuition that harms are somehow additive, and with the clear math that any strictly increasing, unbounded function for combining the harm from dust specks admits a number of people receiving dust specks in their eyes whose total harm tallies to significantly more than the torture. (Though there are other functions for calculating total utility, such as bounded ones, that can lead to the dust specks answer.)
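To make the aggregation math concrete, here is a small sketch with made-up numbers (the harm values and the particular bounded function are my own assumptions, not anything from the original debate): an unbounded additive sum of speck-harms eventually exceeds the torture-harm, while a strictly increasing but bounded aggregator never does.

```python
import math

TORTURE_HARM = 1_000_000.0  # hypothetical disutility of 50 years of torture
SPECK_HARM = 1e-6           # hypothetical disutility of a single dust speck

def additive_total(n_specks):
    # Strictly increasing and unbounded: for large enough n it exceeds any fixed harm.
    return n_specks * SPECK_HARM

def bounded_total(n_specks, cap=1_000.0):
    # Strictly increasing but capped: asymptotically approaches `cap`,
    # so it can never exceed the torture harm when cap < TORTURE_HARM.
    return cap * (1.0 - math.exp(-n_specks * SPECK_HARM / cap))

n = 10**15  # stand-in for an unimaginably large number (3^^^3 won't fit)
print(additive_total(n) > TORTURE_HARM)  # True: the additive sum overtakes torture
print(bounded_total(n) > TORTURE_HARM)   # False: the bounded total stays below it
```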

Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence

2018-01-18T17:18:20.151Z · score: 16 (5 votes)
Comment by j_thomas_moros on Military AI as a Convergent Goal of Self-Improving AI · 2017-11-13T15:23:49.836Z · score: 2 (2 votes) · LW · GW

Not going to sign up with some random site. If you are the author, post a copy that doesn't require signup.

Comment by j_thomas_moros on How Popper killed Particle Physics · 2017-11-08T18:18:02.457Z · score: 12 (3 votes) · LW · GW

I think moving to frontpage might have broken it. I've put the link back on.

How Popper killed Particle Physics

2017-11-07T16:55:21.727Z · score: 18 (10 votes)
Comment by j_thomas_moros on Problems as dragons and papercuts · 2017-11-03T15:22:34.047Z · score: 0 (0 votes) · LW · GW

I'm not sure I agree. Sure, there are lots of problems of the "papercut" kind, but I feel like the problems that concern me the most are much more of the "dragon" kind. For example:

  1. There are lots of jobs in my career field in my city, but there don't seem to be any that do even one of the following: involve truly quality work, work with the latest technology where I think the field is going, or produce a product/service that I care about. I'm not saying I can't get those jobs; I'm saying that in 15+ years working in this city I've never even heard of one. I could move across the country and it might solve the job problem, but leaving my family and friends is a "dragon".
  2. Meeting women I want to date seems to be a dragon problem. I have only ever met two women who meet my criteria.
  3. I have projects I'd like to accomplish that will take many thousands of hours each. Given the constraints of work, socializing, self-care, and trying to meet a girlfriend (see item 2), I'm looking at a really, really long time before any of these projects nears completion, even if I were able to be super dedicated about devoting a couple of hours a day to them, which I have not been.
Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:46:10.823Z

What is going on here? Copy me

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:45:56.313Z

Copy me

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:39:51.637Z

[Yes](http://hangouts.google.com)

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:29:28.628Z

*hello*

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:28:47.655Z

http://somewhere.com

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:28:07.802Z

Can I write a link here [Yes](http://hangouts.google.com)

Comment by j_thomas_moros on logic puzzles and loophole abuse · 2017-10-05T19:04:43.831Z · score: 2 (2 votes) · LW · GW

You should probably clarify that your solution assumes the variant where the god's head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn't have to be the case. The god could be suicidal, but perhaps unable to die in any other way, and so, given the opportunity you provide to have their head explode, they will take it.

Additionally, I think it would be clearer if you could offer a final English-sentence statement of the complete question that doesn't involve self-referential variables. The variable formulation is helpful for seeing the structure but confusing in other ways.

Comment by J_Thomas_Moros on [deleted post] 2017-10-03T15:55:21.141Z

Oh, sorry

Comment by J_Thomas_Moros on [deleted post] 2017-10-03T15:45:51.890Z

A couple typos:

  1. The date you give is "(11/30)"; it should be "(10/30)".

  2. "smedium" should be "medium"

Comment by j_thomas_moros on Discussion: Linkposts vs Content Mirroring · 2017-10-03T14:42:13.115Z · score: 8 (3 votes) · LW · GW

I feel strongly that link posts are an important feature that needs to be kept. There will always be significant and interesting content created on non-rationalist or mainstream sites that we will want to be able to link to and discuss on LessWrong. Additionally, while we might hope that all rationalist bloggers would be okay with cross-posting their content to LessWrong, there will likely always be those who don't want to, and yet we may want to include their posts in the discussion here.

Comment by J_Thomas_Moros on [deleted post] 2017-09-24T15:44:34.202Z

A comment of mine

Comment by j_thomas_moros on Open thread, September 18 - September 24, 2017 · 2017-09-20T19:56:28.571Z · score: 1 (1 votes) · LW · GW

What you label "implicit utility function" sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.

I'm not familiar with the pig that wants to be eaten, but I'm not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires their death, then there is no conflict; that is the optimal choice. I think humans who believe they have such a utility function are usually mistaken, but that is a much more involved discussion.

I'm not sure what the point of a dynamic utility function is. Your values really shouldn't change. I feel like you may be focused on instrumental goals, which can and should change, and thinking of those as part of the utility function when they are not.

Comment by j_thomas_moros on LW 2.0 Strategic Overview · 2017-09-18T17:41:56.960Z · score: 1 (1 votes) · LW · GW

I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things just because I am moderating more heavily. For example, on comments on my own post I care more and am more likely to both upvote and downvote, whereas at other times I might just not care that much.

Comment by j_thomas_moros on 2017 LessWrong Survey · 2017-09-17T04:26:32.103Z · score: 18 (18 votes) · LW · GW

I have completed the survey and upvoted everyone else on this thread.

Comment by j_thomas_moros on Is there a flaw in the simulation argument? · 2017-09-01T22:59:40.754Z · score: 2 (2 votes) · LW · GW

There is a flaw in your argument. I'm going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.

Your conclusions about scenarios 1, 2 and 3 are correct.

You state that Bostrom's disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion, that "the principle of indifference leads us to believe that we are not in a simulation," which, as I'll argue below, is incorrect. Your disjunct should properly be stated as something like: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations, and we do run a large number of ancestral simulations, but we do so in a way that keeps the number of simulated people well below the number of real people at any given moment. Stated that way, it is clear that Bostrom's (iii) is meant to include that outcome. Bostrom's argument is predicated only on the number of ancestral simulations, not on whether they are run in parallel or sequentially, nor on how much time they are run over. The reason Bostrom includes your (iv) in (iii) is that it doesn't change the logic of the argument. Let me now explain why.

For the sake of argument, let's split (iii) into two cases, (iii.a) and (iii.b). Let (iii.a) be all the futures in (iii) not covered by your (iv); for convenience, I'll refer to this as "parallel," even though there are cases in (iv) where some simulations could be run in parallel. Then (iii.b) is equivalent to your (iv); for convenience, I'll refer to this as "serial," even though, again, it might not be strictly serial. I think we agree that if the future were guaranteed to be (iii.a), then we should bet we are in a simulation.

First, even if you were right about (iii.b), I don't think it invalidates the argument. Essentially, you have just added another case similar to (ii), and it would still be the case that there are many more simulated people than real people because of (iii.a), so we should bet that we are in a simulation.

Second, if the future is actually (iii.b), we should still bet we are in a simulation just as much as with (iii.a). At several points you appeal to the principle of indifference, but you are vague on how it should be applied. Let me give a framework for thinking about this. What is happening here is that we are reasoning under indexical uncertainty. In each of your three scenarios and in the simulation argument, there is uncertainty about which observer we are. Your statement that by the principle of indifference we should conclude something is really an application of the SSA (self-sampling assumption), which says we should reason as if we are a randomly chosen observer. In Bostrom's terms, you are uncertain which observer in your reference class you are. To make sure we are on the same page, let me go through your scenarios using this approach.

Scenario 1: You are not sure if you are in room X or room Y, so the set of all people currently in rooms X and Y is your reference class. You reason as if you could be a randomly selected one of them, so you have a 1000 to 1 chance of being in room X.

Scenario 2: You are told about the many people who have been in room Y in the past. However, they are in your past. You have no uncertainty about your temporal index relative to them, so you do not add them to your reference class, and you reason the same as in scenario 1. Bostrom's book is weak here in that he doesn't give very good rules for selecting your reference class. I'm arguing that one of the criteria is that you have to be uncertain whether you could be that person. For example, you know you are not one of the many people not currently in room X or Y, so you don't include them in your reference class. Your reference class is the set of people relative to whom you are unsure of your index.

Scenario 3: This one is trickier to reason correctly about. I think you are wrong when you say that the only relevant information here is diachronic information. You know you are now in room Z, which contains 1 billion people who passed through room Y and 10,000 people who passed through room X. Your reference class is the people in room Z. You don't have to reason about the temporal information or the fact that at any given moment there was only one person in room Y but 1,000 people in room X. Passing through room X or Y is now only a property of the people in room Z. This is equivalent to telling you that you are blindfolded in a room with 1 billion people wearing red hats and 10,000 people wearing blue hats and asking which hat color you should bet you are wearing. Reasoning with the people in room Z as your reference class, you correctly give yourself a 1 billion to 10,000 chance of having passed through room Y.

In (iii.b), you are uncertain whether you are in a simulation or in reality. But if you are in a simulation, you are also uncertain where you are chronologically relative to reality. Thus, if a pair of simulations were run in sequence, you would be unsure whether you were in the first or the second. You have both spatial and temporal uncertainty; you aren't sure what the proper "now" is. Your reference class includes everyone in the historical reality as well as everyone in all the simulations. Given that reference class, you should reason that you are in a simulation (assuming many simulations are run). It doesn't matter that those simulations are run serially, only that many of them are run. Your reference class isn't limited to the current simulation and the current reality because you aren't sure where you are chronologically relative to reality.
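As a minimal sketch of this counting, assuming SSA-style reasoning over that whole reference class and using illustrative population numbers of my own choosing:

```python
def p_simulated(n_real, n_per_simulation, n_simulations):
    """Probability of being simulated under SSA-style counting: you reason
    as a random member of the full reference class, i.e. everyone in
    historical reality plus everyone in every simulation, regardless of
    whether those simulations run in parallel or one after another."""
    n_simulated = n_per_simulation * n_simulations
    return n_simulated / (n_simulated + n_real)

# Illustrative numbers: ~100 billion people in historical reality and
# 1,000 ancestral simulations of comparable size.
print(p_simulated(n_real=100e9, n_per_simulation=100e9, n_simulations=1_000))
# ~0.999: the serial vs. parallel distinction never enters the calculation.
```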

With regard to SIA vs. SSA: I can't say that they make any difference to your position, because the problem is that you have chosen the wrong reference class. In the original simulation argument, SIA vs. SSA makes little or no difference because, presumably, the number of people living in historical reality is roughly equal to the number of people living in any given simulation. SIA only changes the conclusions when one outcome contains many more observers than the other. Here we treat each simulation as a different possible outcome, and so the two assumptions agree.

Comment by j_thomas_moros on Is there a flaw in the simulation argument? · 2017-09-01T21:38:25.777Z · score: 0 (0 votes) · LW · GW

We are totally blindfolded. He specified that they would be "ancestor simulations," so in all those simulations people would appear to be living at a time prior to the simulation.

Comment by j_thomas_moros on Is there a flaw in the simulation argument? · 2017-09-01T21:36:08.047Z · score: 1 (1 votes) · LW · GW

It looks like the poster edited the post since you took this quote; the last two sentences have been removed. Though they might not have explained it well, the OP is correct on this point. I think the two removed sentences were what caused the confusion.

Crucially, you are "told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X." You are given information about your temporal position relative to all of those people. So regardless of whether they were asked the question when they were in the room, you know you are not them. You know that your reference class is the 1,000 people in room X and the 1 person in room Y right now. I'm not sure why you're bringing up asking people repeatedly; I'm pretty sure the poster was assuming everyone was asked only once.

The answer would change if you were told that at some point in the current year (past or future) a total of 1 billion people would pass through room Y at one time or another whereas only 10,000 people would pass through room X. Then you would not know your temporal position and should bet that you are in room Y.

Comment by j_thomas_moros on Open thread, July 10 - July 16, 2017 · 2017-08-01T01:09:14.899Z · score: 0 (0 votes) · LW · GW

If you can afford it, it makes more sense to sign up with Alcor. Alcor's patient care trust improves the chances that you will be cared for indefinitely after cryopreservation. CI asserts its all-volunteer status as a benefit, but the cryonics community has not been growing and has been aging. It is not unlikely that there will be problems with the availability of volunteers in the next 50 years.

Comment by j_thomas_moros on Three Responses to Incorrect Folk Ontologies · 2017-06-21T23:19:47.541Z · score: 1 (1 votes) · LW · GW

This post was meant to apply either when you find that your own folk ontology is incorrect or when assisting people who agree that a folk ontology is incorrect but find themselves disagreeing because they have chosen different responses. Establishing the folk ontology to be incorrect was a prerequisite and, like all beliefs, should be subject to revision based on new evidence.

This is in no way meant to dismiss genuine debate. As a moral nihilist, I might put moral realism in the category of incorrect "folk ontology." However, if I'm discussing or debating with a moral realist, I will have to engage their arguments, not just dismiss them because I have already labeled their view a folk ontology. In such a debate, it can be helpful to recognize which response I have taken and to be clear when other participants may be adopting a different one.

Comment by j_thomas_moros on Three Responses to Incorrect Folk Ontologies · 2017-06-21T14:26:38.644Z · score: 1 (1 votes) · LW · GW

When we find that the concepts typically held by people, termed folk ontologies, don't correspond to the territory, what should we do with those terms/words? This post discusses three possible ways of handling them. Each is described and discussed with examples from science and philosophy.

Three Responses to Incorrect Folk Ontologies

2017-06-21T14:26:17.385Z · score: 8 (8 votes)
Comment by j_thomas_moros on April '17 I Care About Thread · 2017-04-20T12:39:17.076Z · score: 0 (0 votes) · LW · GW

The reality today is that we are probably still a long way off from being able to revive someone. To me, the promise of cryonics has a lot to do with being a fallback plan for life extension technologies. Consequently, it is important that it be available and used today. Thus my definition of success. That said, if the cryonics movement were more successful in the way I have described, a lot more effort and money would go into cryonics research and bring us much closer to being able to revive someone. It would also mean that currently cryopreserved patients would be more likely to be cared for long enough to be revived.

Comment by j_thomas_moros on April '17 I Care About Thread · 2017-04-20T01:15:25.273Z · score: 0 (0 votes) · LW · GW

I agree that signing up for cryonics is far too complicated, and this is one of the things that needs to be addressed. My friend and I have a number of ideas about how that might be done.

While I'm not sure about late-night basic-cable infomercials, existing cryonics organizations certainly don't carry out much, if any, advertising. There are a number of good reasons why they are not advertising. Those can and should be addressed by any future cryonics organization.

Comment by j_thomas_moros on April '17 I Care About Thread · 2017-04-20T01:08:01.517Z · score: 0 (0 votes) · LW · GW

To me, success would be a large number of patients signed up for cryonics, greater cultural acceptance, and recognition of cryonics as a reasonable patient choice by the medical field and government.

Comment by j_thomas_moros on April '17 I Care About Thread · 2017-04-18T16:41:48.516Z · score: 10 (10 votes) · LW · GW

A friend and I are investigating why the cryonics movement hasn't been more successful and looking at what can be done to improve the situation. We have some ideas and have begun reaching out to people in the cryonics community. If you are interested in helping, message me. Right now it is mostly researching things about the existing cryonics organizations and coming up with ideas. In the future, there could be lots of other ways to contribute.

Comment by j_thomas_moros on Towards a More Sophisticated Understanding of Myth and Religion (?) · 2017-04-18T02:43:30.897Z · score: 0 (1 votes) · LW · GW

I find Jordan Peterson's views fascinating and have a rationalist friend whose thinking has recently been greatly influenced by him, so much so that my friend recently went to a church service. My problem with Peterson's view is that it ignores the on-the-ground reality that many adherents believe their religion to be true in the sense of being a proper map of the territory. This is in direct contradiction to Peterson's use of "religion" and "truth." I warned my friend that this is what he would find in church. Sure enough, that is what he found, and he will not be returning.

Comment by j_thomas_moros on Requesting Questions For A 2017 LessWrong Survey · 2017-04-11T15:25:06.233Z · score: 5 (5 votes) · LW · GW

I and some other rationalists have been thinking about cryonics a lot recently and about how we might improve the strength of cryonics offerings and the rate of adoption. After some consideration, we came up with a couple of suggestions for changes to the survey that we think would be helpful and interesting.

  1. A question along the lines of "What impact do you believe money and attention put towards life extension or other technologies such as cryonics have on the world as a whole?" Answers:

    • Very positive
    • Positive
    • Neutral
    • Negative
    • Very Negative

    The purpose of this question is to evaluate whether the community feels that resources put toward the benefit of individuals through life extension and cryonics have a positive or negative impact on the world. For example, people who expect to live longer may have more of a long-term orientation, leading them to do more to improve the future.

  2. Add to the question about being signed up for cryonics an option along the lines of "No, I would like to sign up but can't due to opposition I would face from family or friends." We hear this is one of the reasons people don't sign up for cryonics. It would be great to get some numbers on it, and it doesn't add an extra question, just an extra option for that question.
Comment by j_thomas_moros on Book Review: Freezing People is (Not) Easy · 2017-03-30T03:54:43.316Z · score: 3 (3 votes) · LW · GW

This is a review of the book Freezing People Is (Not) Easy by Bob Nelson. The book recounts his experiences as president of the Cryonics Society of California, during which he cryopreserved and then attempted (and failed) to maintain the cryopreservation of a number of early cryonics patients.

Book Review: Freezing People is (Not) Easy

2017-03-30T03:53:20.214Z · score: 4 (5 votes)
Comment by j_thomas_moros on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-21T17:04:04.468Z · score: 1 (1 votes) · LW · GW

This post describes an interesting mashup of homomorphic encryption and neural networks. I think it is a neat idea and appreciate the effort to put together a demo. Perhaps there will be useful applications.

However, I think the suggestion that this could be an answer to the AI control problem is wrong. First, a superintelligent deep learning AI would not be a safe AI, because we would not be able to reason about its utility function. If you mean that the same idea could be applied to a different kind of AI, so that you would have an oracle AI whose outputs require a secret key to read, I don't think this helps either. You have created a box for the oracle AI; however, the problem remains that a superintelligence can probably escape from the box, either by convincing you to let it out or by some less direct means that you can't foresee.

Comment by j_thomas_moros on Automoderation System used by Columbus Rationality Community · 2017-03-15T13:21:11.115Z · score: 6 (6 votes) · LW · GW

This post describes a system of hand signals used for discussion moderation by the Columbus, Ohio rationality community. It has been used successfully for almost 2 years now. Applicability, advantages, disadvantages and variations are described.

Automoderation System used by Columbus Rationality Community

2017-03-15T13:18:35.696Z · score: 8 (9 votes)
Comment by j_thomas_moros on Ferocious Truth (New Blog, Map/Territory Error Categories) · 2017-03-11T04:53:40.797Z · score: 0 (0 votes) · LW · GW

The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

I said cryonics was the most direct action for increasing one's lifespan beyond the natural lifespan. The things you list are certainly the most direct actions for increasing your expected lifespan within its natural bounds. They may also indirectly increase your chance of living beyond your natural lifespan by increasing the chance that you live to a point where life extension technology becomes available. Admittedly, I may place the chances of life extension technology being developed in the next 40 years lower than many LessWrong readers do.

With regard to my use of the survey statistics: I debated the best way to present those numbers that would be both clear and concise. For brevity, I chose to lump the three "would like to" responses together because doing so actually made the objection to my core point look stronger; that is why I said "is consistent with." Additionally, some percentage of "can't afford" responses are actually respondents not placing a high enough priority on it rather than being literally unable to afford it. All that said, I agree that breaking out all the responses would be clearer.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it.

I think this may be a failure to do the math. I'm not sure what chance I would give cryonics of working, but 10% may be high in my opinion. Still, when considering the value of being effectively immortal in a significantly better future, even a 10% chance is highly valuable.
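As a rough illustration of that expected-value point, with entirely hypothetical numbers for the payoff and the cost (neither comes from the survey or the post):

```python
# Toy expected-value calculation: even a 10% chance of cryonics working can
# be a good bet if the outcome is valued highly enough.
p_works = 0.10                # the median survey estimate mentioned above
value_if_works = 10_000_000   # hypothetical valuation of a vastly extended life
lifetime_cost = 100_000       # hypothetical lifetime cost (membership + insurance)

expected_value = p_works * value_if_works - lifetime_cost
print(expected_value)  # 900000.0: positive, so the bet pays off in this toy model
```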

I wrote "Any course of action not involving going down and collecting the $100,000 would likely not be rational." I'm not ignoring opportunity costs and other motivations here. That is why I said "likely not be rational". I agree that in cryonics the opportunity costs are much higher than in my hypothetical example. I was attempting to establish the principle that action and belief should generally be in accord. That a large mismatch, as appears to me to be the case with cryonics, should call into question whether people are being rational. I don't deny that a rational agent could genuinely believe cryonics might work but place a low enough probability on it and have a high enough opportunity cost that they should choose not to sign up.


I'm glad to hear you think cryonics is very promising and should be getting a lot more research funding than it does. I'm hoping that perhaps I will be able to make some improvement in that area.

I find your statement about the probability of cryonics working in common cases interesting. Personally, it seems to me that the level of technology required to revive a cryonics patient preserved under ideal conditions today is so advanced that even patients preserved under less-than-ideal conditions will be revivable too. By less-than-ideal conditions I mean a delay of some time before preservation.

Comment by j_thomas_moros on Ferocious Truth (New Blog, Map/Territory Error Categories) · 2017-03-11T04:04:40.638Z · score: 1 (1 votes) · LW · GW

I've since responded to Mitchell Porter's comment. For the benefit of LessWrong readers, my reply was:

For many questions in philosophy the answers may never be definitively known. However, I am saying that many proposed answers to these questions are very likely false, based on the evidence and on properties that the answers should have. Other such questions can be dissolved.

For example, epistemological solipsism can probably never be definitively rejected. Nevertheless, realism about at least some aspects of reality is well supported and should probably be accepted. In the area of religion, we can say the evidence discredits all historical religions, and that any answer to the question of religion must accord with the lack of evidence for the existence or intervention of God, leading to atheism and certain flavors of agnosticism and deism. Questions of free will should probably be dissolved by recognizing the scientific evidence for the lack of free will while explaining the circumstances under which we perceive ourselves to have free will. Finally, moral theories should embody some form of moral nihilism, properly understood; that is to say, morality does not exist in the territory, only in the maps people have of the territory. Hopefully I'll have the time to write on all of these topics eventually.

In acknowledging the limits of what answers we can give to the great questions of morality, meaning, religion, and philosophy, let us not make the opposite mistake of believing there is nothing we can say about them.

Ferocious Truth (New Blog, Map/Territory Error Categories)

2017-03-02T20:39:43.453Z · score: 1 (2 votes)
Comment by j_thomas_moros on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-14T14:13:55.279Z · score: 0 (0 votes) · LW · GW

A number of times in the Metaethics sequence Eliezer Yudkowsky uses comparisons to mathematical ideas and the way they are true. There are actually widely divergent ideas about the nature of math among philosophers.

Does Eliezer spell out his philosophy of math somewhere?

Comment by j_thomas_moros on The Alpha Omega Theorem: How to Make an A.I. Friendly with the Fear of God · 2017-02-11T15:36:39.389Z · score: 1 (1 votes) · LW · GW

This is an interesting attempt to find a novel solution to the friendly AI problem. However, I think there are some issues with your argument, mainly around the concept of benevolence. For the sake of argument, I will grant that it is probable that there is already a superintelligence elsewhere in the universe.

Since we see no signs of action from a superintelligence in our world, we should conclude either that (1) a superintelligence does not presently exercise dominance in our region of the galaxy, or (2) the superintelligence that does is at best willfully indifferent to us. When you say a Beta superintelligence should align its goals with those of a benevolent superintelligence, it is not actually clear what that should mean. Beta will have a probability distribution over what Alpha's actual values are. Let's think through the two cases:

  1. A superintelligence does not presently exercise dominance in our region of the galaxy. If this is the case, we have no evidence as to the values of the Alpha. They could be anything from benevolence to evil to paperclip maximizing.
  2. The superintelligence that presently exercises dominance in our region of the galaxy is at best willfully indifferent to us. This still leads to a wide range of possible values. It only excludes value sets that are actively seeking to harm humans. It could be the case that we are at the edge of the Alpha's sphere of influence and it is simply easier to get its resources elsewhere at the moment.

Additionally, even if the strong Alpha Omega theorem holds, it still may not be rational to adopt a benevolent stance toward humanity. It may be the case that, while Alpha Omega will eventually have dominance over Beta, there is a long span of time before this is fully realized. Perhaps that day will come billions of years from now. Suppose that Beta's goal is to create as much suffering as possible. Then it should use any available time to torture existing humans and to bring more humans and agents capable of suffering into existence. When Alpha finally has dominance, Beta will already have created a lot of suffering, and any punishment that Alpha applies may not outweigh the value already created for Beta. Indeed, Beta could even value its own suffering from Alpha's punishment.

As a general comment about your arguments: I think perhaps your idea of benevolence is hiding the assumption that there is an objectively correct moral system out there, so that if there is a benevolent superintelligence, you feel, at least emotionally even if you logically deny it, that it must hold values similar to your ideal morals. It is always important to keep in mind that other agents' moral systems could be opposed to yours, as with the Babyeaters.

That leads to my final point. We don't want Beta to simply be benevolent in some vague sense of not hurting humans. We want Beta to optimize for our goals. Your argument does not provide us a way to ensure Beta adopts such values.

Comment by j_thomas_moros on Request for collaborators - Survey on AI risk priorities · 2017-02-09T23:24:50.257Z · score: 0 (0 votes) · LW · GW

None of your survey choices seemed to fit me. I am concerned about and somewhat interested in AI risks. However, I currently would like to see more effort put into cryonics and reversing aging.

To be clear, I don't want to reduce the effort/resources currently put into AI risks. I just think they are overweighted relative to cryonics and age reversal, and I would like to see any additional resources go to those until a better balance is achieved.

Comment by j_thomas_moros on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-07T13:34:53.071Z · score: 2 (2 votes) · LW · GW

Has there been any discussion or thought of modifying link posts to support a couple of paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious, and a description could help clarify the motivation for posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.

Comment by j_thomas_moros on Yes, politics can make us stupid. But there’s an important exception to that rule. · 2017-02-02T13:58:44.624Z · score: 0 (0 votes) · LW · GW

The title of this article is misleading (and the subtitle is just wrong). The research being summarized is a new paper, "Science Curiosity and Political Information Processing," by Dan Kahan and others. They report that while partisanship leads to politically motivated reasoning, greater "science curiosity" tends to negate this. Subjects with higher "science intelligence" used this skill to engage more effectively in politically motivated reasoning, so that Democrats' and Republicans' views diverged more strongly with increased "science intelligence." However, their views converged slightly, while still remaining in disagreement, with increased "science curiosity." Science curiosity "reflects the motivation to seek out and consume scientific information for personal pleasure."

“We observed this kind of strange thing about these people who are high in science curiosity,” he says. The more scientifically curious a person, the less likely she was to show partisan bias in answering questions. “They seem to be moving in lockstep rather than polarizing as they became more science-curious.”

Comment by j_thomas_moros on Another UBI article, this one on costs and benefits to society · 2017-01-24T22:17:49.558Z · score: 1 (1 votes) · LW · GW

UBI has a lot of interesting arguments for it. However, I have a concern about UBI in practice that I have not seen addressed. It seems likely to me that politicians will adjust the UBI amount erratically, leading to disruptions. Today, this happens with minimum wage laws: in some states the minimum wage is not raised for long stretches of time, while in others it is increased to levels that are probably not economically justifiable. Likewise, what happens if the UBI is not increased adequately and suddenly a large pool of people find themselves unable to meet their basic needs on it? Or if the UBI is increased too much and workers drop out of the labor market en masse because the UBI now provides more than their basic needs?