Posts

Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence 2018-01-18T17:18:20.151Z
How Popper killed Particle Physics 2017-11-07T16:55:21.727Z
Three Responses to Incorrect Folk Ontologies 2017-06-21T14:26:17.385Z
Book Review: Freezing People is (Not) Easy 2017-03-30T03:53:20.214Z
Automoderation System used by Columbus Rationality Community 2017-03-15T13:18:35.696Z
Ferocious Truth (New Blog, Map/Territory Error Categories) 2017-03-02T20:39:43.453Z

Comments

Comment by J Thomas Moros (J_Thomas_Moros) on How to choose what to work on · 2024-09-19T01:50:53.591Z · LW · GW

Thanks for the summary of various models of how to figure out what to work on. While reading it, I couldn't help but focus on my frustration with the "getting paid for it" part. Personally, I want to create a new programming language. I think we are still in the dark ages of computer programming and that programming languages suck. I can't make a perfect language, but I can take a solid step in the right direction. The world could sure use a better programming language if you ask me. I'm passionate about this project. I'm a skilled software developer with a longer career than most of the young guns I see, and I think my work so far proves that I am a top-tier language designer capable of writing a compiler and standard library. But... this is almost the definition of something you can't and won't be paid for, at least not until you've already published a successful language. That fact greatly contributes to why we can't have better programming languages. No one can afford to let them incubate as long as needed; because of limited resources, everyone has to push to release as fast as possible. Unlike other software, languages have very strict backward-compatibility requirements, so improving them after release is a challenge. Real issues inevitably accumulate as a language grows over time, yet its designers can never fix previous mistakes or make the design changes needed to support new features.

Comment by J Thomas Moros (J_Thomas_Moros) on NYU Code Debates Update/Postmortem · 2024-05-25T05:34:59.588Z · LW · GW

I'm confused by the judges' failure to use the search capabilities. I think we need more information about how the judges were selected. It isn't clear to me that they are representative of the kinds of people we would expect to act as judges in future scenarios of superintelligent AI debates. For example, a simple and obvious tactic would be to ask both AIs what one ought to search for in order to verify their arguments. An AI that can make very compelling arguments still can't change the true facts known to humanity to suit its needs.

Comment by J Thomas Moros (J_Thomas_Moros) on What do you think is wrong with rationalist culture? · 2023-03-10T17:17:22.111Z · LW · GW

This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.

Comment by J Thomas Moros (J_Thomas_Moros) on Progress, humanism, agency: An intellectual core for the progress movement · 2022-01-13T03:07:10.687Z · LW · GW

As someone who believes in moral error theory, I have problems with the moral language ("responsibility to lead ethical lives of personal fulfillment", "Ethical values are derived from human need and interest as tested by experience.").

I don't think that "Life’s fulfillment emerges from individual participation in the service of humane ideals" or "Working to benefit society maximizes individual happiness." Rather I would say some people find some fulfillment in those things.

I am vehemently opposed to the deathist language of "finding wonder and awe in the joys and beauties of human existence, its challenges and tragedies, and even in the inevitability and finality of death." Death is bad and should not be accepted.

I assume there are other things I would disagree with, but those are a few that stand out when skimming it.

Comment by J Thomas Moros (J_Thomas_Moros) on Progress, humanism, agency: An intellectual core for the progress movement · 2022-01-12T04:45:05.537Z · LW · GW

I agree with your three premises. However, I would recommend using a different term than "humanism".

Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as "humanism" but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of "human flourishing as the standard of value"?

Comment by J Thomas Moros (J_Thomas_Moros) on Has anyone had weird experiences with Alcor? · 2022-01-12T04:31:25.754Z · LW · GW

I am signed up for cryonics with Alcor and did so in 2017. I checked and the two options you listed are consistent with the options I was given. I didn't have a problem with them, but I can understand your concern.

I have had a number of interactions with Alcor staff both during the signup process and since. I always found them pleasant and helpful. I'm sorry to hear that you are having a bad experience. My suggestion would be to get the representative on the phone and discuss your concerns. Obviously, final wording should be handled in writing but I think a phone conversation would help you both understand what would be acceptable to both of you.

In my opinion, the responses you have gotten probably arise from one of two sources. It is possible that she simply didn't read what you wrote carefully enough and fell back to boilerplate language closer to what their legal counsel has approved; she likely doesn't have the authority to accept major changes herself. If that is not what happened, then Alcor is most likely pushing that option to avoid legal issues, the problems they have had with families in the past, and delays in cryopreservation. They want a clear-cut decision procedure that doesn't depend on too many third parties. If cryopreservation is to go well, it needs to be done in a timely fashion. Ideally, you want whoever is performing it to have a clear and immediate path to begin if it is warranted. Any judgment call or requirement to get consent could cause unnecessary delays. You might think it will be clear, but any chance your wife could claim she should have been consulted and wasn't could cause legal problems. Thus, Alcor may be forced to consult her in all but the most clear-cut cases. Again, just schedule a call.

As a proponent of cryonics, I hope you will persist and work through this issue. Please message me if there are other questions I can answer for you. If you choose not to proceed, you can choose to keep the insurance policy and designate another recipient rather than canceling it.

P.S. Having researched all the cryonics organizations, I find Alcor by far the best. They are still small, but they are working the hardest to become a fully professional organization, and their handling of legal issues and their financial structure are much better. The Cryonics Institute (CI) is run by well-meaning people who are less professional; they are more of a volunteer organization. Having attended a CI annual meeting, I was disappointed that their investment strategy was insufficiently conservative and far-sighted. I think CI may actually be underfunded for the goal of still existing 100 years from now.

Comment by J Thomas Moros (J_Thomas_Moros) on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2021-10-04T05:54:25.071Z · LW · GW

While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.

Comment by J Thomas Moros (J_Thomas_Moros) on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2021-10-04T05:47:33.371Z · LW · GW

As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress, and I would like to understand the reasons for it better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading a worm. A few sentences hint that both are unsolved, but the two should be clearly separated.

Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. Note that it isn't accurate to say the worm must be alive: it would be sufficient to freeze an individual worm and then spend extensive time and effort reading that information. Nevertheless, I can imagine that might be very difficult to do. According to wormbook.org, C. elegans has on the order of 7,000 synapses. I am not sure we know how to read the weight and threshold of a synapse. This strikes me as a task requiring significant technological development that isn't in line with existing research programs. That is, most research is not attempting to develop the technology to read specific weights and thresholds, so it would require a significant, well-funded effort focused specifically on it. Given reports of a lack of funding, I am not surprised this has not been achieved; nor am I surprised that the funding is lacking.

Simulating a worm should only require an accurate model of the behavior of the worm nervous system and a simulation environment. Given that all C. elegans have the same 302 neurons, this seems like it should be feasible. Furthermore, the learning mechanism of individual neurons, the operation of synapses, etc. should all be things researchers outside of the worm emulation efforts are interested in studying. If I wanted to advance the state of the art, I would focus on making an accurate simulation of a generic worm that was capable of learning, then simulate it in an environment similar to its native one and try to demonstrate that it eventually learned behavior matching real C. elegans, including under the conditions in which real C. elegans learn. That is why I was very disappointed to learn that the "simulations are far from realistic because they are not capable of learning." It seems to me this is where the research effort should focus, and I would like to hear more about why this is challenging and hasn't already been done.

I believe that worm uploading is not needed to make significant steps toward showing the feasibility of WBE. The kind of worm simulation I describe would be more than sufficient. At that point, reading the weights and thresholds of an individual worm becomes only an engineering problem that should be solvable given a sufficient investment or level of technological advancement.

Comment by J Thomas Moros (J_Thomas_Moros) on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2021-10-04T05:12:10.548Z · LW · GW

A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same test could distinguish a successfully uploaded worm with that memory from one without it.

Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans

Comment by J Thomas Moros (J_Thomas_Moros) on Cryptographic Boxes for Unfriendly AI · 2021-08-27T15:09:19.714Z · LW · GW

I think you are being overly optimistic about homomorphic encryption. The uFAI doesn't need absolute control over how the computation happens, nor does it need to perfectly predict the real-world results of running some computation. It only needs some amount of information leakage. The best current example I can think of is timing attacks on cryptographic protocols: the protocol itself may be secure, but a side channel makes the implementation insecure. Another example would be the Meltdown and Spectre vulnerabilities. How do you know your computational system doesn't have something like that?
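The timing-attack point can be made concrete with a toy sketch (the function names and the secret value here are my own illustration, not anything from the post): an early-exit equality check leaks information through its runtime even though the comparison result itself reveals nothing.

```python
import hmac

SECRET = b"hunter2-token"  # hypothetical secret an attacker is probing

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime grows with the matching prefix length."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # bails at the first mismatch -- a timing side channel
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison closes this particular side channel."""
    return hmac.compare_digest(a, b)

print(naive_equal(SECRET, b"hunter2-token"))  # True
print(safe_equal(SECRET, b"hunter2-guess"))   # False
```

Both functions compute the same answer; only the naive one leaks how it got there, which is the general shape of the worry about a boxed computation.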

The first idea that comes to mind is pausing. Assume the uFAI has some way to pause computation (pausing is a very common operation, but an equivalent might be achieved by performing a specific computation in a tight loop). Then the uFAI might be able to signal in Morse code by alternating pauses with long and short stretches of computation. That exact scheme is probably naive, but do you really believe the uFAI won't find any such vulnerability in your system?

Comment by J Thomas Moros (J_Thomas_Moros) on Six-Door Cars · 2021-03-21T23:48:11.645Z · LW · GW

I doubt the lack of six-door cars has much to do with aesthetics. Doors and tight door seals are among the more complex and expensive portions of the car body. Doors also pose challenges for crash safety, since each one is a large opening that weakens the main body's structural integrity in an accident. I suspect the real reason there are so few cars with six doors is the extra manufacturing cost, which would lead to higher prices. Most purchasers don't value the extra convenience of the additional doors enough relative to the added price, so any company producing such a car would find a very small market, which might make it not worth it to the manufacturer.

Comment by J Thomas Moros (J_Thomas_Moros) on Covid 2/25: Holding Pattern · 2021-02-26T04:53:24.528Z · LW · GW

Recently many sources have reported a "CA variant" with many of the same properties as the English and South African strains. I haven't personally investigated, but that might be something to look into. Especially given the number of rationalists in CA.

Comment by J Thomas Moros (J_Thomas_Moros) on How can I protect my bank account from large, surprise withdrawals? · 2021-02-24T03:28:23.084Z · LW · GW

As others have already answered better than I could: first, avoid becoming obligated for such large unexpected charges at all. The customer in the example may have canceled their credit card, but they are still legally obligated to pay that money.

To answer the actual question of how to put limits in place: you can use privacy.com. It lets you create new credit card numbers that bill to your bank account but can have limits both on total charges and on monthly charges. You can also close any number at any time without impact on your personal finances. It is meant for safety and privacy in online shopping: you set up a card for each service. For example, create a card that you auto-bill the electric bill to, and set a limit that no more than, say, $200 can be charged to it each month. Any transaction that would push it over that limit will be declined, even automatic payments you have scheduled.

Comment by J Thomas Moros (J_Thomas_Moros) on Covid 2/11: As Expected · 2021-02-15T04:28:37.292Z · LW · GW

I'd be interested in seeing a write-up on whether people who've had COVID need to be vaccinated. I have a friend who was sick with COVID symptoms for 3 weeks and tested positive for SARS-CoV-2 shortly after the onset of symptoms. He is now being told by medical professionals that he needs to be vaccinated just the same as everyone else. I tried to look up the data on this. Sources like the CDC, Cleveland Clinic, and Mayo Clinic all state that people need to be vaccinated even if they have had COVID. However, their messaging seems contradictory. There are many appeals to "we don't know". The reasoning doesn't appear to be any more complex than "vaccine good" and "immunity from infection 'not known'". There is no discussion of things I would expect, like the difference between having tested positive with symptoms, having had symptoms but never been tested, or having tested positive without ever developing symptoms. While I can imagine reasons why immunity induced by the vaccine and by infection would differ, my prior is that most of the effects are going to be the same. There is repeated reference to not knowing how long immunity from infection lasts, but by definition, we have had less time to see how long immunity from the vaccine lasts, so our evidence about the vaccine should be weaker. I could say a lot more, but I'll leave it at that.

To avoid any confusion: My actual model is that if you've had COVID-19, then the vaccine would act as a booster. So I'd say people who've had it should get vaccinated eventually but should be among the lowest priority. That should be modulated by the probability that you actually had COVID and the fact that asymptomatic COVID may be less likely to confer immunity. On the other hand, having had asymptomatic COVID is probably evidence that you will be asymptomatic if you get it again. That is not the message being given to the public.

Comment by J Thomas Moros (J_Thomas_Moros) on History of the Public Suffix List · 2021-02-07T23:08:58.897Z · LW · GW

It's unfortunate that we have this mess, but couldn't it have been avoided by defaulting to minimal access? Per Mozilla (https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies), if a cookie's Domain attribute isn't set, the cookie defaults to the domain of the site, excluding subdomains. If cookies always defaulted to the site's full domain like this, wouldn't that resolve the issue? The harm isn't in allowing people to create cookies that span sites, but in doing so accidentally, correct? The only remaining concern is tracking cookies. For those, a list of TLDs that would be invalid to specify as the Domain would cover most cases. Situations like github.io are rare enough that there could simply be some additional DNS property such sites set which makes it invalid to set a cookie at that domain level.
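A minimal sketch of the default-plus-blocklist rule described above (the function name and the tiny suffix set are my own illustration; the real Public Suffix List is far larger):

```python
from typing import Optional

# Illustrative subset only; the real Public Suffix List has thousands of entries.
PUBLIC_SUFFIXES = {"com", "co.uk", "github.io"}

def cookie_domain(host: str, domain_attr: Optional[str]) -> Optional[str]:
    """Effective cookie domain under the proposed rule, or None if rejected."""
    if domain_attr is None:
        # Minimal-access default: the cookie is scoped to the full host,
        # with no subdomain sharing.
        return host
    domain = domain_attr.lstrip(".")
    if domain in PUBLIC_SUFFIXES:
        # Setting a cookie directly on a public suffix is invalid.
        return None
    # Otherwise the Domain attribute must be a suffix of the requesting host.
    if host == domain or host.endswith("." + domain):
        return domain
    return None

print(cookie_domain("alice.github.io", None))           # alice.github.io
print(cookie_domain("alice.github.io", "github.io"))    # None (rejected)
print(cookie_domain("www.example.com", "example.com"))  # example.com
```

The point of the sketch is that accidental cross-site cookies disappear under the restrictive default, and only deliberate Domain attributes need checking against the suffix list.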

Similarly, the secure and http-only properties ought to default to true.

Comment by J Thomas Moros (J_Thomas_Moros) on Grokking illusionism · 2021-01-09T17:29:18.032Z · LW · GW

Even after reading your post, I don't think I'm any closer to comprehending the illusionist view of reality. One of my good and most respected friends is an illusionist. I'd really like to understand his model of consciousness.

Illusionists often seem to be arguing against strawmen to me (notwithstanding the fact that some philosophers actually do argue for such "strawman" positions). Dennett's argument against "mental paint" seems to be an example of this. Of course, I don't think there is something in my mental space with the property of redness. Of course "according to the story your brain is telling, there is a stripe with a certain type of property." I accept that the most likely explanation is that everything about consciousness is the result of computational processes (in the broadest sense that the brain is some kind of neural net doing computation, not in the sense that it is anything actually like the Von Neumann architecture computer that I am using to write this comment). For me, that in no way removes the hard problem of consciousness; it only sharpens it.

Let me attempt to explain why I am unable to understand what the strong illusionist position is even saying. Right now, I'm looking at the blue sky outside my window. As I fix my eyes on a specific point in the sky and focus my attention on the color, I have an experience of "blueness." The sky itself doesn't have the property of phenomenological blueness. It has properties that cause certain wavelengths of light to scatter and other wavelengths to pass through. Certain wavelengths of light are reaching my eyes. That is causing receptors in my eyes to activate which in turn causes a cascade of neurons to fire across my brain. My brain is doing computation which I have no mental access to and computing that I am currently seeing blue. There is nothing in my brain that has the property of "blue". The closest thing is something analogous to how a certain pattern of bits in a computer has the "property" of being ASCII for "A". Yet I experience that computation as the qualia of "blueness." How can that be? How can any computation of any kind create, or lead to qualia of any kind? You can say that it is just a story my brain is telling me that "I am seeing blue." I must not understand what is being claimed, because I agree with it and yet it doesn't remove the problem at all. Why does that story have any phenomenology to it? I can make no sense of the claim that it is an illusion. If the claim is just that there is nothing involved but computation, I agree. But the claim seems to be that there are no qualia, there is no phenomenology. That my belief in them is like an optical illusion or misremembering something. I may be very confused about all the processes that lead to my experiencing the blue qualia. I may be mistaken about the content and nature of my phenomenological world. None of that in any way removes the fact that I have qualia.

Let me try to sharpen my point by comparing it to other mental computation. I just recalled my mother's name. I have no mental access to the computation that "looks up" my mother's name. Instead, I go from seemingly not having ready access to the name to having it. There is no qualia associated with this. If I "say the name in my head", I can produce an "echo" of the qualia. But I don't have to do this. I can simply know what her name is and know that I know it. That seems to be consistent with the model of me as a computation. That if I were a computation and retrieved some fact from memory, I wouldn't have direct access to the process by which it was retrieved from memory, but I would suddenly have the information in "cache." Why isn't all thought and experience like that? I can imagine an existence where I knew I was currently receiving input from my eyes that were looking at the sky and perceiving a shade which we call blue without there being any qualia. 

For me, the hard problem of consciousness is exactly the question, "How can a physical/computational process give rise to qualia or even the 'illusion' of qualia?" If you tell me that life is not a vital force but is instead very complex tiny machines which you cannot yet explain to me, I can accept that because, upon close examination, those are not different kinds of things. They are both material objects obeying physical laws. When we say qualia are instead complex computations that you cannot yet explain to me, I can't quite accept that because even on close examination, computation and qualia seem to be fundamentally different kinds of things and there seems to be an uncrossable chasm between them.

I sometimes worry that there are genuine differences in people's phenomenological experiences which are causing us to be unable to comprehend what others are talking about. Similar to how it was discovered that certain people don't actually have inner monologues or how some people think in words while others think only in pictures.

Comment by J Thomas Moros (J_Thomas_Moros) on South Bay Meetup, Saturday 1/25 · 2020-01-10T16:29:31.140Z · LW · GW

Do we need to RSVP in some way?

Comment by J Thomas Moros (J_Thomas_Moros) on To first order, moral realism and moral anti-realism are the same thing · 2019-06-21T14:39:29.394Z · LW · GW

I can parse your comment a couple of different ways, so I will discuss multiple interpretations but forgive me if I've misunderstood.

If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn't change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.

If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experiencing 3^^^3 dust specks), then my intuition considers that a different situation. A single individual experiencing that many dust specks seems to amount to torture. Indeed, it may be worse than 50 years of regular torture because it may consume many more years to experience them all. I don't think of that as "moral learning" because it doesn't alter my position on the former case.

If I have to try to explain what is going on here in a systematic framework, I'd say the following. Splitting up harm among multiple people can be better than applying it all to one person. For example, one person stubbing a toe on two different occasions is marginally worse than two people each stubbing one toe. Harms/moral offenses may separate into different classes such that no amount of a lower class can rise to match a higher class. For example, there may be no number of rodent murders that is morally worse than a single human murder. Duration of harm can outweigh intensity. For example, imagine mild electric shocks that are painful but don't cause injury, and suppose receiving one followed by another doesn't make the second any more physically painful. Some slightly more intense shocks over a short time may be better than many more mild shocks over a long time. This comes in when weighing 50 years of torture vs. 3^^^3 dust specks experienced by one person, though it is much harder to make that evaluation.

Those explanations feel a little like confabulations and rationalizations. However, they don't seem to be any more so than a total utilitarianism or average utilitarianism explanation for some moral intuitions. They do, however, give some intuition why a simple utilitarian approach may not be the "obviously correct" moral framework.

If I failed to address the "aggregation argument," please clarify what you are referring to.

Comment by J Thomas Moros (J_Thomas_Moros) on To first order, moral realism and moral anti-realism are the same thing · 2019-06-06T16:31:27.059Z · LW · GW

At least as applied to most people, I agree with your claim that "in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar." As a moral anti-realist myself, a likely explanation for this seems to be that they are engaging in the kind of moral reasoning that evolution wired into them. Both the realist and anti-realist are then offering post hoc explanations for their behavior.

With any broad claims about humans like this, there are bound to be exceptions, thus all the qualifications you put into your statement. I think I am one of those exceptions among moral anti-realists, though I don't believe that in any way invalidates your "Argument A." If you're interested in hearing about a different kind of moral anti-realist, read on.

I'm known in my friend circle for advocating that rationalists should completely eschew the use of moral language (except as necessary to communicate with or manipulate people who do use it). I often find it difficult to have discussions of morality with both moral realists and anti-realists; I don't often find that I "can continue to have conversations and debates that are not immediately pointless." People who claim to be moral anti-realists often engage in behavior and argument that seem antithetical to an anti-realist position. For example, anti-realists sometimes exhibit intense moral outrage and think it justified/proper (especially when they will never express that outrage to the offender, but only to disinterested third parties). If someone engages in a behavior that you would prefer they not, the question is how you can modify their behavior. You shouldn't get angry when others do what they want and it differs from what you want. Likewise, it doesn't make sense to get mad at others for not behaving according to your moral intuitions (except possibly in their presence, as a strategy for changing their behavior).

To a great extent, I have embraced the fact that my moral intuitions are an irrational set of preferences that don't have to and never will be made consistent. Why should I expect my moral intuitions to be any more consistent than my preferences for food or whom I find physically attractive? I won't claim I never engage in "moral learning," but it is significantly reduced and more often of the form of learning I had mistaken beliefs about the world than changing moral categories. When debating the torture vs. dust specks problem with friends, I came to the following answer: I prefer dust specks. Why? Because my moral intuitions are fundamentally irrational, but I predict I would be happier with the dust specks outcome. I fully recognize that this is inconsistent with my other intuition that harms are somehow additive and the clear math that any strictly increasing function for combining the harm from dust specks admits of a number of people receiving dust specks in their eyes that tallies to significantly more harm than the torture. (Though there are other functions for calculating total utility that can lead to the dust specks answer.)
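The additivity point alluded to above can be spelled out (the symbols here are my own illustration, and the unboundedness condition is my added caveat):

```latex
% Let each dust speck cause harm \epsilon > 0 and let harms combine additively.
% Against a fixed torture harm T:
H_{\text{specks}}(N) = N\epsilon > T = H_{\text{torture}}
\quad \text{whenever} \quad N > T/\epsilon .
% Since 3\uparrow\uparrow\uparrow 3 dwarfs T/\epsilon for any plausible
% \epsilon, additive aggregation forces the "torture" answer. The same holds
% for any strictly increasing combining function f(N) that is unbounded;
% a bounded f (or a lexical ordering of harm classes) can instead yield
% the "dust specks" answer.
```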

Comment by J Thomas Moros (J_Thomas_Moros) on Military AI as a Convergent Goal of Self-Improving AI · 2017-11-13T15:23:49.836Z · LW · GW

Not going to sign up with some random site. If you are the author, post a copy that doesn't require signup.

Comment by J Thomas Moros (J_Thomas_Moros) on How Popper killed Particle Physics · 2017-11-08T18:18:02.457Z · LW · GW

I think moving to frontpage might have broken it. I've put the link back on.

Comment by J Thomas Moros (J_Thomas_Moros) on Problems as dragons and papercuts · 2017-11-03T15:22:34.047Z · LW · GW

I'm not sure I agree. Sure, there are lots of problems of the "papercut" kind, but I feel like the problems that concern me the most are much more of the "dragon kind". For example:

  1. There are lots of jobs in my career field in my city, but there don't seem to be any that do even one of the following: truly quality work, work on what I believe is the technology the field is headed toward, or produce a product/service that I care about. I'm not saying I can't get those jobs; I'm saying that in 15+ years working in this city I've never even heard of one. I could move across the country, and that might solve the job problem, but leaving my family and friends is a "dragon".
  2. Meeting women I want to date seems to be a dragon problem. I have only ever met two women who meet my criteria.
  3. I have projects I'd like to accomplish that will take many thousands of hours each. Given the constraints of work, socializing, self-care, and trying to meet a girlfriend (see item 2), I'm looking at a really, really long time before any of these projects nears completion, even if I were able to dedicate a couple of hours a day to them, which I have not been.
Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:46:10.823Z

What is going on here? Copy me

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:45:56.313Z

Copy me

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:39:51.637Z

[Yes](http://hangouts.google.com)

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:29:28.628Z

*hello*

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:28:47.655Z

http://somewhere.com

Comment by J_Thomas_Moros on [deleted post] 2017-11-03T14:28:07.802Z

Can I write a linke here [Yes](http://hangouts.google.com)

Comment by J Thomas Moros (J_Thomas_Moros) on logic puzzles and loophole abuse · 2017-10-05T19:04:43.831Z · LW · GW

You should probably clarify that your solution assumes the variant where the god's head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn't have to be the case: the god could be suicidal but unable to die in any other way, and so, given the opportunity to have their head explode, would take it.

Additionally, I think it would be clearer if you offered a final, plain-English statement of the complete question that doesn't involve self-referential variables. The variable formulation is helpful for seeing the structure, but confusing in other ways.

Comment by J_Thomas_Moros on [deleted post] 2017-10-03T15:55:21.141Z

Oh, sorry

Comment by J_Thomas_Moros on [deleted post] 2017-10-03T15:45:51.890Z

A couple typos:

  1. The date you give is "(11/30)"; it should be "(10/30)".

  2. "smedium" should be "medium"

Comment by J Thomas Moros (J_Thomas_Moros) on Discussion: Linkposts vs Content Mirroring · 2017-10-03T14:42:13.115Z · LW · GW

I feel strongly that link posts are an important feature that needs to be kept. There will always be significant and interesting content created on non-rationalist or mainstream sites that we will want to link to and discuss on LessWrong. Additionally, while we might hope that all rationalist bloggers would be OK with cross-posting their content to LessWrong, there will likely always be those who don't want to, and yet we may still want to include their posts in the discussion here.

Comment by J_Thomas_Moros on [deleted post] 2017-09-24T15:44:34.202Z

A comment of mine

Comment by J Thomas Moros (J_Thomas_Moros) on Open thread, September 18 - September 24, 2017 · 2017-09-20T19:56:28.571Z · LW · GW

What you label "implicit utility function" sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.

I'm not familiar with the pig that wants to be eaten, but I'm not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires one's death, then there is no conflict; that is the optimal choice. I think humans who believe they have such a utility function are usually mistaken, but that is a much more involved discussion.

I'm not sure what the point of a dynamic utility function is. Your values really shouldn't change. I suspect you may be focused on instrumental goals, which can and should change, and are mistaking them for part of the utility function when they are not.

Comment by J Thomas Moros (J_Thomas_Moros) on LW 2.0 Strategic Overview · 2017-09-18T17:41:56.960Z · LW · GW

I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things simply because I am moderating more heavily. For example, on comments on my own post I care more and am more likely to both upvote and downvote, whereas at other times I might not care that much.

Comment by J Thomas Moros (J_Thomas_Moros) on 2017 LessWrong Survey · 2017-09-17T04:26:32.103Z · LW · GW

I have completed the survey and upvoted everyone else on this thread.

Comment by J Thomas Moros (J_Thomas_Moros) on Is there a flaw in the simulation argument? · 2017-09-01T22:59:40.754Z · LW · GW

There is a flaw in your argument. I'm going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.

Your conclusions about scenarios 1, 2 and 3 are correct.

You state that Bostrom's disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion, that "the principle of indifference leads us to believe that we are not in a simulation," which, as I'll argue below, is incorrect. Your disjunct should properly be stated as something like: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestor simulations, and we do run a large number of them, but in a way that keeps the number of simulated people well below the number of real people at any given moment. Stated that way, it is clear that Bostrom's (iii) is meant to include that outcome. Bostrom's argument is predicated only on the number of ancestor simulations, not on whether they are run in parallel or sequentially or over how much time. The reason Bostrom includes your (iv) in (iii) is that it doesn't change the logic of the argument. Let me now explain why.

For the sake of argument, let's split (iii) into two cases, (iii.a) and (iii.b). Let (iii.a) be all the futures in (iii) not covered by your (iv). For convenience, I'll refer to this as "parallel," even though there are cases in (iv) where some simulations could be run in parallel. Then (iii.b) is equivalent to your (iv). For convenience, I'll refer to this as "serial," even though, again, it might not be strictly serial. I think we agree that if the future were guaranteed to be (iii.a), then we should bet we are in a simulation.

First, even if you were right about (iii.b), I don't think it invalidates the argument. Essentially, you have just added another case similar to (ii), and it would still be the case that there are many more simulated people than real people because of (iii.a), so we should bet that we are in a simulation.

Second, if the future is actually (iii.b), we should still bet we are in a simulation just as much as with (iii.a). At several points, you appeal to the principle of indifference, but you are vague about how it should be applied. Let me give a framework for thinking about this. What is happening here is that we are reasoning under indexical uncertainty. In each of your three scenarios, and in the simulation argument, there is uncertainty about which observer we are. Your statement that by the principle of indifference we should conclude something is actually what the self-sampling assumption (SSA) says: we should reason as if we are a randomly chosen observer. In Bostrom's terms, you are uncertain which observer in your reference class you are. To make sure we are on the same page, let me go through your scenarios using this approach.

Scenario 1: You are not sure if you are in room X or room Y; the set of all people currently in rooms X and Y is your reference class. You reason as if you could be a randomly selected one of them, so you have a 1000-to-1 chance of being in room X.

Scenario 2: You are told about the many people who have been in room Y in the past. However, they are in your past. You have no uncertainty about your temporal index relative to them, so you do not add them to your reference class, and you reason the same as in scenario 1. Bostrom's book is weak here in that he doesn't give very good rules for selecting your reference class. I'm arguing that one of the criteria is that you have to be uncertain whether you could be that person. For example, you know you are not one of the many people not currently in room X or Y, so you don't include them in your reference class. Your reference class is the set of people you are unsure of your index relative to.

Scenario 3: This one is trickier to reason correctly about. I think you are wrong when you say that the only relevant information here is diachronic information. You know you are now in room Z, which contains 1 billion people who passed through room Y and 10,000 people who passed through room X. Your reference class is the people in room Z. You don't have to reason about the temporal information or the fact that at any given moment there was only one person in room Y but 1,000 people in room X. Passing through room X or Y is now just a property of the people in room Z. This is equivalent to my telling you that you are blindfolded in a room with 1 billion people wearing red hats and 10,000 people wearing blue hats; which hat color should you bet you are wearing? Reasoning with the people in room Z as your reference class, you correctly give yourself a 1-billion-to-10,000 chance of having passed through room Y.
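
The counting in these scenarios can be made concrete. Here is a minimal sketch of the indifference calculation, using the numbers from the scenarios above (the function and variable names are mine, for illustration):

```python
from fractions import Fraction

def indifference_probability(target_observers, total_observers):
    """Under the principle of indifference (SSA), reason as if you were
    randomly selected from your reference class: the probability of
    being a 'target' observer is just the fraction they make up."""
    return Fraction(target_observers, total_observers)

# Scenario 1: reference class is the 1,000 people currently in room X
# plus the 1 person currently in room Y.
p_room_x = indifference_probability(1000, 1000 + 1)

# Scenario 3: reference class is everyone in room Z -- 1 billion who
# came via room Y and 10,000 who came via room X; only those counts
# matter now, not the rate at which people passed through.
p_via_y = indifference_probability(10**9, 10**9 + 10**4)

print(p_room_x)        # 1000/1001
print(float(p_via_y))  # just under 1
```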

In (iii.b), you are uncertain whether you are in a simulation or in reality. But if you are in a simulation, you are also uncertain where you are chronologically relative to reality. Thus, if a pair of simulations were run in sequence, you would be unsure whether you were in the first or the second. You have both spatial and temporal uncertainty; you aren't sure what the proper "now" is. Your reference class includes everyone in the historical reality as well as everyone in all the simulations. Given that reference class, you should reason that you are in a simulation (assuming many simulations are run). It doesn't matter that those simulations are run serially, only that many of them are run. Your reference class isn't limited to the current simulation and the current reality, because you aren't sure where you are chronologically relative to reality.
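
The point that only the total count of simulations matters, not their scheduling, can be illustrated with a toy calculation (all numbers here are illustrative assumptions of mine, not figures from Bostrom):

```python
from fractions import Fraction

def p_simulated(n_simulations, people_per_simulation, people_in_reality):
    """Fraction of the full reference class -- everyone in historical
    reality plus everyone in every simulation, across all of time --
    who are simulated.  Nothing in this count depends on whether the
    simulations run in parallel or one after another."""
    simulated = n_simulations * people_per_simulation
    return Fraction(simulated, simulated + people_in_reality)

# 1,000 ancestor simulations, each containing as many observers as
# historical reality, run strictly serially: the odds are the same as
# if they had all run at once.
print(p_simulated(1000, 10**11, 10**11))  # 1000/1001
```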

With regard to SIA and SSA: I can't say they make any difference to your position, because the problem is that you have chosen the wrong reference class. In the original simulation argument, SIA versus SSA makes little or no difference because, presumably, the number of people living in historical reality is roughly equal to the number living in any given simulation. SIA only changes the conclusions when one outcome contains many more observers than the other. Here we treat each simulation as a different possible outcome, and so the two assumptions agree.

Comment by J Thomas Moros (J_Thomas_Moros) on Is there a flaw in the simulation argument? · 2017-09-01T21:38:25.777Z · LW · GW

We are totally blindfolded. He specified that they would be "ancestor simulations," so in all those simulations people would appear to be in a time prior to the simulation.

Comment by J Thomas Moros (J_Thomas_Moros) on Is there a flaw in the simulation argument? · 2017-09-01T21:36:08.047Z · LW · GW

It looks like the poster edited the post since you took this quote; the last two sentences have been removed. Though they might not have explained it well, the OP is correct on this point. I think the two removed sentences confused it, though.

Crucially, you are "told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X." You are given information about your temporal position relative to all of those people. So regardless of whether they were asked the question when they were in the room, you know you are not them. You know that your reference class is the 1,000 people in room X and the 1 person in room Y right now. I'm not sure why you're bringing up asking people repeatedly; I'm pretty sure the poster was assuming everyone was asked only once.

The answer would change if you were told that at some point in the current year (past or future) a total of 1 billion people would pass through room Y at one time or another whereas only 10,000 people would pass through room X. Then you would not know your temporal position and should bet that you are in room Y.

Comment by J Thomas Moros (J_Thomas_Moros) on Open thread, July 10 - July 16, 2017 · 2017-08-01T01:09:14.899Z · LW · GW

If you can afford it, it makes more sense to sign up with Alcor. Alcor's patient care trust improves the chances that you will be cared for indefinitely after cryopreservation. CI touts its all-volunteer status as a benefit, but the cryonics community has not been growing and has been aging. There could well be problems with the availability of volunteers in the next 50 years.

Comment by J Thomas Moros (J_Thomas_Moros) on Three Responses to Incorrect Folk Ontologies · 2017-06-21T23:19:47.541Z · LW · GW

This post was meant to apply when you find that your own folk ontology is incorrect, or to assist people who agree that a folk ontology is incorrect but find themselves disagreeing because they have chosen different responses. Establishing that the folk ontology is incorrect is a prerequisite and, like all beliefs, should be subject to revision based on new evidence.

This is in no way meant to dismiss genuine debate. As a moral nihilist, I might put moral realism in the category of incorrect "folk ontology." However, if I'm discussing or debating with a moral realist, I will have to engage their arguments, not just dismiss them because I have already labeled their view a folk ontology. In such a debate, it can be helpful to recognize which response I have taken and to be clear when other participants may be adopting a different one.

Comment by J Thomas Moros (J_Thomas_Moros) on Three Responses to Incorrect Folk Ontologies · 2017-06-21T14:26:38.644Z · LW · GW

When we find that the concepts typically held by people, termed folk ontologies, don't correspond to the territory, what should we do with those terms/words? This post discusses three possible ways of handling them. Each is described and discussed with examples from science and philosophy.

Comment by J Thomas Moros (J_Thomas_Moros) on April '17 I Care About Thread · 2017-04-20T12:39:17.076Z · LW · GW

The reality today is that we are probably still a long way off from being able to revive someone. To me, the promise of cryonics has a lot to do with being a fallback plan for life-extension technologies. Consequently, it is important that it be available and used today; thus my definition of success. That said, if the cryonics movement were more successful in the way I have described, a lot more effort and money would go into cryonics research, bringing us much closer to being able to revive someone. It would also mean that currently cryopreserved patients would be more likely to be cared for long enough to be revived.

Comment by J Thomas Moros (J_Thomas_Moros) on April '17 I Care About Thread · 2017-04-20T01:15:25.273Z · LW · GW

I agree that signing up for cryonics is far too complicated, and this is one of the things that needs to be addressed. My friend and I have a number of ideas about how that might be done.

While I'm not sure about late-night basic-cable infomercials, existing cryonics organizations certainly don't do much if any advertising. There are a number of good reasons they are not advertising; those can and should be addressed by any future cryonics organization.

Comment by J Thomas Moros (J_Thomas_Moros) on April '17 I Care About Thread · 2017-04-20T01:08:01.517Z · LW · GW

To me, success would be the number of patients signed up for cryonics, greater cultural acceptance, and recognition of cryonics as a reasonable patient choice by the medical field and government.

Comment by J Thomas Moros (J_Thomas_Moros) on April '17 I Care About Thread · 2017-04-18T16:41:48.516Z · LW · GW

A friend and I are investigating why the cryonics movement hasn't been more successful and looking at what can be done to improve the situation. We have some ideas and have begun reaching out to people in the cryonics community. If you are interested in helping, message me. Right now it is mostly researching things about the existing cryonics organizations and coming up with ideas. In the future, there could be lots of other ways to contribute.

Comment by J Thomas Moros (J_Thomas_Moros) on Towards a More Sophisticated Understanding of Myth and Religion (?) · 2017-04-18T02:43:30.897Z · LW · GW

I find Jordan Peterson's views fascinating and have a rationalist friend whose thinking has recently been greatly influenced by him, so much so that my friend recently went to a church service. My problem with Peterson's view is that it ignores the on-the-ground reality that many adherents believe their religion to be true in the sense of being a proper map of the territory. This is in direct contradiction to Peterson's use of "religion" and "truth." I warned my friend that this is what he would find in church. Sure enough, that is what he found, and he will not be returning.

Comment by J Thomas Moros (J_Thomas_Moros) on Requesting Questions For A 2017 LessWrong Survey · 2017-04-11T15:25:06.233Z · LW · GW

I and some other rationalists have been thinking a lot about cryonics recently, and about how we might improve the strength of cryonics offerings and the rate of adoption. After some consideration, we came up with a couple of suggestions for changes to the survey that we think would be helpful and interesting.

  1. A question along the lines of "What impact do you believe money and attention put towards life extension or other technologies such as cryonics has on the world as a whole?" Answers:

    • Very positive
    • Positive
    • Neutral
    • Negative
    • Very Negative

    The purpose of this question is to evaluate whether the community feels that resources put toward the benefit of individuals through life extension and cryonics have a positive or negative impact on the world. For example, people who expect to live longer may have more of a long-term orientation, leading them to do more to improve the future.

  2. Add to the question about being signed up for cryonics an option along the lines of "No, I would like to sign up but can't due to opposition I would face from family or friends". We hear this is one of the reasons people don't sign up for cryonics. It would be great to get some numbers on this, and it doesn't add an extra question, just an extra option for that question.
Comment by J Thomas Moros (J_Thomas_Moros) on Book Review: Freezing People is (Not) Easy · 2017-03-30T03:54:43.316Z · LW · GW

This is a review of the book Freezing People Is (Not) Easy by Bob Nelson. The book recounts his experiences as president of the Cryonics Society of California, during which he cryopreserved, and then attempted and failed to maintain the cryopreservation of, a number of early cryonics patients.

Comment by J Thomas Moros (J_Thomas_Moros) on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-21T17:04:04.468Z · LW · GW

This post describes an interesting mashup of homomorphic encryption and neural networks. I think it is a neat idea, and I appreciate the effort to put together a demo. Perhaps there will be useful applications.

However, I think the suggestion that this could be an answer to the AI control problem is wrong. First, a superintelligent deep-learning AI would not be a safe AI, because we would not be able to reason about its utility function. Second, if you mean that the same idea could be applied to a different kind of AI, so that you would have an oracle AI whose outputs require a secret key to read, I don't think that helps either. You have created a box for the oracle AI, but the problem remains that a superintelligence can probably escape from the box, either by convincing you to let it out or by some less direct means that you can't foresee.