Posts

Does consciousness persist? 2015-02-14T15:52:20.881Z
My Skepticism 2015-01-31T02:00:52.864Z
Immortality: A Practical Guide 2015-01-26T16:17:23.179Z

Comments

Comment by G0W51 on Open Thread Feb 16 - Feb 23, 2016 · 2016-03-06T16:25:24.038Z · LW · GW

Panexistential risk is a good, intuitive name.

Comment by G0W51 on Open Thread Feb 16 - Feb 23, 2016 · 2016-02-23T05:32:12.956Z · LW · GW

True. Also, the Great Filter is more akin to an existential catastrophe than to existential risk, that is, the risk of an existential catastrophe.

Comment by G0W51 on Open Thread Feb 16 - Feb 23, 2016 · 2016-02-20T19:25:54.049Z · LW · GW

Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life in the universe to begin with. Perhaps this would make physics experiments more concerning than other existential risks: even if their chance of causing the extinction of Earth-originating life is much smaller than that of other existential risks, their chance of eliminating all life in the universe may be higher.

Comment by G0W51 on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-20T19:18:03.331Z · LW · GW

That sounds about right.

Comment by G0W51 on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-15T19:58:38.908Z · LW · GW

It's later, but, unless I am mistaken, the arrival of the intelligence explosion isn't that much later than when most people will retire, so I don't think that fully explains it.

Comment by G0W51 on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-15T19:55:07.428Z · LW · GW

People could vote for government officials who have FAI research on their agenda, but currently, I think few if any politicians even know what FAI is. Why is that?

Comment by G0W51 on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-14T04:49:39.446Z · LW · GW

Why do people spend much, much more time worrying about their retirement plans than the intelligence explosion if they are a similar distance in the future? I understand that people spend less time worrying about the intelligence explosion than what would be socially optimal because the vast majority of its benefits will be in the very far future, which people care little about. However, it seems probable that the intelligence explosion will still have a substantial effect on many people in the near-ish future (within the next 100 years). Yet, hardly anyone worries about it. Why?

Comment by G0W51 on Open thread, Dec. 14 - Dec. 20, 2015 · 2015-12-23T21:48:32.097Z · LW · GW

I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I'm not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.

Comment by G0W51 on Open thread, Dec. 14 - Dec. 20, 2015 · 2015-12-22T00:06:07.877Z · LW · GW

Improving my rationality. Are you looking for something more specific?

Comment by G0W51 on Open thread, Dec. 14 - Dec. 20, 2015 · 2015-12-20T02:43:33.390Z · LW · GW

How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?

Comment by G0W51 on A Guide to Rational Investing · 2015-12-19T07:30:18.197Z · LW · GW

What exactly do you suggest using to invest, then?

Comment by G0W51 on How can I reduce existential risk from AI? · 2015-10-09T05:59:31.290Z · LW · GW

Some parties may be more likely to accelerate scientific progress than others, and those who do could decrease existential risk by decreasing the time spent in high-risk states, for example, the period when there are dangerous nanotechnological weapons but other astronomical objects have not been colonized. This probably is not enough to justify voting, but I thought I would just let you know.

Comment by G0W51 on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-09T00:36:49.967Z · LW · GW

Presumably "employ the same strategy" should be interpreted loosely, as it seems problematic to give no consideration to agents who would use a slightly different allocation strategy.

Thanks for the idea. I will look into it.

Comment by G0W51 on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-05T06:54:34.657Z · LW · GW

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV) and how much weight each agent will be given?

Nick Bostrom's Superintelligence mentions that it is an open problem whether AIs, non-human animals, currently deceased people, etc. should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than those of others. However, Bostrom does not actually answer these questions, other than slightly advocating that everyone be given equal weight in the CEV. The abstracts of other papers on CEV don't mention this topic, so I am doubtful about the usefulness of reading them in their entirety.

(This is a repost.)

Comment by G0W51 on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-29T03:41:43.589Z · LW · GW

Is it okay to re-ask questions on open threads if they were not answered the last time they were asked? I had asked this question but received no answer, and I am concerned it would be spammy to re-ask.

Comment by G0W51 on Wear a Helmet While Driving a Car · 2015-09-25T20:20:21.222Z · LW · GW

My gut says that the performance in a vehicle collision will probably bring the head to a halt against a relatively immobile object, so the hat won't do much of anything as the crushable bits are crushed too fast to be effective.

I don't see how the latter clause follows from the former. You said that in the drop test, the impact reduction was roughly 25%. This isn't huge, but I can't say it "won't do much of anything." Were you thinking of something else to support your claim?

Comment by G0W51 on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-09-25T00:55:53.107Z · LW · GW

What literature is available on who will be given moral consideration in a superintelligence's coherent extrapolated volition (CEV), and how much weight each agent will be given?

Nick Bostrom's Superintelligence mentions that it is an open problem whether AIs, non-human animals, currently deceased people, etc. should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than those of others. However, Bostrom does not actually answer these questions, other than slightly advocating that everyone be given equal weight in the CEV. The abstracts of other papers on CEV don't mention this topic, so I am doubtful about the usefulness of reading them in their entirety.

Comment by G0W51 on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-09-24T04:15:20.149Z · LW · GW

Don't worry, I don't mind math. Alas, I mainly have difficulty understanding why people act how they do, so I doubt mathematics will help much with that. I think I'm going to take the suggestion someone gave of reading more textbooks. A psychology course should also help.

Comment by G0W51 on Wear a Helmet While Driving a Car · 2015-09-23T21:25:04.633Z · LW · GW

Severity Index (I assume this is based on the head injury criterion?)

Actually, in an email they said the head of NOCSAE did the test, so presumably the NOCSAE Severity Index was used. An NOCSAE article says, "There is no measurable difference in safety of helmets with scores below the 1200 SI threshold." In other words, the hats did not protect against any significant damage in the test, because no significant damage was done even without the hat. Despite this, the webpage said that "The Crasche hat reduces the severity of blunt force impact by 94%." I count this deceptive marketing as a strike against the product.

That said, given the low cost of purchasing and wearing the hat, it seems worthwhile for a transhumanist to purchase, simply due to the vast gains to be had from a slight reduction in the risk of death.

Comment by G0W51 on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-09-23T20:18:06.788Z · LW · GW

I don't see how this would really help unless I am trying to do original research.

Comment by G0W51 on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-09-23T04:23:19.163Z · LW · GW

Where can one find information on the underlying causes of phenomena? I have noticed that most educational resources discuss superficial occurrences and trends but not their underlying causes. For example, this Wikipedia article discusses the happenings in the Somali Civil War but hardly discusses the underlying motivations of each side and why the war turned out how it did. Of course, such discussions are often opinionated and have no clear-cut answers, perhaps making Wikipedia a sub-optimal place for them.

I know LW might not be the best place to ask this, but my intuition suggests that LWers may care more about this deeper-level understanding, so may be able to suggest resources.

Comment by G0W51 on Wear a Helmet While Driving a Car · 2015-09-21T03:54:23.900Z · LW · GW

My gut says that the performance in a vehicle collision will probably bring the head to a halt...

Presumably, the impact would cause the pedestrian to fly back in roughly the same direction the car was moving during the impact, rather than come to a complete stop. That said, I don't really know enough about the tests to know if this would make a difference in efficacy. Could you link the exact data you received?

Comment by G0W51 on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-16T01:22:24.554Z · LW · GW

Perhaps the endowment effect evolved because placing high value on an object you own signals to others that the object is valuable, which signals that you are wealthy, which can increase social status, which can increase mating prospects. I have not seen this idea mentioned previously, but I only skimmed parts of the literature.

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-10T04:07:47.272Z · LW · GW

Yes, but I'm looking to see if it increases existential risk more than it decreases it, and if the increase is significant.

Comment by G0W51 on A model of UDT with a concrete prior over logical statements · 2015-08-09T21:42:36.715Z · LW · GW

Where exactly is the logical prior being used in the decision procedure? It doesn't seem like it would be used for calculating U, as U was implied to be computable. I don't see why it would be needed for p, as p could be computed from the complexity of the program, perhaps via Kolmogorov complexity. Also, what is the purpose of us? Can't the procedure just set us to be whatever U outputs?

Comment by G0W51 on Effects of Castration on the Life Expectancy of Contemporary Men · 2015-08-09T21:38:33.597Z · LW · GW

Oh, I think I see. Confidence is a feeling, while credence is a belief.

Comment by G0W51 on Effects of Castration on the Life Expectancy of Contemporary Men · 2015-08-09T14:54:13.382Z · LW · GW

I find it interesting that you both are underconfident and realize you are underconfident. Have you tried adjusting for underconfidence like you would any other cognitive bias? (At least you need not adjust for overconfidence!)

Comment by G0W51 on Effects of Castration on the Life Expectancy of Contemporary Men · 2015-08-09T14:45:35.247Z · LW · GW

Alternatively, the site could let the users determine what is good. Users could "like" or "dislike" articles, and these likes and dislikes would affect the reputation of the publisher. The higher the publisher's reputation, and the more likes and fewer dislikes an article has, the higher the article would rank in search results; articles with sufficiently low rankings would be hidden. Think Stack Exchange for science.

It could be expanded in many ways, for example by weighting likes and dislikes from high-status users more heavily than those from low-status ones, or by using numeric ratings instead. A rough sketch of how this might fit together is below.
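
A minimal sketch of the kind of scheme described above; every name, weight, and threshold here is an illustrative assumption, not part of any concrete proposal:

```python
from dataclasses import dataclass

@dataclass
class Publisher:
    reputation: float = 1.0

@dataclass
class Article:
    publisher: Publisher
    likes: float = 0.0
    dislikes: float = 0.0

def vote(article: Article, liked: bool, voter_status: float = 1.0) -> None:
    """Record a like/dislike, weighted by the voter's status, and update
    the publisher's reputation accordingly."""
    if liked:
        article.likes += voter_status
        article.publisher.reputation += 0.1 * voter_status
    else:
        article.dislikes += voter_status
        article.publisher.reputation -= 0.1 * voter_status

def search_rank(article: Article) -> float:
    """Rank articles by net votes scaled by publisher reputation."""
    return article.publisher.reputation * (article.likes - article.dislikes)

HIDE_THRESHOLD = -5.0  # articles ranked below this are hidden

def visible(article: Article) -> bool:
    return search_rank(article) > HIDE_THRESHOLD

# Example: a high-status user's like counts for more than a default one.
pub = Publisher()
art = Article(publisher=pub)
vote(art, liked=True, voter_status=2.0)
print(search_rank(art), visible(art))
```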

Comment by G0W51 on Effects of Castration on the Life Expectancy of Contemporary Men · 2015-08-08T21:07:47.593Z · LW · GW

Good job. Why hasn't this been published in a journal?

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-07T17:33:08.830Z · LW · GW

Oppression could cause an existential catastrophe if the oppressive regime is never ended.

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-07T09:09:30.856Z · LW · GW

I have heard (from the book Global Catastrophic Risks) that life extension could increase existential risk by giving oppressive regimes increased stability by decreasing how frequently they would need to select successors. However, I think it may also decrease existential risk by giving people a greater incentive to care about the far future (because they could be in it). What are your thoughts on the net effect of life extension?

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-06T18:18:53.289Z · LW · GW

The book Global Catastrophic Risks states that it does not appear plausible that molecular manufacturing will not come into existence before 2040 or 2050. I am not at all an expert on molecular manufacturing, but this seems hard to believe, given how little work seems to be going into it. I couldn't find any sources discussing when molecular manufacturing will come into existence. Thoughts?

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-06T09:18:38.121Z · LW · GW

But other than self-importance, why don't people take it seriously? Is it otherwise just due to the absurdity and availability heuristics?

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T12:20:26.727Z · LW · GW

I think something else is going on. The responses to this question about the feasibility of strong AI mostly stated that it was possible, though selection bias is probably largely at play, as knowledgeable people would be more likely to answer than ignorant ones would be.

Comment by G0W51 on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T03:58:20.099Z · LW · GW

Why don't people (outside small groups like LW) advocate the creation of superintelligence much? If it is Friendly, it would have tremendous benefits. If superintelligence's creation isn't being advocated out of fears of it being unFriendly, then why don't more people advocate FAI research? Is it just too long-term for people to really care about? Do people not think managing the risks is tractable?

Comment by G0W51 on Open Thread, Jul. 13 - Jul. 19, 2015 · 2015-07-18T10:28:46.492Z · LW · GW

Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive, non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, good material on these topics does exist, often in the form of LW blog posts.

So, what is the cause of the absence of a single, comprehensive list? Such a list sounds incredibly useful for making efficient use of LWers' time. Should one be made? If so, I am happy to make a post about it and state my recommendations.

Comment by G0W51 on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-06-30T04:50:13.926Z · LW · GW

See the post in the sequence "37 Ways That Words Can Be Wrong" called "Sneaking in Connotations." It seems to be roughly what you're talking about.

Comment by G0W51 on Open Thread, Jun. 29 - Jul. 5, 2015 · 2015-06-30T04:43:30.826Z · LW · GW

Here's a potential existential risk. Suppose a chemical is used for some task or made as a byproduct of another task, especially one that ends up spread throughout the atmosphere. Additionally, suppose it causes sterility, but only after a very long time. Perhaps such a chemical could attain widespread use before its deleterious effects are discovered, and by then it would have already sterilized everyone, potentially causing an existential catastrophe. I know the probability of this scenario seems very small compared to that of other risks, but is it worthy of consideration?

Comment by G0W51 on Open Thread, Jun. 22 - Jun. 28, 2015 · 2015-06-28T06:20:46.998Z · LW · GW

Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.

Comment by G0W51 on Open Thread, Jun. 15 - Jun. 21, 2015 · 2015-06-20T07:17:20.939Z · LW · GW

What are some recommended readings for those who want to decrease existential risk? I know Nick Bostrom's book Superintelligence, How can I reduce existential risk from AI?, and MIRI's article Reducing Long-Term Catastrophic Risks from Artificial Intelligence are useful, but what else? What about non-AI-related existential risks?

Comment by G0W51 on Open Thread, Jun. 8 - Jun. 14, 2015 · 2015-06-12T05:27:38.590Z · LW · GW

I've been reading the discussion between Holden et al. on the utility of charities aimed at directly decreasing existential risk, but the discussion seems to have ended prematurely. It (basically) started with this post, then went to this post. Holden made a comment addressing the post, but I think it didn't fully address the post, and I don't think Holden's comment was fully addressed either. Is there any place where the discussion continues?

Comment by G0W51 on Bayesian Adjustment Does Not Defeat Existential Risk Charity · 2015-06-12T05:11:23.357Z · LW · GW

That said, I don't accept any of the arguments given here for why it's unacceptable to assign a very low probability to a proposition. I think there is a general confusion here between "low subjective probability that a proposition is correct" and "high confidence that a proposition isn't correct"; I don't think those two things are equivalent.

I don't think you've really explained why you don't accept the arguments in the post. Could you please explain why and how the difference between assigning low probability to something and having high confidence it's incorrect is relevant? I have several points to discuss, but I need to fully understand your argument before doing so.

And yes, I know I am practicing the dark art of post necromancy. But the discussion has largely been of great quality and I don't think your comment has been appropriately addressed.

Comment by G0W51 on Bayesian Adjustment Does Not Defeat Existential Risk Charity · 2015-06-12T04:18:56.296Z · LW · GW

Karnofsky has, as far as I know, not endorsed measures of charitable effectiveness that discount the utility of potential people.

Actually, according to this transcript on page four, Holden finds that the claim that the value of creating a life is "some reasonable" ratio of the value of saving a current life is very questionable. More exactly, the transcript said:

Holden: So there is this hypothesis that the far future is worth n lives and this causing this far future to exist is as good as saving n lives. That I meant to state as an accurate characterization of someone else's view.

Eliezer: So I was about to say that it's not my view that causing a life to exist is on equal value of saving the life.

Holden: But it's some reasonable multiplier.

Eliezer: But it's some reasonable multiplier, yes. It's not an order of magnitude worse.

Holden: Right. I'm happy to modify it that way, and still say that I think this is a very questionable hypothesis, but that I'm willing to accept it for the sake of argument for a little bit. So yeah, then my rejoinder, as like a parenthetical, which is not meant to pass any Ideological Turing Test, it’s just me saying what I think, is that this is very speculative, that it’s guessing at the number of lives we're going to have, and it's also very debatable that you should even be using the framework of applying a multiplier to lives allowed versus lives saved. So I don't know that that's the most productive discussion, it's a philosophy discussion, often philosophy discussions are not the most productive discussions in my view.

Comment by G0W51 on Reason Poetry: f(me.0) · 2015-06-10T08:00:38.225Z · LW · GW

I don't think you were being obtuse. Your post wasn't bad per se; it was just off-topic, since, unless I am misinterpreting it, it doesn't really add anything new to rationality or applied rationality.

Also, in case you had trouble locating the open thread, just click "Discussion" at the top of the page, then click the link under "Latest Open Thread" on the right of the page.

Comment by G0W51 on Reason Poetry: f(me.0) · 2015-06-08T05:46:47.413Z · LW · GW

I don't think Less Wrong discussion is the best place for poetry, though someone please correct me if I am mistaken.

Comment by G0W51 on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2015-06-07T23:55:39.136Z · LW · GW

You're absolutely right. I'm not sure how I missed or forgot about reading that.

Comment by G0W51 on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2015-06-07T20:39:12.481Z · LW · GW

The article said the leverage penalty "[penalizes] hypotheses that let you affect a large number of people, in proportion to the number of people affected." If this is all the leverage penalty does, then it doesn't matter if it takes 3^^^3 atoms or units of computation, because atoms and computations aren't people.

That said, the article doesn't precisely define what the leverage penalty is, so there could be something I'm missing. So, what exactly is the leverage penalty? Does it penalize how many units of computation, rather than people, you can affect? This sounds much less arbitrary than the vague definition of "person" and sounds much easier to define: simply divide the prior of a hypothesis by the number of bits flipped by your actions in it and then normalize.
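
A minimal sketch of that last suggestion, assuming a finite hypothesis set; the priors and bit counts are purely illustrative stand-ins:

```python
def leverage_adjusted_prior(priors, bits_flipped):
    """Divide each hypothesis's prior by the number of bits your actions
    would flip under that hypothesis, then renormalize."""
    unnormalized = {h: p / max(bits_flipped[h], 1) for h, p in priors.items()}
    total = sum(unnormalized.values())
    return {h: u / total for h, u in unnormalized.items()}

# Example: an ordinary hypothesis vs. one in which you can affect a vast
# number of bits (10**100 stands in for something like 3^^^3).
priors = {"ordinary": 0.999, "mugger_is_honest": 0.001}
bits = {"ordinary": 10, "mugger_is_honest": 10**100}
print(leverage_adjusted_prior(priors, bits))
```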

Comment by G0W51 on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2015-06-07T02:23:54.827Z · LW · GW

What if the mugger says he will give you a single moment of pleasure that is 3^^^3 times more intense than a standard good experience? Wouldn't the leverage penalty not apply and thus make the probability of the mugger telling the truth much higher?

I think the real reason the mugger shouldn't be given money is that people are more likely to be able to attain 3^^^3 utils by donating the five dollars to an existential risk-reducing charity. Even though the current universe presumably couldn't support 3^^^3 utils, there is a chance of being able to create or travel to vast numbers of other universes, and I think this chance is greater than the chance of the mugger being honest.

Am I missing something? These points seem too obvious to miss, so I'm assigning a fairly large probability to me either being confused or that these were already mentioned.

Comment by G0W51 on Logical and Indexical Uncertainty · 2015-06-04T05:31:04.201Z · LW · GW

The second problem can easily be explained by having your utility function not be linear in the number of non-destroyed universes.
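
For instance, a concave utility function such as a logarithmic one (the log form is just an illustrative assumption) values additional surviving universes at a sharply diminishing rate:

```python
import math

def utility(non_destroyed_universes: float) -> float:
    """One possible non-linear (concave) choice: logarithmic utility."""
    return math.log1p(non_destroyed_universes)

# Doubling the number of surviving universes adds a roughly constant amount
# of utility rather than doubling it.
for n in (1, 2, 4, 8):
    print(n, utility(n))
```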

Comment by G0W51 on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-01T22:36:03.108Z · LW · GW

Is Solomonoff induction a theorem for making optimal probability distributions or a definition of them? That is to say, has anyone proved that Solomonoff induction produces probability distributions that are "optimal," or was Solomonoff induction created to formalize what it means for a prediction to be optimal? In the former case, how could they define optimality?

(And another question: I posted this a couple days ago on the last open thread, but it was closed before I got a response. Is it okay to repost it?)