Posts

The Dumbest Possible Gets There First 2022-08-13T10:20:26.564Z
Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" 2015-03-28T09:17:55.577Z
Musk on AGI Timeframes 2014-11-17T01:36:12.012Z

Comments

Comment by Artaxerxes on We're all in this together · 2023-12-05T21:40:49.760Z · LW · GW

But ultimately, for the parts that really matter here, this is a matter of explaining, not of defeating

Of course, defeating people who are mistakenly doing the wrong thing could also work, no? Even if we assume that people doing the wrong thing are merely making a mistake by their own lights, it might be practically much more feasible to divert them away from doing it, or otherwise prevent them from doing it, rather than relying on successfully convincing them not to.

Not all people are going to be equally amenable to explanation. It's not obvious to me at least that we should limit ourselves to that tool in the toolbox as a rule, even under an assumption where everyone chasing bad outcomes is simply mistaken/confused.

But I'm pretty sure nobody in charge is on purpose trying to kill everyone; they're just on accident functionally trying to kill everyone.

I'm less sure about this. I've met plenty of human extinctionists. You could argue that they're just making a mistake, that it's just an accident. But I do think it is meaningful that there are people who are willing to profess that they want humanity to go extinct and take actions in the world that they think nudge us in that direction, and other people who don't do those things. The distinction is a meaningful one, even under a model where you claim that such people are fundamentally confused and that if they were somehow less confused they would pursue better things.

Comment by Artaxerxes on So you want to save the world? An account in paladinhood · 2023-11-23T06:28:41.575Z · LW · GW

What kinds of reactions to and thoughts about the post did you have that you got a lot out of observing?

Comment by Artaxerxes on When do "brains beat brawn" in Chess? An experiment · 2023-07-01T12:47:20.820Z · LW · GW

On the other hand, the potential resource imbalance could be ridiculously high, particularly if a rogue AI is caught early on its plot, with all the world’s militaries combined against them while they still have to rely on humans for electricity and physical computing servers. It’s somewhat hard to outthink a missile headed for your server farm at 800 km/h. ... I hope this little experiment at least explains why I don’t think the victory of brain over brawn is “obvious”. Intelligence counts for a lot, but it ain’t everything.

While this is a true and important thing to realise, I don't think of it as the kind of information that does much to comfort me with regards to AI risk. Yes, if we catch a misaligned AI sufficiently early, such that it is below whatever threshold of combined intelligence and resources is needed to kill us, then there is a good chance we will choose to prevent it from doing so. But this could happen thousands of times and still feel rather beside the point, because it only takes one situation where the AI isn't below that threshold and therefore does still kill us all.

If we can identify even roughly where various thresholds are, and find some equivalent of leaving the AI with a king and three pawns where we have a ~100% chance of stopping it, then sure, that information could be useful, and perhaps we could coordinate around ensuring that no AI that would kill us all, should it get more material, ever actually gets more than that. But even after clearing the technical challenge of finding such thresholds with much certainty in such a complex world, the coordination challenge of actually getting everyone to stick to them, despite the incentives to make more useful AI by giving it more capability and resources, would still remain.

Still worthwhile research to do of course, even if it ends up being the kind of thing that only buys some time.

Comment by Artaxerxes on love, not competition · 2022-10-31T16:17:26.949Z · LW · GW

So you are effectively a revolutionary.

I'm not sure about this label; how government/societal structures will react to the eventual development of life extension technology remains to be seen, so revolutionary action may not be necessary. But regardless of which label you pick, it's true that I would prefer not to be killed merely so others can reproduce. I'm more indifferent to the specifics of how that should be achieved than you seem to imagine - there are a wide range of possible societies in which I am allowed to survive, not just variations on those you described.

Comment by Artaxerxes on love, not competition · 2022-10-31T15:01:28.822Z · LW · GW

I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The decision is between using those resources to support you vs using those resources to support someone else's child.

That's an example of something the resources could go towards, under some value systems, sure. Different value systems would suggest that different entities or purposes would make best moral use of those resources, of course.

To try to make things clear: yes, what I said is perfectly compatible with what you said. Your reply feels like you're trying to tell me something that you think I'm not aware of, but the point you're replying to already encompasses the example you gave - "someone else's child" is a potential candidate for "the next best thing you could do with the resources used to run me" under some value systems.

Comment by Artaxerxes on love, not competition · 2022-10-31T03:49:00.396Z · LW · GW

I don't think you have engaged with my core point so I'll just state it again in a different way: continuous economic growth can support some mix of both reproduction and immortality, but at some point in the not distant future ease/speed of reproduction may outstrip economic growth, at which point there is a fundamental inescapable choice that societies must make between rentier immortality and full reproduction rights.

I think you may be confusing me for arguing for reproduction over immortality, or arguing against rentier existence - I am not. Instead I'm arguing simply that you haven't yet acknowledged the fundamental tradeoff and its consequences.

I thought I made myself very clear, but if you want I can try to say it again differently. I simply choose myself and my values over values that aren't mine.

The tradeoff between reproduction and immortality is only relevant if reproduction has some kind of benefit - if it doesn't, then you're trading off a good against something that has no value. For some, with different values, there might be a difficult choice to make and the tradeoff is real. But for me, not so much.

As for the consequences, sacrificing immortality for reproduction means I die, which is itself the thing I'm trying to avoid. Sacrificing reproduction for immortality on the other hand seems to get me the thing I care about. The choice is fairly clear on the consequences.

Even on a societal level, I simply wish not to be killed, including for the purpose of allowing for the existence of other entities that I value less than my own existence, and whose values are not mine. I merely don't want the choice to be made for me in my own case, and if that can be guaranteed, I am more than fine with others being allowed to make their own choices for themselves too.

Say you asked me anyway what I would prefer for the rest of society. What I might advocate for others would be highly dependent on individual factors. Maybe I would care about things like how much a particular existing person shares my values, and compare that to how much a new person would share my values. Eventually perhaps I would be happy with the makeup of the society I'm in, and prefer that no more reproduction take place. But really it's only an interesting question insofar as it's instrumentally relevant to much more important concerns, and it doesn't seem likely that I will be in a privileged position to affect such decisions in any case.

Comment by Artaxerxes on love, not competition · 2022-10-31T01:36:14.069Z · LW · GW

Of course I have a moral opportunity cost. However, I personally believe that this opportunity cost is low, or at least it seems that way to me. I think that the next best thing you could do with the resources used to run me if you were to liquidate me would be very likely to be of less moral value than running me, at least to my lights, if not to others'.

The question of what to do about scarcity of resources seems like a potentially very scary one then, for exactly the reasons that you bring up - I don't think, for example, that a political zeitgeist that guarantees my death is one that does a great job of maximizing what I believe to be valuable.

In the long term the evolution of a civilization does seem to benefit from turnover - ie fresh minds being born - which due to the simple and completely unavoidable physics of energy costs necessarily implies indefinite economic growth or that some other minds must sleep.

I will say that I am skeptical that what "benefit" is capturing here is what I think we should really care about. Perhaps some amount of turnover will help us successfully compete with other alien civilisations that we run across - I can understand that, though I hope it isn't necessary. But absent competitive pressures like this, I think it's okay to take a stand for your own life and values over those of newer, different minds with new, different values. Their values are not necessarily mine, and we should be careful not to sacrifice our own values for some nebulous "benefit" that may never come to be.

Of course, if it is your preference, if sleeping or dying so that some new minds can be born is genuinely what your own values call for, then I can understand why you might choose to voluntarily do so and sacrifice yourself. But I think it is a decision people should take very carefully, and I certainly don't wish for the civilisation I live in to make the choice for me and sacrifice me for such reasons.

Comment by Artaxerxes on Musk on AGI Timeframes · 2022-10-31T00:14:33.560Z · LW · GW

The "10 years at most" part of the prediction is still open, to be fair.

Comment by Artaxerxes on love, not competition · 2022-10-30T23:48:04.384Z · LW · GW

While this seems to me to be true, as a non-maximally competitive entity by various metrics myself, I see it more as an issue to overcome or sidestep somehow, in order to enjoy the relative slack that I would prefer. It would seem distastefully Molochian to me if someone were to suggest that I and people like me should be retired/killed in order to use the resources to power some more "efficient" entity, by whatever metrics this efficiency is calculated.

To me it seems likely that pursuing economic efficiencies of this kind could easily wipe out what I personally care about, at the very least. I see Hanson's em worlds, for example, as probably quite hellish as a future, or, if we're luckier, closer to a "Disneyland with no Children" style scenario.

I strongly hope that my values and people who share my values aren't outcompeted in this way in the future, as I want to be able to have nice things and enjoy my life. As we may yet succeed in extending the Dream Time, I would urge people to recognize that we still have the power to do so and preserve much of what we care about, and not be too eager to race to the bottom and sacrifice everything we know and love.

Comment by Artaxerxes on Five Areas I Wish EAs Gave More Focus · 2022-10-28T20:35:10.047Z · LW · GW

You also appeal to just open-ended uncertainty

I think it would be more accurate to say that I'm simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as causally distant from it as AI risk is all well and good, but I think you would need to put a lot of time and effort into it in order to be at all confident about things like the directionality of the overall net effect.

I would go as far as to say that the implementation details of how we get life extension could themselves change the sign of the impact with regards to AI risk - there are enough different possible scenarios for how it could go, each amplifying different components of its impact on AI risk, to produce different overall net effects.

What are some additional concrete scenarios where longevity research makes things better or worse? 

So first, you didn't respond to the example I gave with regards to preventing human capital waste (preventing people with experience/education/knowledge/expertise from dying of aging-related disease), and the additional slack from the additional general productive capacity in the economy more broadly that is then able to go into AI capabilities research.

Here's another one. Let's say medicine and healthcare become a much smaller field after the advent of popularly available regenerative therapies that prevent diseases of old age. In this world people only need to see a medical professional when they face injury or the increasingly rare infection by a communicable disease. The demand for medical professionals shrinks massively, and the best and brightest (medical programs often have the highest/most competitive entry requirements) that would have gone into medicine are routed elsewhere, including into AI, accelerating capabilities and shortening overall timelines.

An assumption that much might hinge on is that I expect differential technological development to pretty heavily favour accelerating capabilities over safety in circumstances where additional resources are made available for both. This isn't necessarily going to be the case, of course - the resources could in theory be routed exclusively towards safety - but I just don't expect most worlds to go that way, or even for the ratio of resources allocated towards safety to be high enough that the additional resources yield positive expected value very often. But even something as basic as this is subject to a lot of uncertainty.

Comment by Artaxerxes on Five Areas I Wish EAs Gave More Focus · 2022-10-28T16:49:53.454Z · LW · GW

Strongly agree on life extension and the sheer scale of the damage caused by aging-related disease. It has always confused me somewhat that more EA attention hasn't gone towards this cause area, considering how enormous the potential impact is and how well it has always seemed to me to perform on the important/tractable/neglected criteria.

Comment by Artaxerxes on Five Areas I Wish EAs Gave More Focus · 2022-10-28T16:19:43.262Z · LW · GW

An alternative to a tractability-and-neglect based argument is an importance-based argument. There's a lot of pessimism about the prospects for technical AI alignment. If serious life extension becomes a real possibility without depending on an AI singularity, that might convince AI capabilities researchers to slow down or stop their research and prioritize AI safety much more. Possibly, they might become more risk-averse, realizing that they no longer have to make their mark on humanity within the few decades that ordinary lifespans allow for a career. Possibly, they might even be creating AI with the main hope that the AI will cure aging and let them live a very long time. Showing that superintelligent AI isn't necessary for this outcome might convince them to slow down. If we're as pessimistic as Eliezer Yudkowsky about the prospects for technical AI alignment, then maybe we ought to move to an array of alternative strategies.

This is a very interesting line of argument that I wish were true, but I'm not sure it's very convincing as it stands. We can hypothesize about capabilities researchers who are relying on making advancements in AI in order to make a mark during their finite lifespans, or in order for the AI to cure aging-related disease and save them from dying. But how many capabilities researchers are actually primarily motivated by these factors, such that solving aging would significantly move the needle in convincing them not to work on AI?

What's also missing is an acknowledgement that some of the forces could push in the other direction - that solving the diseases of old age would contribute to greater AI risk in various ways. Aubrey de Grey is an example of a highly prominent figure in life extension and aging-related disease who was originally an AI capabilities researcher, and only changed careers because he thought aging was both more neglected and more important.

Another possibility is that solving aging-related disease could extend the productive lifespan of capabilities researchers. John Carmack, for example, is a prodigious software engineer in his 50s who has recently decided to put all of his energy into AI capabilities research, and he's pushing on with this despite people trying to convince him about the risks[1]. Morbid and tasteless as it might sound, it's possible in principle that success in life extension/aging-related-disease research would give people like him enough additional productive and healthy years with which to become the creator of doom, whereas in worlds like ours where such breakthroughs are not made, they are limited by when they are struck down by death or dementia.

Those are very small examples, but in any case it isn't obvious to me where things would balance out, considering the myriad complicated possible nth-order effects of such a massive change. You could speculate all day about these: maybe the sheer surplus of economic resources/growth from, e.g., no longer having to deal with the massive human capital loss/turnover caused by aging-related disease killing everyone after a while results in significantly more resources going into capabilities research, speeding up timelines. There are plenty of ways things could go.

  1. ^

    Eliezer Yudkowsky has personally tried to convince him about AI risk without success. This despite Carmack being an HPMOR fan.

Comment by Artaxerxes on confusion about alignment requirements · 2022-10-06T20:20:27.012Z · LW · GW

What you are looking for sounds very much like Vanessa Kosoy's agenda

As it so happens, the author of the post also wrote this overview post on Vanessa Kosoy's PreDCA protocol.

Comment by Artaxerxes on everything is okay · 2022-08-24T10:45:36.175Z · LW · GW

Thanks for writing! I'm a big fan of utopian fiction; it's really interesting to hear idealised depictions of how people would want to live and how they might want the universe to look. The differences and variation between attempts are fascinating - I genuinely enjoy seeing how different people think different things are important, the different things they value and which aspects they focus on in their stories. It's great when you can get new ideas yourself about what you want out of life, things to aspire to.

I wouldn't mind at all if writing personal utopian fiction on LW were to become a trend. Like you say, it feels important, not just to help a potential AI and get people thinking about it, but also to help inspire each other, to give each other new ideas about what we could enjoy in the future.

Comment by Artaxerxes on The Dumbest Possible Gets There First · 2022-08-15T11:43:27.317Z · LW · GW

Yes, I do expect that if we don't get wiped out, we may get somewhat bigger "warning shots" that humanity is likely to pay more attention to. I don't know how much that actually moves the needle, though.

Ok sure but extra resources and attention is still better than none. 

This isn't obvious to me; it might make things harder. Consider how Elon Musk read Superintelligence and started developing concerns about AI risk, but the result was that he founded OpenAI and gave it a billion dollars to play with - and I think you could make an argument that doing so accelerated timelines and reduced our chances of avoiding negative outcomes.

Comment by Artaxerxes on The Dumbest Possible Gets There First · 2022-08-14T14:35:18.310Z · LW · GW

I'm fairly agnostic about how dumb we're talking - about what kinds of acts or confluences of events are actually likely to be effective as complete x-risks, particularly at relatively low levels of intelligence/capability. But that's beside the point in some ways, because wherever someone might place the threshold for x-risk-capable AI, as long as you assume that greater intelligence is harder to produce (an assumption that doesn't necessarily hold, as I acknowledged), I think that suggests we will be killed by something not much above that threshold once it's first reached.

Evolution tries many very similar designs, always moving in tiny steps through the search space. Humans are capable of moving in larger jumps. Often the difference between one attempt and the next is several times more compute. No one trained something 90% as big as GPT3 before GPT3. 

This is true for now, but there's a sense in which the field is in a low-hanging-fruit-picking stage of development, where there's plenty of room to scale massively fairly easily. If the thresholds are crossed during a stage like this, where everyone is rushing to collect big, easy advances, then yes, I would expect the gap between how intelligent/capable the AI that kills us is and how intelligent it needed to be to be much higher (but still not that much higher, unless e.g. fast takeoff etc.). Conversely, in a world where progress is in a more incremental stage, I would expect a smaller gap.

If the AI is on a computer system where it can access its own code. (Either deliberately, or through dire security even a dumb AI can break) the amount of intelligence needed to see "neuron count" and turn it up isn't huge. Basically, the amount of power needed to do a bit of AI research to make itself a little smarter is on the level of a smart AI expert. The intelligence needed to destroy humanity is higher than that. 

Sure, we may well have dumb AI failures first. That stock market crash. Maybe some bug in a flight control system that makes lots of planes do a nosedive at once. But it's really hard to see how an AI failure could lead to human extinction, unless the AI was smart enough to develop new nanotech (And if it can do that, it can make itself smarter). 

The first uranium pile to put out a lethal dose of radiation can put out many times a lethal dose of radiation. Because a subcritical pile doesn't give out enough radiation. And once the pile is critical, it quickly ramps up to loads of radiation. 

Can you name any strategy the AI could use to wipe out humanity, without strongly implying an AI smart enough for substantial self improvement?

Self-improvement to me doesn't automatically mean RSI takeoff to infinity - an AI that self-improves up to a point where it is capable of wiping out humanity but has not yet reached criticality seems possible to me.

I agree though that the availability of powerful grey/black ball technologies like nanotech - which would require fewer variables going wrong and less intelligence for an AI to nevertheless plausibly represent an x-risk - is a big factor. Other existing technologies like engineered pandemics or nuclear weapons, while dangerous, seem somewhat difficult even with AI to leverage into fully wiping out humanity, at least by themselves, even if they could lead to worlds that are much more vulnerable to further shocks.

Comment by Artaxerxes on 2017 LessWrong Survey · 2017-09-15T13:13:55.805Z · LW · GW

I did it!

Comment by Artaxerxes on Superintelligence discussed on Startalk · 2017-03-20T05:31:18.581Z · LW · GW

The segment on superintelligence starts at 45:00; it's a rerun of a podcast from 2 years ago. Musk says it's a concern. Bill Nye, commenting on Musk's comments about it afterwards, says that we would just unplug it and is dismissive. Neil is similarly skeptical and half-heartedly plays devil's advocate but clearly agrees with Nye.

Comment by Artaxerxes on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-16T13:49:02.246Z · LW · GW

I'd even suspect that it's possibly more open to being abused by assholes. Or at least, pushing in the direction of "tell" may mean less opportunity for asshole abuse in many cases.

Comment by Artaxerxes on Dan Carlin six hour podcast on history of atomic weapons · 2017-02-09T17:47:45.984Z · LW · GW

I've heard good things about Dan Carlin's podcasts about history but I've never been sure which to listen to first. Is this a good choice, or does it assume you've heard some of his other ones, or perhaps are other podcasts better to listen to first?

Comment by Artaxerxes on Open thread, Jan. 30 - Feb. 05, 2017 · 2017-02-07T09:26:00.524Z · LW · GW

Whose Goodreads accounts do you follow?

Comment by Artaxerxes on Open thread, Jan. 30 - Feb. 05, 2017 · 2017-01-31T21:33:17.614Z · LW · GW

If you buy a Humble Bundle these days, it's possible to use their neat sliders to allocate all of the money you're spending towards charities of your choice via the PayPal Giving Fund, including LessWrong favourites like MIRI, SENS and the Against Malaria Foundation. This appears to me to be a relatively interesting avenue for charitable giving, considering that it is (at least apparently) as effective per dollar spent as a direct donation to these charities would be.

Contrast this with buying games from the Humble Store, which merely allocates 5% of the money spent to a chosen charity, or using Amazon Smile, which allocates a minuscule 0.5% of the purchase price of anything you buy. While these services are obviously a lot more versatile in terms of the products on offer, to me they are clearly more something you set up if you're going to be buying stuff anyway, rather than what this appears to be: a particular opportunity.

Here are a couple of examples of the kinds of people for whom I think this might be worthwhile:

  1. People who are interested in video games or comics or whatever else is available in Humble Bundles, to purchase them entirely guilt-free, with the knowledge that the money is going to organisations they like.

  2. People who are averse to more direct giving and donations for whatever reason to be able to support organisations they approve of in a more comfortable, transactional way, in a manner similar to buying merchandise.

  3. People who may be expected to give gifts as part of social obligation, and for whom giving gifts of the kinds of products offered in these bundles is appropriate, to do so while all of the money spent goes to support their pet cause.

Comment by Artaxerxes on Open thread, Nov. 14 - Nov. 20, 2016 · 2016-11-15T05:18:03.100Z · LW · GW

Can anyone explain to me what non-religious spirituality means, exactly? I had always thought it was an overly vague, even meaningless, new age term in that context, but I've been hearing people like Sam Harris use the term unironically, and 5+% of LW are apparently "atheist but spiritual" according to the last survey, so I figure it's worth asking to find out if I'm missing out on something non-obvious. The Wikipedia page describes a lot of distinct, different ideas when it isn't impenetrable, so that didn't help. There's one line there where it says

The term "spiritual" is now frequently used in contexts in which the term "religious" was formerly employed.

and that's mostly how I'm familiar with its usage as well.

Comment by Artaxerxes on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T03:01:18.247Z · LW · GW

This is a really good comment, and I would love to hear responses to objections of this flavour from Eliezer etc.

Saying "we haven't had a nuclear exchange with Russia yet, therefore our foreign policy and diplomatic strategy is good" is an obvious fallacy. Maybe we've just been lucky.

I mean, it's less about whether or not current policy is good and more about trying to work out how likely it is that the policies resulting from Trump's election will be worse. You can presuppose that current policies are awful and still think that Trump is likely to make things much worse.

Comment by Artaxerxes on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T02:38:59.134Z · LW · GW

Like, reading through Yudkowsky's stuff, his LW writings and HPMOR, there is the persistent sense that he is 2 guys.

One guy is like "Here are all of these things you need to think about to make sure that you are effective at getting your values implemented". I love that guy. Read his stuff. Big fan.

Other guy is like "Here are my values!" That guy...eh, not a fan. Reading him you get the idea that the whole "I am a superhero and I am killing God" stuff is not sarcastic.

It is the second guy who writes his facebook posts.

Yes, I agree with this sentiment and am relieved someone else communicated it so I didn't have to work out how to phrase it.

I don't share (and I don't think my side shares), Yudkowsky's fetish for saving every life. When he talks about malaria nets as the most effective way to save lives, I am nodding, but I am nodding along to the idea of finding the most effective way to get what you want done, done. Not at the idea that I've got a duty to preserve every pulse.

I don't think Yudkowsky thinks malaria nets are the best use of money anyway, even if they are in the short term the current clearest estimate as to where to put your money in order to maximise lives saved. In that sense I don't think you disagree with him: he doesn't fetishize preserving pulses, in the same way that you don't. Or at least, that's what I remember reading. The first thing I could find corroborating that model of his viewpoint is his interview with Horgan:

There is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.

There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)

I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of Givewell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.

I think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.

Also, on this:

Yes, electing Hillary Clinton would have been a better way to ensure world prosperity than electing Donald Trump would. That is not what we are trying to do. We want to ensure American prosperity.

Especially here, I'm pretty sure Eliezer is more concerned about general civilisational collapse and other globally negative outcomes, which he sees as non-trivially more likely with Trump as president. I don't think this is so much a difference in values - specifically, a difference in how much you each value each level of the concentric circles of proximal groups around you. At the very least, I don't think he would agree that a Trump presidency would be likely to result in improved American prosperity over Clinton.

I just want to point out that Yudkowsky is making the factual mistake of modeling us as being shitty at achieving his goals, when in truth we are canny at achieving our own.

I think this is probably not what's going on; I honestly think Eliezer is being more big-picture about this, in the sense that he is concerned more about the increased probability of doomsday scenarios and other outcomes that are unambiguously bad for most human goals. That's the message I got from his facebook posts, anyway.

Comment by Artaxerxes on Open thread, Sep. 12 - Sep. 18, 2016 · 2016-09-12T15:50:07.328Z · LW · GW

LessWrong has, if anything, made me more able to derive excitement and joy from minor things, so if I were you I would check whether LW is really to blame, or otherwise find out if there are other factors causing this problem.

Comment by Artaxerxes on September 2016 Media Thread · 2016-09-04T02:28:40.523Z · LW · GW

You didn't link to your MAL review for Wind Rises!

Comment by Artaxerxes on September 2016 Media Thread · 2016-09-04T02:25:14.076Z · LW · GW

Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity by Holden Karnofsky. Somehow missed this when it was posted in May.

Compare, for example, Thoughts on the Singularity Institute (SI) one of the most highly upvoted posts ever on LessWrong.

Edit: See also Some Key Ways in Which I've Changed My Mind Over the Last Several Years

Comment by Artaxerxes on Open Thread, Aug. 1 - Aug 7. 2016 · 2016-08-01T04:03:50.211Z · LW · GW

What's the worst case scenario involving climate change, given that for some reason no large-scale wars occur due to the instability it contributes?

Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and have more room for attention.

But I realised recently that my understanding of climate change related risks could probably be better, and I'm not easily able to compare the scale of climate change related risks to other causes. In particular I'm interested in estimations of metrics such as lives lost, economic cost, and similar.

If anyone can give me a rundown or point me in the right direction that would be appreciated.

Comment by Artaxerxes on April 2016 Media Thread · 2016-04-08T06:11:36.359Z · LW · GW

Sure, but that doesn't change all the tax he evaded.

Comment by Artaxerxes on April 2016 Media Thread · 2016-04-06T11:47:45.059Z · LW · GW

There is this.

Comment by Artaxerxes on April 2016 Media Thread · 2016-04-06T11:45:25.703Z · LW · GW

Not to mention all that tax evasion never actually got resolved.

Comment by Artaxerxes on Open Thread March 28 - April 3 , 2016 · 2016-04-01T01:29:16.094Z · LW · GW

CGP Grey has read Bostrom's Superintelligence.

Transcript of the relevant section:

Q: What do you consider the biggest threat to humanity?

A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.

I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.

He also apparently discusses this topic on his podcast, and links to the amazon page for the book in the description of the video.

Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't too far off realising that there were other, rather plausible implications of increasing AI capability as well, so it's cool to see that it happened.

Comment by Artaxerxes on Open Thread March 28 - April 3 , 2016 · 2016-03-31T10:39:17.301Z · LW · GW

This exists, at least.

Comment by Artaxerxes on Lesswrong 2016 Survey · 2016-03-27T07:42:08.188Z · LW · GW

Took it!

It ended somewhat more quickly this time.

Comment by Artaxerxes on Lesswrong 2016 Survey · 2016-03-27T07:09:56.999Z · LW · GW

Typo question 42

Yes but I don't think it's logical conclusions apply for other reasons

Comment by Artaxerxes on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T06:16:26.825Z · LW · GW

Dawkins' Greatest Show on Earth is pretty comprehensive. The shorter the work as compared to that, the more you risk missing widely held misconceptions people have.

Comment by Artaxerxes on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T06:58:41.762Z · LW · GW

Not a guide, but I think the vocab you use matters a lot. Try tabooing 'rationality'; the word itself mindkills some people straight to straw Vulcan etc. Do the same with any other words that have the same effect.

Comment by Artaxerxes on Open thread, December 7-13, 2015 · 2015-12-12T03:46:03.446Z · LW · GW

I recall being taught to argue towards the predetermined point of view in schools and extra-curriculum activities like debating. Is that counterproductive or suboptimal?

This has been talked about before. One suggestion is to not make it a habit.

Comment by Artaxerxes on Open thread, December 7-13, 2015 · 2015-12-12T03:37:46.189Z · LW · GW

Could you without intentionally listening to music for 30 days?

Can you rephrase this?

Comment by Artaxerxes on December 2015 Media Thread · 2015-12-02T20:41:43.252Z · LW · GW

Yeah, I pretty much agree, but the important point to make is that any superintelligent ant hive hypotheses would have to be at least as plausible and relevant to the topic of the book as Hanson's ems to make it in. Note Bostrom dismisses brain-computer interfaces as a superintelligence pathway fairly quickly.

Comment by Artaxerxes on December 2015 Media Thread · 2015-12-02T16:44:29.992Z · LW · GW

This interlude is included despite the fact that Hanson’s proposed scenario is in contradiction to the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I.

I can't say I agree with your reasoning behind why Hanson's ideas are in the book. I think the book's content is written with accuracy in mind first and foremost, and I think Hanson's ideas are there because Bostrom thinks they're genuinely a plausible direction the future could go, especially in circumstances where recursively self-improving AI of the kinds traditionally envisioned turns out to be unlikely or difficult or impossible for whatever reason. I don't think those ideas are there in an effort to mine the halo effect.

And really, the book's main thrust is in the title: Paths, Dangers, Strategies. Even if these outcomes are not necessarily mutually exclusive (including the possibility of singletons forming out of initially multipolar outcomes, as discussed from p.176 onwards), talking about potential pathways is very obviously relevant, I would have thought.

Comment by Artaxerxes on November 2015 Media Thread · 2015-11-11T21:28:25.327Z · LW · GW

Announced? Orokamonogatari came out in October.

Comment by Artaxerxes on Rationality Quotes Thread November 2015 · 2015-11-08T17:45:22.331Z · LW · GW

This is a great quote, but even more so than Custers and Lees, I feel like we need someone not so much on the front lines but someone to win the whole war - maybe a Lincoln, though my knowledge of the American Civil War is poor. Preventing death from most relevant causes (aging, infectious disease, etc.) seems within reach before the end of the century, as a conservative guess. Hastening victory in that war means that society will no longer need so many generals, Lees, Custers or otherwise.

Comment by Artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-06T09:32:47.627Z · LW · GW

It's rather depressing that progress of this kind seems so impossible. Thanks for the link.

Comment by Artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-06T09:29:14.382Z · LW · GW

The videos you linked were already accounted for. The vid of the Superintelligence panel with Musk, Soares, Russell, Bostrom, etc. is the one that's been missing for so long.

Comment by Artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-11-01T12:47:52.639Z · LW · GW

There are still plenty of videos from EA Global nowhere to be found on the net. If anyone could point me in the direction of, for example, the superintelligence panel with Elon Musk, Nate Soares, Stuart Russell, and Nick Bostrom that'd be great.

Why has the organisation of uploading these videos been so poor? I am assuming that the intention is not to hide away any record of what went on at these events. Only the EA Global Melbourne vids are currently easily findable.

Comment by Artaxerxes on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-30T04:24:07.114Z · LW · GW

https://www.reddit.com/r/nootropics is a decent start, also check the sidebar

Comment by Artaxerxes on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-21T05:55:04.035Z · LW · GW

This post discusses MIRI, and what they can do with funding of different levels.

What are you looking for, more specifically?

Comment by Artaxerxes on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-14T10:49:01.502Z · LW · GW

You could be depressed.