Posts

The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis" 2018-11-25T01:34:23.662Z · score: -1 (9 votes)
A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 2) 2018-05-13T19:41:15.326Z · score: 22 (10 votes)
A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1) 2018-05-12T13:34:21.381Z · score: 33 (13 votes)
Is life worth living? 2017-08-30T10:42:36.483Z · score: 4 (6 votes)
Is there a flaw in the simulation argument? 2017-08-29T14:34:33.109Z · score: 2 (2 votes)
Could the Maxipok rule have catastrophic consequences? (I argue yes.) 2017-08-25T10:00:52.097Z · score: 6 (6 votes)
A problem in anthropics with implications for the soundness of the simulation argument. 2016-10-19T21:07:51.380Z · score: 5 (6 votes)
Agential Risks: A Topic that Almost No One is Talking About 2016-10-15T18:41:15.088Z · score: 11 (11 votes)
Estimating the probability of human extinction 2016-02-17T16:19:26.793Z · score: 5 (6 votes)

Comments

Comment by philosophytorres on Possible worst outcomes of the coronavirus epidemic · 2020-03-20T13:03:13.707Z · score: 1 (1 votes) · LW · GW

Also worth noting: if the onset of global catastrophes is random, then global catastrophes will tend to cluster together, so we might expect another global catastrophe before this one is over. (See the "clustering illusion.")

Comment by philosophytorres on A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1) · 2018-05-13T19:41:52.948Z · score: 5 (3 votes) · LW · GW

Part 2 can now be read here: https://www.lesswrong.com/posts/pbFGhMSWfccpW48wd/a-detailed-critique-of-one-section-of-steven-pinker-s

Comment by philosophytorres on Is life worth living? · 2017-08-30T14:08:44.565Z · score: 0 (0 votes) · LW · GW

It's amazing how many people on FB answered this question, "Annihilation, no question." Really, I'm pretty shocked!

Comment by philosophytorres on Is there a flaw in the simulation argument? · 2017-08-29T18:31:32.863Z · score: 0 (0 votes) · LW · GW

“I'm getting kind of despairing at breaking through here, but one more time.” Same here, because you still haven’t addressed the relevant issue, and yet you appear to be getting pissy, which is no bueno.

By analogy: in Scenario 2, everyone who has wandered through room Y but says that they’re in room X is wrong, yeah? The answer they give does not accurately represent reality. The “right” call for those who’ve passed through room Y is that they actually passed through room Y. I hope we can at least agree on this.

Yet it remains 100% true that at any given timeslice, when the question “Which room are you in right now, this very moment?” is posed, nearly everyone will win some money if they say room X.

Not sure how to make this point clearer to you: if you want to make the argument that you’re making, then you’ll have to say that the new, additional historical/diachronic information of Scenario 2 changes your mind about which room you’re in, from room X to room Y.

The point: diachronic information is irrelevant to winning a bet about whether you're in room X or room Y at Tx. If this logic holds with rooms, it should also hold with simulations: diachronic information is irrelevant to winning a bet about whether you're in a simulation or not, if the number of non-sims far exceeds the number of sims when you answer the question, "Where are you right now?"

Comment by philosophytorres on Is there a flaw in the simulation argument? · 2017-08-29T18:07:52.962Z · score: 0 (0 votes) · LW · GW

"The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real." You're right about this, because it's a metaphysical issue. The question, though, is epistemological: what does one have reason to believe at any given moment? If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.

Comment by philosophytorres on Is there a flaw in the simulation argument? · 2017-08-29T16:49:58.975Z · score: 1 (1 votes) · LW · GW

"Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. ... Who cares? No reason to think that's our future." The point is to imagine a possible future -- and that's all it needs to be -- that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I've identified a possible future that is neither (i), (ii), nor (iii).

Comment by philosophytorres on Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 2017-08-29T13:12:59.677Z · score: 1 (1 votes) · LW · GW

"My 5 dollars: maxipoc is mostly not about space colonisation, but prevention of total extinction." But the goal of avoiding an x-catastrophe is to reach technological maturity, and reaching technological maturity would require space colonization (to satisfy the requirement that we have "total control" over nature). Right?

Comment by philosophytorres on Could the Maxipok rule have catastrophic consequences? (I argue yes.) · 2017-08-29T13:10:38.347Z · score: 0 (0 votes) · LW · GW

Yes, good points. As for "As result, we only move risks from one side equation to another, and even replace known risks with unknown risks," another way to put the paper's thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it's an argument for why MIRI's work matters.

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-20T20:08:23.590Z · score: 1 (1 votes) · LW · GW

What do you mean? How is mitigating climate change related to blackmail?

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-20T20:07:56.495Z · score: 1 (1 votes) · LW · GW

I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it's no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-20T20:06:02.303Z · score: 2 (2 votes) · LW · GW

Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-20T20:04:53.237Z · score: 1 (1 votes) · LW · GW

(2) is quite different in that it isn't motivated by supernatural eschatologies. Thus, the ideological and psychological profiles of ecoterrorists are quite different from those of apocalyptic terrorists, who are bound together by certain common worldview-related threads.

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-20T20:02:50.556Z · score: 2 (2 votes) · LW · GW

I think my language could have been more precise: it's not merely genocidal agents, but humanicidal or omnicidal ones, that we're talking about in the context of x-risks. Also, the Khmer Rouge wasn't suicidal to my knowledge. Am I less right?

Comment by philosophytorres on A problem in anthropics with implications for the soundness of the simulation argument. · 2016-10-20T19:58:05.440Z · score: 1 (1 votes) · LW · GW

As for your first comment, imagine that everyone "wakes up" in a room with only the information provided and no prior memories. After 5 minutes, they're put back to sleep -- but before this occurs they're asked about which room they're in. (Does that make sense?)

Comment by philosophytorres on A problem in anthropics with implications for the soundness of the simulation argument. · 2016-10-20T15:23:44.741Z · score: 0 (0 votes) · LW · GW

Yes to both possibilities. But gbear605 is closer to what I was thinking.

Comment by philosophytorres on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-16T00:20:14.416Z · score: 2 (2 votes) · LW · GW

Great question. I think there are strong reasons for anticipating the total number of apocalyptic terrorists and ecoterrorists to nontrivially increase in the future. I've written two papers on the former, linked below. There's weaker evidence to suggest that environmental instability will exacerbate conflicts in general, and consequently produce more malicious agents with idiosyncratic motives. As for the others -- not sure! I suspect we'll have at least one superintelligence around by the end of the century.

Comment by philosophytorres on Estimating the probability of human extinction · 2016-02-22T17:51:11.743Z · score: 1 (1 votes) · LW · GW

Thanks so much for these incredibly thoughtful responses. Very, very helpful.

Comment by philosophytorres on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-24T00:40:40.181Z · score: 7 (7 votes) · LW · GW

Hello! I'm working on a couple of papers that may be published soon. Before this happens, I'd be extremely curious to know what people think about them -- in particular, what people think about my critique of Bostrom's definition of "existential risks." A very short write-up of the ideas can be found at the link below. (If posting links is in any way discouraged here, I'll take it down right away. Still trying to figure out what the norms of conversation are in this forum!)

A few key ideas: Bostrom's definition is problematic for two reasons. First, its account of who an existential risk affects is too promiscuous; it opens the door to counterexamples in which humanity is violently destroyed yet no existential risk occurs. Second, Bostrom's typology is incoherent: it fails to recognize that a consequence's scope has both spatial and temporal components, where different degrees of each can be combined with the other in different ways. At the end of the paper, I propose my own definition, one that attempts to solve both of these problems. Figure C may be particularly helpful.

Thoughts? I am more than open to feedback!

http://philosophytorres.org/XRiskologytheConceptofanExistentialRisk.pdf

Comment by philosophytorres on Open thread, October 2011 · 2015-01-22T17:54:34.495Z · score: 2 (2 votes) · LW · GW

I'd love to know what the community here thinks of some critiques of Nick Bostrom's conception of existential risks, and his more general typology of risks. I'm new to the community, so a bit unsure whether I should completely dive in with a new article, or approach the subject some other way. Thoughts?