Comments

Comment by ninety-three on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-22T03:10:33.774Z · score: 6 (3 votes) · LW · GW

Without rejecting any of the premises in your question, I can come up with:

Low tractability: you assign almost all of the probability mass to one or both of "alignment will be easily solved" and "alignment is basically impossible"

Currently low tractability: if your timeline is closer to 100 years than to 10, it is possible that the best use of resources for AI risk is to "sit on them until the field develops further", in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.

Refusing to prioritize highly uncertain causes, in order to avoid the Winner's Curse outcome where your highest priority ends up being something with low true value and high noise

Flavours of utilitarianism that don't value the unborn and so would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details, human extinction might not even be negative, so long as the deaths aren't painful)

Comment by ninety-three on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-11T23:50:16.303Z · score: 1 (1 votes) · LW · GW

I got all of the octopus questions right (six recalled facts, #6 intuitively plausible, #9 seems rare enough that humans would be unlikely to observe it, and #2 was uncertain until I completed the others and then metagamed that a 7/2 split would be "too unbalanced" for a handcrafted test), so the only surprising fact I have to update on is that the recognition thing is surprising to others.

My model was that many wild animals are capable of recognizing humans, that octopuses are particularly smart as animals go, and that no other factors weigh heavily. That octopuses evolved totally separated from humans didn't seem significant: although most wild animals were exposed to humans, I see no obvious incentive for most of them to recognize individual humans, so the cases should be comparable on that axis. I also put little weight on octopuses not being social creatures, because while there may be dedicated social-recognition modules, A: many animals are able to recognize humans, and it seems intuitively unlikely that all of them are generalizing a social module to our species, and B: at some level of intelligence it must be possible to distinguish individuals through sheer general pattern recognition. For ten humans an octopus would only need four or five bits of information, and animal intelligence in general seems good at distinguishing between a few totally arbitrary bits.
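As a quick check on that bit count (just arithmetic, not a claim about octopus cognition): distinguishing $n$ individuals requires $\log_2 n$ bits, and

$$\lceil \log_2 10 \rceil = \lceil 3.32 \rceil = 4,$$

so four bits already suffice for unique labels among ten humans, with a fifth bit leaving slack for noisy recognition.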

The evolutionary theory of aging is interesting, and seems to predict that an animal's maximum age will be proportional to its time-to-accidental-death. Just thinking of animals and their ages at random this seems plausible, but I'm hardly being rigorous; have there been proper analyses done of that?

Comment by ninety-three on Why Destructive Value Capture? · 2018-06-19T01:50:16.196Z · score: 1 (1 votes) · LW · GW

Could it be that the average customer hasn't thought it through enough to realize they are incinerating $1.67 of time-value, and would thus prefer to pay $15 plus *mumble* time as opposed to $15.25 plus zero time?
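For concreteness, here is one assumed illustration of how a figure like \$1.67 could arise; the original post's actual inputs may differ. Ten minutes of queueing valued at \$10/hour gives

$$\$10/\text{hr} \times \tfrac{10}{60}\ \text{hr} \approx \$1.67,$$

which dwarfs the \$0.25 surcharge ($\$15.25 - \$15.00$) being avoided.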

Comment by ninety-three on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-05-21T14:06:40.889Z · score: 7 (2 votes) · LW · GW

If you're not saying to go into AI safety research, what non-business-as-usual course of action are you expecting? Is your premise that everyone taking this seriously should figure out their comparative advantage within an AI risk organization, since such organizations contain many non-researcher roles, or are you imagining some potential course of action outside of "Give your time/money to MIRI/HCAI/etc"?