Posts

The value of preserving reality 2010-11-08T23:51:17.130Z · score: -1 (4 votes)

Comments

Comment by lukstafi on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T18:23:46.947Z · score: 1 (1 votes) · LW · GW

The general tools: a residual-network variant of convolutional NNs, and MCTS-like variable-depth tree search. Prerequisites: the input can be presented as K layers of N-D data (where N = 1, 2, 3... not too large), and the action space is discrete. If the actions are not discrete, an additional small module would be needed to quantize the action space based on the neural network's action priors.
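One way such a quantization module could work is to sample a discrete candidate set from the network's continuous action prior and hand that set to the tree search. This is my own illustrative sketch, not a description of any actual system; the Gaussian prior and all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_actions(prior_mean, prior_std, k=8):
    """Draw k candidate actions from the policy prior (assumed Gaussian here)
    and use them as the discrete action set for MCTS-like tree search."""
    candidates = rng.normal(prior_mean, prior_std, size=(k, len(prior_mean)))
    # Uniform prior over the sampled candidates; a real system could instead
    # weight each candidate by its density under the network's prior.
    priors = np.full(k, 1.0 / k)
    return candidates, priors

# A hypothetical 2-D continuous action space:
actions, priors = quantize_actions(np.zeros(2), np.ones(2), k=8)
```

This resembles the "progressive widening" family of tricks for continuous-action MCTS: the search never sees the continuous space directly, only a prior-guided finite subset.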

Comment by lukstafi on Robert Aumann on Judaism · 2015-08-23T17:55:39.280Z · score: 0 (0 votes) · LW · GW

Perhaps a satisfactory answer can be found in "Jewish Philosophy as a Guide to Life: Rosenzweig, Buber, Levinas, Wittgenstein" by Hilary Putnam (who seemed to me to be a reasonable philosopher, but converted to Judaism). I've just started listening to its audiobook version, prompted by this post.

Comment by lukstafi on What attracts smart and curious young people to physics? Should this be encouraged? · 2014-03-16T11:46:26.997Z · score: 1 (1 votes) · LW · GW

At high-school level, physics has perhaps the richest tightly-knit concept structures.

Comment by lukstafi on Learn (and Maybe Get a Credential in) Data Science · 2014-02-01T22:26:44.559Z · score: 1 (1 votes) · LW · GW

Including signaling "thanks" to the university. :-)

Comment by lukstafi on an ethical puzzle about brain emulation · 2013-12-17T20:35:00.730Z · score: 0 (0 votes) · LW · GW

Reminds me of the error -- on a charitable reading, of the characters, but perhaps of the author -- in "Permutation City". There's no such thing as out-of-order simulation.

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-14T12:08:55.035Z · score: 0 (0 votes) · LW · GW

Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables because there is a counterfactual context in which we observe them from nearby (by traveling with them, etc.).

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-12T12:45:26.646Z · score: 1 (1 votes) · LW · GW

(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary"; I mean "from Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation (a.k.a. demon) argument recurring in philosophy.

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-12T11:48:55.374Z · score: 1 (1 votes) · LW · GW

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

Comment by lukstafi on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-26T10:29:02.339Z · score: 3 (3 votes) · LW · GW

All our values are fallible, but doubt requires justification.

Comment by lukstafi on Thought experiment: The transhuman pedophile · 2013-09-23T13:35:13.932Z · score: 0 (0 votes) · LW · GW

Persons do not have fixed value systems anyway. A value system is a partly-physiologically-implemented theory of what is valuable (good, right, etc.). One can recognize a better theory and try to make one's habits and reactions fit it. Pedophilia is bad if it promotes a shallower reaction to a young person, and good if it promotes a richer reaction; it depends on the particulars of the brain implementing pedophilia. Abusing anyone is bad.

Comment by lukstafi on Eudaimonic Utilitarianism · 2013-09-06T11:05:45.545Z · score: 0 (0 votes) · LW · GW

For Nazis' hatred of Jews to be rational, it is not necessary that there be reasons for hating Jews, only that the reasons for not hating Jews do not outweigh the reasons for hating them. But their reasons for hating Jews are either self-contradictory or, when properly worked out, in fact support not hating Jews.

Comment by lukstafi on Yet more "stupid" questions · 2013-09-03T20:44:58.233Z · score: 0 (0 votes) · LW · GW

I liked "Diaspora" more.

Comment by lukstafi on . · 2013-09-03T19:21:06.773Z · score: 0 (0 votes) · LW · GW

Let me get this straight. You want to promote the short-circuiting of the mental circuit of promotion?

Comment by lukstafi on Theists are wrong; is theism? · 2013-09-03T19:07:32.183Z · score: 1 (1 votes) · LW · GW

If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.

Comment by lukstafi on How I Am Productive · 2013-08-27T22:23:23.411Z · score: 2 (2 votes) · LW · GW

Set up automatic filters.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T11:06:41.829Z · score: 0 (2 votes) · LW · GW

As a function of how long the universe will exist? ETA: a short period of time might be significantly located.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T10:42:15.113Z · score: 0 (0 votes) · LW · GW

The absurd claim is "there is nothing you ought to do or ought not to do". The claim "life is tough" is not absurd. ETA: existentialism in its absurdist flavor (as opposed to, for example, the Christian flavor) is a form of value anti-realism which is not nihilism. It denies that there are values that could guide choices, but puts intrinsic value on making choices.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T10:10:44.040Z · score: 0 (0 votes) · LW · GW

I would still be curious how much I can get out of life in billions of years.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-26T09:55:15.139Z · score: 0 (0 votes) · LW · GW

I do not strongly believe the claim; I just lay it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that, in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and lack of relevance to rationality and to the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over the long term).

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-25T20:24:04.032Z · score: 1 (1 votes) · LW · GW

Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-24T22:29:48.386Z · score: 1 (1 votes) · LW · GW

The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-24T18:19:16.337Z · score: 1 (1 votes) · LW · GW

The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.

Comment by lukstafi on What Bayesianism taught me · 2013-08-11T09:45:11.997Z · score: 1 (1 votes) · LW · GW

I'd like to add that if the curriculum distinguishes between "probability" and "statistics", it is taught in the "probability" class. Much later, the statistics class has a "frequentist" part and a "Bayesian" part.

Comment by lukstafi on [Link] Cosmological Infancy · 2013-08-04T18:32:07.784Z · score: 1 (1 votes) · LW · GW

The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes than old ones. The proportion of universes of a given age falls off exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is very young relative to its lifespan). http://youtu.be/qbwcrEfQDHU?t=32m10s

Another argument to similar effect would be to consider a measure over possible indices. Indices pointing into old times would be less probable -- by needing more bits to encode -- than indices pointing to young times.
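The code-length argument can be made concrete with a toy measure (my own illustrative construction, not from the original comment): weight each index by 2 raised to minus its binary length, so that indices pointing to later times are exponentially less probable.

```python
from fractions import Fraction

def index_weight(n):
    """Unnormalized weight of index n under a simple code-length measure:
    an index that takes b bits to write down gets weight 2**-b."""
    bits = n.bit_length()
    return Fraction(1, 2**bits)

# Larger (later) indices need more bits, hence get exponentially less weight:
assert index_weight(5) > index_weight(500) > index_weight(50000)
```

This is only a sketch of the shape of the argument; a proper treatment would use a prefix-free code so the weights sum to at most 1, as in the universal prior.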

Our universe might be very old on this picture (relative to the measure), so the conclusion regarding Fermi paradox is to update towards the "great filter in the past" hypothesis. (It's more probable to be the first observer-philosopher having these considerations in one's corner of a universe.)

See also http://www.youtube.com/watch?v=jhnKBKZvb_U

Comment by lukstafi on Responses to Catastrophic AGI Risk: A Survey · 2013-07-08T17:01:37.516Z · score: 0 (0 votes) · LW · GW

I'm glad to see Mark Waser cited and discussed, I think he was omitted in a former draft but I might misremember. ETA: I misremembered, I've confused it with http://friendly-ai.com/faq.html which has an explicitly narrower focus.

Comment by lukstafi on Living in the shadow of superintelligence · 2013-06-28T18:34:26.713Z · score: 0 (0 votes) · LW · GW

We should continue growing so that we join the superintelligentsia.

Comment by lukstafi on [deleted post] 2013-06-22T20:09:57.100Z

Although I wouldn't say this, I don't see how my comment contradicts this.

Comment by lukstafi on [deleted post] 2013-06-22T20:00:51.275Z

Let's take "the sexual objectification of women in some advertisement" as an example. Do you mean that sexual objectification takes place when the actress feels bad about playing in an erotic context, and agreed only because of commercial incentive, or something similar? ETA: I guess objectification generally means not treating someone as a person. With a focus on this explication, objectification in (working on) a film (advertisement is a short film) would be when the director does not collaborate with the actors, but rather is authoritarian in demanding that the actors fit his vision. ETA2: and objectification in the content of a film would be depicting an act of someone not treating another as a person; in case of "sexual objectification" depicting sexual violence.

Comment by lukstafi on [deleted post] 2013-06-22T19:28:28.458Z

I see it this way. It is "objectification" when it's used to attract attention. It's "for the purpose of appreciation" when it's used to enrich emotional reaction (usually of the aesthetic evaluation, but sometimes of the moral evaluation). So it is hard to say just by the content, but if the content is both erotic and boring it's objectification.

Comment by lukstafi on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T16:06:08.524Z · score: 1 (1 votes) · LW · GW

You might be interested in reading TDT chapter 5 "Is Decision-Dependency Fair" if you haven't already.

Comment by lukstafi on Learning programming: so I've learned the basics of Python, what next? · 2013-06-19T10:57:50.134Z · score: 0 (0 votes) · LW · GW

I mean learning Prolog in the way it would be taught in a "Programming Languages" course, not as an attempt at facilitating AI. Two angles are important here: (1) programming paradigm features: learning the concept of late-bound / dataflow / "logical" variables. http://en.wikipedia.org/wiki/Oz_(programming_language) is an OK substitute. (2) logic, which is also something to be taught in a "Programming Languages" context, not (only) in AI context. With Prolog, this means learning about SLD-resolution and perhaps making some broader forays from there. But one could also explore connections between functional programming and intuitionistic logics.
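The "logical variables" idea behind Prolog and SLD-resolution can be illustrated with a minimal unification sketch. This is my own toy encoding (strings prefixed with '?' as variables, tuples as compound terms, no occurs check), not production Prolog machinery:

```python
def walk(term, subst):
    """Follow variable bindings (variables are strings starting with '?')."""
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution unifying a and b, or None on failure.
    Terms are atoms (plain strings), variables ('?x'), or tuples of terms.
    Toy version: omits the occurs check."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying f(?x, b) with f(a, ?y) binds ?x to a and ?y to b:
s = unify(('f', '?x', 'b'), ('f', 'a', '?y'), {})
```

SLD-resolution is then repeated unification of a goal against clause heads, with the substitution threaded through the clause bodies; variables stay unbound until the data flow determines them, which is exactly the late-bound/dataflow flavor mentioned above.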

Comment by lukstafi on Learning programming: so I've learned the basics of Python, what next? · 2013-06-18T11:19:05.142Z · score: 0 (0 votes) · LW · GW

OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.

Comment by lukstafi on Normative uncertainty in Newcomb's problem · 2013-06-16T09:46:13.772Z · score: 0 (0 votes) · LW · GW

Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to "verifying whether Omega was right" -- if Omega is right "a priori" then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, "verifying" or "being playful" might mean, for example, tossing a coin to decide.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-10T00:11:47.593Z · score: 0 (0 votes) · LW · GW

An interesting problem with CEV is demonstrated in chapter 5, "On the Rationality of Preferences", of Hilary Putnam's "The Collapse of the Fact/Value Dichotomy and Other Essays". The problem is that a person might assign value to a choice of preference, underdetermined at a given time, being made of her own free will.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-06-09T09:43:29.821Z · score: 0 (0 votes) · LW · GW

I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress; there can be multiple "basins of attraction". Some of the dynamics consist of decreasing inconsistencies and increasing robustness.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T14:08:21.288Z · score: 2 (2 votes) · LW · GW

I agree. In case it's not clear, my opinion is that an essential part of being a person is developing one's value system. It's not something that you can entirely outsource because "the journey is part of the destination" (but of course any help one can get matters) and it's not a requirement for having ethical people or AI. ETA: i.e. having a fixed value system is not a requirement for being ethical.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:52:44.053Z · score: 0 (0 votes) · LW · GW

The last forbidden transition would be the very last one, since it's outright wrong while the previous ones do seem to have reasons behind them.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:49:15.349Z · score: 0 (0 votes) · LW · GW

Valuing everything means you want to go as far from nothingness as you can get. You value more types being instantiated over fewer.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:45:58.911Z · score: 1 (3 votes) · LW · GW

Logically.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:44:10.492Z · score: 1 (1 votes) · LW · GW

By letting people evolve their values at their own pace, within ethical boundaries.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-06T16:39:56.085Z · score: 4 (4 votes) · LW · GW

I'm with you up to 6. Having a terminal value on everything does not mean that the final consistent evaluation is uniform over everything, because instrumental values come into play -- some values cancel out and some add up. But it does mean that you have justifications to make before you start destroying stuff.

Comment by lukstafi on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-06T11:20:34.264Z · score: 1 (1 votes) · LW · GW

Is each participant limited to submitting a single program? Have you considered "team mode", where the results of programs from a single team are summed up?

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-06-04T21:12:48.041Z · score: 0 (0 votes) · LW · GW

No, I mean that we might give a shit even about quite alien beings.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-30T18:10:33.564Z · score: 2 (2 votes) · LW · GW

E.g., proofreading in biology.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-30T12:35:33.972Z · score: 0 (0 votes) · LW · GW

I presume by "the same world" you mean a sufficiently overlapping class of worlds. I don't think that "the same world" is well defined. I think that determining in particular cases what is "the world" you want affects who you are.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-30T12:11:59.533Z · score: 0 (0 votes) · LW · GW

My point is that the origin of values, the initial conditions, is not the sole criterion for determining whether a culture appreciates given values. There can be convergence or "discovery" of values.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-29T23:44:27.321Z · score: 0 (0 votes) · LW · GW

Another point is that value (actually, a structure of values) shouldn't be confused with a way of life. Values are abstractions: various notions of beauty, curiosity, elegance, so called warmheartedness... The exact meaning of any particular such term is not a metaphysical entity, so it is difficult to claim that an identical term is instantiated across different cultures / ways of life. But there can be very good translations that map such terms onto a different way of life (and back). ETA: there are multiple ways of life in our cultures; a person can change her way of life by pursuing a different profession or a different hobby.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-29T11:07:09.451Z · score: 0 (0 votes) · LW · GW

I appeal to (1) the consideration of whether the inter-translatability of science, and the valuing of certain theories over others, depends on the initial conditions of the civilization that develops it; (2) the universality of decision-theoretic and game-theoretic situations; (3) the evolutionary value of versatility, hinting at an evolved value of diversity.

Comment by lukstafi on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2013-05-29T08:04:11.274Z · score: 1 (1 votes) · LW · GW

? I have a different conception of romantic love. I could swear I've been in love with my kindergarten teacher. And I was "dating" girls two years later. It ended though as this part of myself grew introvert, still before puberty.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-05-29T06:59:08.155Z · score: 0 (0 votes) · LW · GW

Do you think that CEV-generating mechanisms are negotiable across species? I.e. whether other species would have a concept of CEV and would agree to at least some of the mechanisms that generate a CEV. It would enable determining which differences are reconcilable and where we have to agree to disagree.