Posts

The value of preserving reality 2010-11-08T23:51:17.130Z

Comments

Comment by lukstafi on One Minute Every Moment · 2023-09-05T05:45:31.322Z · LW · GW

"However, if you're like me, you probably aren't quite comfortable with Miller's rejection of bits as the information currency of the brain."

What if "system 2" is like a register machine with 4 pointer-storing registers / slots / tabs, 2 per hemisphere? (Peter Carruthers "The Centered Mind".) I don't reject information processing, but rather consider "working memory" to be a misnomer. The brain does not implement memcpy.

Comment by lukstafi on Stephen Wolfram's ideas are under-appreciated · 2023-08-13T15:25:27.736Z · LW · GW

People allergic to Stephen Wolfram and criticizing "Wolfram Physics" (e.g. Scott Aaronson) would better contribute to the conversation by reading Jonathan Gorard's publications, forming their opinion of the content of that published work, and expressing their non-ad-Wolfram criticisms. The broader "culture clash" problem is that Wolfram Physics is a metaphysical theory rather than a physics theory: it explains physics theories such as general relativity and quantum mechanics, rather than explaining a specific physical phenomenon. (Further physics theories can be advanced within Wolfram Physics.)

Comment by lukstafi on [deleted post] 2022-03-14T12:40:45.664Z

Rodney Brooks' predictions made at the beginning of 2018: https://rodneybrooks.com/predictions-scorecard-2022-january-01/

Comment by lukstafi on Is veganism morally correct? · 2022-03-09T12:15:36.932Z · LW · GW

Moral anti-realists do not claim that people don't have preferences. Rather, they claim that there are no preference-assumption-free facts regarding comparisons between preference systems. Therefore moral anti-realists will not seek such facts. Moral realists may seek such facts in order to improve/correct their preferences.

Comment by lukstafi on Is veganism morally correct? · 2022-03-02T09:18:52.038Z · LW · GW

The implications of moral anti-realism for action revolve around pursuing facts to feed into terminal-preference updates.

Comment by lukstafi on Is veganism morally correct? · 2022-02-21T13:04:34.045Z · LW · GW

"if anti-realism is true, it doesn't matter [to us] what we do" -- that's false. Whether something does matter to us is a fact independent of whether something ought to matter to us.

Comment by lukstafi on Reinforcement Learning Study Group · 2021-12-28T08:44:07.953Z · LW · GW

I advise using JAX instead of Tensorflow.
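For instance, a minimal illustrative sketch (not from the study group material; the loss and sizes are placeholders) of the style JAX encourages for RL experiments, i.e. pure functions plus composable grad/jit transforms:

```python
# Minimal JAX sketch: a jitted gradient step on a toy value-function loss.
import jax
import jax.numpy as jnp

def td_loss(params, state, target):
    # Linear value function V(s) = w . s; squared TD-style error vs. a target.
    v = jnp.dot(params, state)
    return (target - v) ** 2

grad_fn = jax.jit(jax.grad(td_loss))    # composable transforms: grad, then jit

params = jnp.zeros(4)
state = jnp.array([0.1, 0.2, 0.3, 0.4])
params = params - 0.1 * grad_fn(params, state, 1.0)   # one gradient step
```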

Comment by lukstafi on Pain is not the unit of Effort · 2020-11-29T16:58:46.229Z · LW · GW

I recently read David Goggins' "Can't Hurt Me". On one level it does glorify superhuman pain tolerance. But a constructive perspective on such attitudes is: they illustrate courage. Do not tolerate pain, laugh at it! Do not tense under a cold shower, relax into it. Do not bear problems, solve them.

Comment by lukstafi on Industrial literacy · 2020-10-04T09:50:43.414Z · LW · GW

The fridge / the freezer!

Comment by lukstafi on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-27T11:00:57.878Z · LW · GW

Would you consider MuZero an advance in causal reasoning? Despite intentionally not representing causality / explicit model dynamics, it supports hypothetical reasoning via state tree search.

Do you think there's a chance of a MuZero-AlphaStar crossover?

Comment by lukstafi on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T18:23:46.947Z · LW · GW

The general tool: a residual-network variant of convolutional NNs plus MCTS-like variable-depth tree search. Prerequisites: the input can be presented as K layers of N-D data (where N = 1, 2, 3..., not too large), and the action space is discrete. If the actions are not discrete, an additional small module would be needed to quantize the action space based on the neural network's action priors.
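As a rough illustrative sketch of the two ingredients (shapes, layer sizes, and function names are placeholders, not AlphaGo Zero's actual architecture): a residual block over K layers of 2-D input, and a policy head producing priors over a discrete action space for a variable-depth tree search to use:

```python
import jax
import jax.numpy as jnp

def residual_block(x, w1, w2):
    # x: (batch, K, H, W) -- K layers of 2-D data; w1, w2: (K, K, 3, 3) kernels.
    y = jax.lax.conv(x, w1, window_strides=(1, 1), padding='SAME')
    y = jax.nn.relu(y)
    y = jax.lax.conv(y, w2, window_strides=(1, 1), padding='SAME')
    return jax.nn.relu(y + x)   # the skip connection is the "residual" part

def action_priors(features, w_pi):
    # Flatten and map to a distribution over the discrete action space;
    # an MCTS-like search would use these priors to guide node expansion.
    logits = jnp.dot(features.reshape(features.shape[0], -1), w_pi)
    return jax.nn.softmax(logits, axis=-1)
```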

Comment by lukstafi on Robert Aumann on Judaism · 2015-08-23T17:55:39.280Z · LW · GW

Perhaps a satisfactory answer can be found in "Jewish Philosophy as a Guide to Life: Rosenzweig, Buber, Levinas, Wittgenstein" by Hilary Putnam (who seemed to me to be a reasonable philosopher, but converted to Judaism). I've just started listening to its audiobook version, prompted by this post.

Comment by lukstafi on What attracts smart and curious young people to physics? Should this be encouraged? · 2014-03-16T11:46:26.997Z · LW · GW

At the high-school level, physics has perhaps the richest tightly-knit concept structure.

Comment by lukstafi on Learn (and Maybe Get a Credential in) Data Science · 2014-02-01T22:26:44.559Z · LW · GW

Including signaling "thanks" to the university. :-)

Comment by lukstafi on an ethical puzzle about brain emulation · 2013-12-17T20:35:00.730Z · LW · GW

Reminds me of the error -- on a charitable reading, of the characters, but perhaps of the author -- in "Permutation City". There's no such thing as out-of-order simulation.

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-14T12:08:55.035Z · LW · GW

Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables because there is a counterfactual context where we observe them from nearby (by traveling with them, etc.).

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-12T12:45:26.646Z · LW · GW

(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary", I mean "from Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation argument, a.k.a. the Demon argument, recurring in philosophy.

Comment by lukstafi on Does the simulation argument even need simulations? · 2013-10-12T11:48:55.374Z · LW · GW

Modern philosophy is just a set of notes in the margins of Descartes' "Meditations".

Comment by lukstafi on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-26T10:29:02.339Z · LW · GW

All our values are fallible, but doubt requires justification.

Comment by lukstafi on Thought experiment: The transhuman pedophile · 2013-09-23T13:35:13.932Z · LW · GW

Persons do not have fixed value systems anyway. A value system is a partly-physiologically-implemented theory of what is valuable (good, right, etc.). One can recognize a better theory and try to make one's habits and reactions fit it. Pedophilia is bad if it promotes a shallower reaction to a young person, and good if it promotes a richer reaction; it depends on the particulars of the brain implementing pedophilia. Abusing anyone is bad.

Comment by lukstafi on Eudaimonic Utilitarianism · 2013-09-06T11:05:45.545Z · LW · GW

For Nazis' hatred of Jews to be rational, it is not necessary that there be reasons for hating Jews, only that the reasons for not hating Jews do not outweigh the reasons for hating them. But their reasons for hating Jews are either self-contradictory or, when properly worked out, in fact support not hating Jews.

Comment by lukstafi on Yet more "stupid" questions · 2013-09-03T20:44:58.233Z · LW · GW

I liked "Diaspora" more.

Comment by lukstafi on . · 2013-09-03T19:21:06.773Z · LW · GW

Let me get this straight. You want to promote short-circuiting the mental circuit of promotion?

Comment by lukstafi on Theists are wrong; is theism? · 2013-09-03T19:07:32.183Z · LW · GW

If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.

Comment by lukstafi on How I Am Productive · 2013-08-27T22:23:23.411Z · LW · GW

Set up automatic filters.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T11:06:41.829Z · LW · GW

As a function of how long the universe will exist? ETA: a short period of time might be significantly located.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T10:42:15.113Z · LW · GW

The absurd claim is "there is nothing you ought to do or ought to not do". The claim "life is tough" is not absurd. ETA: existentialism in the absurdist flavor (as opposed to for example the Christian flavor) is a form of value anti-realism which is not nihilism. It denies that there are values that could guide choices, but puts intrinsic value into making choices.

Comment by lukstafi on Reality is weirdly normal · 2013-08-26T10:10:44.040Z · LW · GW

I would still be curious how much I can get out of life in billions of years.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-26T09:55:15.139Z · LW · GW

I do not strongly believe the claim; I just lay it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that, in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and lack of relevance to rationality and to the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over the long term).

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-25T20:24:04.032Z · LW · GW

Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-24T22:29:48.386Z · LW · GW

The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.

Comment by lukstafi on How sure are you that brain emulations would be conscious? · 2013-08-24T18:19:16.337Z · LW · GW

The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.

Comment by lukstafi on What Bayesianism taught me · 2013-08-11T09:45:11.997Z · LW · GW

I'd like to add that if the curriculum has a distinction between "probability" and "statistics", it is taught in the "probability" class. Much later, the statistics class has a "frequentist" part and a "Bayesian" part.

Comment by lukstafi on [Link] Cosmological Infancy · 2013-08-04T18:32:07.784Z · LW · GW

The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes than old ones. The proportion of universes of a given age falls off exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is very young relative to its lifespan). http://youtu.be/qbwcrEfQDHU?t=32m10s

Another argument to a similar effect would be to consider a measure over possible indices. Indices pointing to old times would be less probable -- needing more bits to encode -- than indices pointing to young times.
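Roughly, in my own notation (nothing here is taken from the linked talks):

```latex
% Memoryless age measure: young universes dominate the slice.
P(\mathrm{age} = t) \;\propto\; e^{-\lambda t}

% Index-counting version: an index n costs roughly \log_2 n bits,
% so longer indices (later times) are correspondingly less probable.
P(\mathrm{index} = n) \;\propto\; 2^{-\ell(n)}, \qquad \ell(n) \approx \log_2 n
```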

Our universe might be very old on this picture (relative to the measure), so the conclusion regarding the Fermi paradox is to update towards the "great filter in the past" hypothesis. (It's more probable to be the first observer-philosopher having these considerations in one's corner of a universe.)

See also http://www.youtube.com/watch?v=jhnKBKZvb_U

Comment by lukstafi on Responses to Catastrophic AGI Risk: A Survey · 2013-07-08T17:01:37.516Z · LW · GW

I'm glad to see Mark Waser cited and discussed; I think he was omitted from an earlier draft, but I might misremember. ETA: I misremembered; I confused it with http://friendly-ai.com/faq.html, which has an explicitly narrower focus.

Comment by lukstafi on Living in the shadow of superintelligence · 2013-06-28T18:34:26.713Z · LW · GW

We should continue growing so that we join the superintelligentsia.

Comment by lukstafi on [deleted post] 2013-06-22T20:09:57.100Z

Although I wouldn't say this, I don't see how my comment contradicts this.

Comment by lukstafi on [deleted post] 2013-06-22T20:00:51.275Z

Let's take "the sexual objectification of women in some advertisement" as an example. Do you mean that sexual objectification takes place when the actress feels bad about playing in an erotic context, and agreed only because of commercial incentive, or something similar? ETA: I guess objectification generally means not treating someone as a person. With a focus on this explication, objectification in (working on) a film (advertisement is a short film) would be when the director does not collaborate with the actors, but rather is authoritarian in demanding that the actors fit his vision. ETA2: and objectification in the content of a film would be depicting an act of someone not treating another as a person; in case of "sexual objectification" depicting sexual violence.

Comment by lukstafi on [deleted post] 2013-06-22T19:28:28.458Z

I see it this way. It is "objectification" when it's used to attract attention. It's "for the purpose of appreciation" when it's used to enrich an emotional reaction (usually the aesthetic evaluation, but sometimes the moral evaluation). So it is hard to tell just from the content, but if the content is both erotic and boring, it's objectification.

Comment by lukstafi on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T16:06:08.524Z · LW · GW

You might be interested in reading TDT chapter 5 "Is Decision-Dependency Fair" if you haven't already.

Comment by lukstafi on Learning programming: so I've learned the basics of Python, what next? · 2013-06-19T10:57:50.134Z · LW · GW

I mean learning Prolog the way it would be taught in a "Programming Languages" course, not as an attempt at facilitating AI. Two angles are important here: (1) programming-paradigm features: learning the concept of late-bound / dataflow / "logical" variables; http://en.wikipedia.org/wiki/Oz_(programming_language) is an OK substitute. (2) Logic, which is also something to be taught in a "Programming Languages" context, not (only) in an AI context. With Prolog, this means learning about SLD-resolution and perhaps making some broader forays from there. But one could also explore the connections between functional programming and intuitionistic logics.

Comment by lukstafi on Learning programming: so I've learned the basics of Python, what next? · 2013-06-18T11:19:05.142Z · LW · GW

OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.

Comment by lukstafi on Normative uncertainty in Newcomb's problem · 2013-06-16T09:46:13.772Z · LW · GW

Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to "verifying if Omega was right" -- if Omega is right "a priori", then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, "verifying" or "being playful" might mean, for example, tossing a coin to decide.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-10T00:11:47.593Z · LW · GW

An interesting problem with CEV is demonstrated in chapter 5, "On the Rationality of Preferences", of Hilary Putnam's "The Collapse of the Fact/Value Dichotomy and Other Essays". The problem is that a person might assign value to a choice of preference, underdetermined at a given time, being made of her own free will.

Comment by lukstafi on Is a paperclipper better than nothing? · 2013-06-09T09:43:29.821Z · LW · GW

I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress; there can be multiple "basins of attraction". Some of the dynamics consist of decreasing inconsistencies and increasing robustness.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T14:08:21.288Z · LW · GW

I agree. In case it's not clear, my opinion is that an essential part of being a person is developing one's value system. It's not something that you can entirely outsource because "the journey is part of the destination" (but of course any help one can get matters) and it's not a requirement for having ethical people or AI. ETA: i.e. having a fixed value system is not a requirement for being ethical.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:52:44.053Z · LW · GW

The last forbidden transition would be the very last one, since it's outright wrong while the previous ones do seem to have reasons behind them.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:49:15.349Z · LW · GW

Valuing everything means you want to go as far from nothingness as you can get. You value more types being instantiated over fewer types being instantiated.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:45:58.911Z · LW · GW

Logically.

Comment by lukstafi on Mahatma Armstrong: CEVed to death. · 2013-06-07T11:44:10.492Z · LW · GW

By letting people evolve their values at their own pace, within ethical boundaries.