However, if you're like me, you probably aren't quite comfortable with Miller's rejection of bits as the information currency of the brain.
What if "system 2" is like a register machine with 4 pointer-storing registers / slots / tabs, 2 per hemisphere? (Peter Carruthers, "The Centered Mind".) I don't reject information processing, but rather consider "working memory" to be a misnomer. The brain does not implement memcpy.
People allergic to Stephen Wolfram and critical of "Wolfram Physics" (e.g. Scott Aaronson) would contribute more to the conversation by reading Jonathan Gorard's publications, forming their opinion of the content of that published work, and expressing criticisms that are not ad hominem against Wolfram. The broader "culture clash" problem is that Wolfram Physics is a metaphysical theory rather than a physics theory: it explains physics theories such as general relativity and quantum mechanics, rather than explaining a specific physical phenomenon. (Further physics theories can be advanced within Wolfram Physics.)
Rodney Brooks' predictions made at the beginning of 2018: https://rodneybrooks.com/predictions-scorecard-2022-january-01/
Moral anti-realists do not claim that people don't have preferences. Rather, they claim that there are no preference-assumption-free facts regarding comparisons between preference systems. Therefore moral anti-realists will not seek such facts. Moral realists may seek such facts in order to improve/correct their preferences.
The implications of moral anti-realism for action revolve around pursuing facts to feed into terminal preference updates.
"if anti-realism is true, it doesn't matter [to us] what we do" -- that's false. Whether something does matter to us is a fact independent of whether something ought to matter to us.
I advise using JAX instead of Tensorflow.
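A minimal sketch of why (assuming only that jax is installed): gradients and compilation are composable function transformations applied to plain NumPy-style code, with none of TensorFlow 1.x's graph-and-session boilerplate.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Plain NumPy-style code; JAX traces and transforms it.
    return jnp.mean((x @ w - y) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # differentiate w.r.t. w, then compile

w = jnp.zeros(3)
x = jnp.ones((5, 3))
y = jnp.ones(5)
print(grad_loss(w, x, y))  # gradient array of shape (3,)
```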
I recently read David Goggins' "Can't Hurt Me". On one level it does glorify superhuman pain tolerance. But a constructive perspective on such attitudes is: they illustrate courage. Do not tolerate pain, laugh at it! Do not tense under a cold shower, relax into it. Do not bear problems, solve them.
The fridge / the freezer!
Would you consider MuZero an advance in causal reasoning? Despite intentionally not representing causality / explicit model dynamics, it supports hypothetical reasoning via state tree search.
Do you think there's a chance of MuZero - AlphaStar crossover?
The general tool: the residual-network variant of convolutional NNs, plus MCTS-like variable-depth tree search. Prerequisites: the input can be presented as K layers of N-D data (where N=1,2,3... not too large), and the action space is discrete. If the actions are not discrete, an additional small module would be needed to quantize the action space based on the neural network's action priors.
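To make "MCTS-like tree search guided by network priors" concrete, here is a generic PUCT selection rule in Python (the constant and field names are my illustrative assumptions, not MuZero's actual code):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float          # P(s, a) from the policy network
    visit_count: int = 0
    value_sum: float = 0.0
    children: dict = field(default_factory=dict)  # action -> Node

    def q(self):
        # Mean value of simulations through this node (0 if unvisited).
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.25):
    """PUCT: argmax over a of Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    n_total = sum(ch.visit_count for ch in node.children.values())
    def score(item):
        _, ch = item
        return ch.q() + c_puct * ch.prior * math.sqrt(n_total) / (1 + ch.visit_count)
    return max(node.children.items(), key=score)
```

The discrete-action prerequisite shows up in `children` being a finite dict; a continuous action space would first have to be quantized into such a dict, e.g. by sampling candidate actions from the network's priors.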
Perhaps a satisfactory answer can be found in "Jewish Philosophy as a Guide to Life: Rosenzweig, Buber, Levinas, Wittgenstein" by Hilary Putnam (who seemed to me to be a reasonable philosopher, but converted to Judaism). I've just started listening to its audiobook version, prompted by this post.
At high-school level, physics has perhaps the richest tightly-knit concept structures.
Including signaling "thanks" to the university. :-)
Reminds me of the error -- on a charitable reading, of the characters, but perhaps of the author -- in "Permutation City". There's no such thing as out-of-order simulation.
Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated in a given context. So in the example with stars moving away, the stars are still observables, because there is a counterfactual context in which we observe them from nearby (by traveling with them, etc.).
(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary", I mean "from Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation (a.k.a. demon) argument recurring in philosophy.
Modern philosophy is just a set of notes in the margins of Descartes' "Meditations".
All our values are fallible, but doubt requires justification.
Persons do not have fixed value systems anyway. A value system is a partly-physiologically-implemented theory of what is valuable (good, right, etc.). One can recognize a better theory and try to make one's habits and reactions fit it. Pedophilia is bad if it promotes a shallower reaction to a young person, and good if it promotes a richer reaction; it depends on the particulars of the brain implementing it. Abusing anyone is bad.
For the Nazis' hatred of Jews to be rational, it is not necessary that there be reasons for hating Jews, only that the reasons for not hating Jews not outweigh the reasons for hating them. But their reasons for hating Jews are either self-contradictory or, when properly worked out, in fact support not hating Jews.
I liked "Diaspora" more.
Let me get this straight. You want to promote the short-circuiting of the mental circuit of promotion?
If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.
Set up automatic filters.
As a function of how long the universe will exist? ETA: a short period of time might be significantly located.
The absurd claim is "there is nothing you ought to do or ought to not do". The claim "life is tough" is not absurd. ETA: existentialism in the absurdist flavor (as opposed to for example the Christian flavor) is a form of value anti-realism which is not nihilism. It denies that there are values that could guide choices, but puts intrinsic value into making choices.
I would still be curious how much I can get out of life in billions of years.
I do not strongly believe the claim; I'm just laying it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and lack of relevance to rationality and to the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over the long term).
Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.
The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.
The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness. I expect we will learn more about consciousness to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.
I'd like to add that if the curriculum distinguishes between "probability" and "statistics", it is taught in the "probability" class. Much later, the statistics class has a "frequentist" part and a "Bayesian" part.
The inflationary multiverse is essentially infinite. But if you take a slice through (a part of) the multiverse, there are far more young universes than old ones: the proportion of universes of a given age falls off exponentially with age (as in a memoryless distribution). This resolves the doomsday paradox (because our universe is very young relative to its lifespan). http://youtu.be/qbwcrEfQDHU?t=32m10s
Another argument to similar effect would be to consider a measure over possible indices. Indices pointing into old times would be less probable -- by needing more bits to encode -- than indices pointing to young times.
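To make the bit-counting concrete, a minimal sketch, assuming indices n are encoded with an Elias-style prefix-free code of length ℓ(n) ≈ log₂ n + 2 log₂ log₂ n (the particular code is an illustrative choice):

$$P(n) \propto 2^{-\ell(n)} \approx \frac{1}{n\,(\log_2 n)^2},$$

which is summable over n yet concentrates almost all of the measure on small indices, i.e. on young times.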
Our universe might be very old on this picture (relative to the measure), so the conclusion regarding Fermi paradox is to update towards the "great filter in the past" hypothesis. (It's more probable to be the first observer-philosopher having these considerations in one's corner of a universe.)
I'm glad to see Mark Waser cited and discussed, I think he was omitted in a former draft but I might misremember. ETA: I misremembered, I've confused it with http://friendly-ai.com/faq.html which has an explicitly narrower focus.
We should continue growing so that we join the superintelligentsia.
Although I wouldn't say this, I don't see how my comment contradicts this.
Let's take "the sexual objectification of women in some advertisement" as an example. Do you mean that sexual objectification takes place when the actress feels bad about appearing in an erotic context and agreed only because of the commercial incentive, or something similar? ETA: I guess objectification generally means not treating someone as a person. With a focus on this explication, objectification in (working on) a film (an advertisement is a short film) would occur when the director does not collaborate with the actors, but rather is authoritarian in demanding that the actors fit his vision. ETA2: and objectification in the content of a film would be depicting an act of someone not treating another as a person; in the case of "sexual objectification", depicting sexual violence.
I see it this way. It is "objectification" when it's used to attract attention. It's "for the purpose of appreciation" when it's used to enrich emotional reaction (usually of the aesthetic evaluation, but sometimes of the moral evaluation). So it is hard to say just by the content, but if the content is both erotic and boring it's objectification.
You might be interested in reading TDT chapter 5 "Is Decision-Dependency Fair" if you haven't already.
I mean learning Prolog in the way it would be taught in a "Programming Languages" course, not as an attempt at facilitating AI. Two angles are important here: (1) programming-paradigm features: learning the concept of late-bound / dataflow / "logical" variables. http://en.wikipedia.org/wiki/Oz_(programming_language) is an OK substitute. (2) logic, which is also something to be taught in a "Programming Languages" context, not (only) in an AI context. With Prolog, this means learning about SLD-resolution and perhaps making some broader forays from there. But one could also explore connections between functional programming and intuitionistic logics.
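For angle (2), here is a minimal sketch of what SLD-resolution amounts to, in Python rather than Prolog (names and term representation are my illustrative choices): terms are tuples, variables are capitalized strings, and a query is answered by unification plus depth-first backward chaining.

```python
import itertools

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst):
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term):
    # Fresh copy of a rule's variables for each use.
    mapping = {}
    def go(t):
        if is_var(t):
            return mapping.setdefault(t, f"{t}_{next(_fresh)}")
        if isinstance(t, tuple):
            return tuple(go(u) for u in t)
        return t
    return go(term)

def solve(goals, rules, subst):
    # SLD-resolution: resolve the first goal against each rule head,
    # prepending the rule body to the remaining goals.
    if not goals:
        yield subst
        return
    goal, rest = goals[0], goals[1:]
    for rule in rules:
        head, body = rename(rule)
        s = unify(goal, head, subst)
        if s is not None:
            yield from solve(list(body) + rest, rules, s)

# Example knowledge base: facts are rules with empty bodies.
rules = [
    (("parent", "alice", "bob"), ()),
    (("parent", "bob", "carol"), ()),
    (("ancestor", "X", "Y"), (("parent", "X", "Y"),)),
    (("ancestor", "X", "Y"), (("parent", "X", "Z"), ("ancestor", "Z", "Y"))),
]
for s in solve([("ancestor", "alice", "Who")], rules, {}):
    print(walk("Who", s))  # -> bob, then carol
```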
OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.
Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to "verifying if Omega was right" -- if Omega is right "a priori" then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, "verifying" or "being playful" might mean, for example, tossing a coin to decide.
An interesting problem with CEV is demonstrated in chapter 5, "On the Rationality of Preferences", of Hilary Putnam's "The Collapse of the Fact/Value Dichotomy and Other Essays". The problem is that a person might assign value to a choice of preference, underdetermined at a given time, being made of her own free will.
I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress; there can be multiple "basins of attraction". Some of the dynamics consist of decreasing inconsistencies and increasing robustness.
I agree. In case it's not clear, my opinion is that an essential part of being a person is developing one's value system. It's not something that you can entirely outsource, because "the journey is part of the destination" (but of course any help one can get matters), and it's not a requirement for having ethical people or AI. ETA: i.e. having a fixed value system is not a requirement for being ethical.
The last forbidden transition would be the very last one, since it's outright wrong while the previous ones do seem to have reasons behind them.
Valuing everything means you want to go as far from nothingness as you can get. You value more types being instantiated over fewer.
Logically.
By letting people evolve their values at their own pace, within ethical boundaries.