Posts

On desiring subjective states (post 3 of 3) 2015-05-05T02:16:01.543Z
The language of desire (post 2 of 3) 2015-05-03T21:57:33.050Z
Gasoline Gal looks under the hood (post 1 of 3) 2015-05-03T20:15:49.744Z
[LINK] Prisoner's Dilemma? Not So Much 2014-05-20T23:38:19.450Z
Robots ate my job [links] 2012-04-10T01:57:39.019Z

Comments

Comment by torekp on Is CDT with precommitment enough? · 2024-05-25T23:08:15.046Z · LW · GW

Given the disagreement over what "causality" is, I suspect that different CDTs might have different tolerances for adding precommitment without spoiling the point of CDT.  For an example of a definition of causality that has interesting implications for decision theory, see Douglas Kutach, Causation and its Basis in Fundamental Physics.  There's a nice review here.  Defining "causation" Kutach's way would allow both making and keeping precommitments to count as causing good results.  It would also at least partly collapse the divergence between CDT and EDT.  Maybe completely - I haven't thought that through yet.

Comment by torekp on When is a mind me? · 2024-04-19T11:15:19.422Z · LW · GW

Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person.  Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.

Comment by torekp on When is a mind me? · 2024-04-19T11:03:41.920Z · LW · GW

I have a closely related objection/clarification.  I agree with the main thrust of Rob's post, but this part:

Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you"...

Rather, I assume xlr8harder cares about more substantive questions like:  (1) If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? (2) Should I anticipate experiencing what my upload experiences? (3) If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure?

...strikes me as confused or at least confusing.

Take your chemistry/physics tests example.  What does "I anticipate the experience of a sense of accomplishment in answering the chemistry test" mean?  Well for one thing, it certainly indicates that you believe the experience is likely to happen (to someone).  For another, it often means that you believe it will happen to you - but that invites the semantic question that Rob says this isn't about.  For a third - and I propose that this is a key point that makes us feel there is a "substantive" question here - it indicates that you empathize with this future person who does well on the test.

But I don't see how empathizing or not-empathizing can be assessed for accuracy.  It can be consistent or inconsistent with the things one cares about, which I suppose makes it subject to rational evaluation, but that looks different from accuracy/inaccuracy.

Comment by torekp on [Valence series] 2. Valence & Normativity · 2023-12-11T02:18:26.100Z · LW · GW

I'm not at all convinced by the claim that <valence is a roughly linear function over included concepts>, if I may paraphrase.  After laying out a counterexample, you seem to be constructing a separate family of concepts that better fits a linear model.  But (a) this is post-hoc and potentially ad-hoc, and (b) you've given us little reason to expect that there will always be such a family of concepts.  It would help if you could outline how a privileged set of concepts arises for a given person, one that will explain their valences.

Also, your definition of "innate drives" works for the purpose of collecting all valences into a category explained by one basket of root causes.  But it's a diverse basket.  I think you're missing the opportunity to make a distinction -- Wanting vs. Liking Revisited -- which is useful for understanding human motivations.

Comment by torekp on The Puritans would one-box: evidential decision theory in the 17th century · 2023-10-17T02:06:00.718Z · LW · GW

When dealing with theology, you need to be careful about invoking common sense. According to https://www.thegospelcoalition.org/themelios/article/tensions-in-calvins-idea-of-predestination/ , Calvin held that God's destiny for a human being is decided in eternity, not at some point within time prior to that person's prayer, hard work, etc.

The money (or heaven) is already in the box. Omega (or God) cannot change the outcome.

What makes this kind of reasoning work in the real (natural) world is the growth of entropy involved in putting money in boxes, deciding to do so, or thinking about whether the money is there. If we're taking theology seriously, though - or maybe even when we posit an "Omega" with magical-sounding powers - we need to wonder whether the usual rules still apply.

Comment by torekp on Arguments for optimism on AI Alignment (I don't endorse this version, will reupload a new version soon.) · 2023-10-16T17:02:19.984Z · LW · GW

I view your final point as crucial. I would put an additional twist on it, though. During the approach to AGI, if takeoff is even a little bit slow, the effective goals of the system can change. For example, most corporations arguably don't pursue profit exclusively even though they may be officially bound to. They favor executives, board members, and key employees in ways both subtle and obvious. But explicitly programming those goals into an SGD algorithm is probably too blatant to get away with.

Comment by torekp on Everybody Knows · 2023-09-10T14:14:24.407Z · LW · GW

In addition to your cases that fail to be explained by the four modes, I submit that Leonard Cohen's song itself also fails to fit.  Roughly speaking, one thread of meaning in these verses is that "(approximately) everybody knows the dice are loaded, but they don't raise a fuss because they know if they do, they'll be subjected to an even more unfavorable game."  And likewise for the lost war.  A second thread of meaning is that, as pjeby pointed out, people want to be at peace with unpleasant things they can't personally change.  It's not about trapping the listener into agreeing with the propositions that everyone supposedly knows.  Cohen's protagonist just takes it that the listener already agrees, and uses that to explain his own reaction to the betrayal he feels.

Comment by torekp on Consciousness as a conflationary alliance term for intrinsically valued internal experiences · 2023-07-17T23:21:01.217Z · LW · GW

Like Paradiddle, I worry about the methodology, but my worry is different.  It's not just the conclusions that are suspect in my view:  it's the data.  In particular, this --

Some people seemed to have multiple views on what consciousness is, in which cases I talked to them longer until they became fairly committed to one main idea.

-- is a serious problem.  You are basically forcing your subjects to treat a cluster in thingspace as if it must be definable by a single property or process.  Or perhaps they perceive you as urging them to pick a most important property.  If I had to pick a single most important property of consciousness, I'd pick affect (responses 4, 5 and 6), but that doesn't mean I think affect exhausts consciousness.  Analogously, if you ask me for the single most important thing about a car, I'll tell you that it gets one from point A to point B; but this doesn't mean that's my definition of "car".

This is not to deny that "consciousness" is ambiguous!  I agree that it is.  I'm not sure whether that's all that problematic, however.  There are good reasons for everyday English speakers to group related aspects together.  And when philosophers or neuroscientists try to answer questions about consciousness, in its various aspects which raise different questions, they typically clue you in as to which aspects they are addressing.

Comment by torekp on Why it's so hard to talk about Consciousness · 2023-07-05T23:49:44.812Z · LW · GW

this [that there is no ground truth as to what you experience] is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists.

I beg to differ.  The thrust of Dennett's statement is easily interpreted as the truth of a description being partially constituted by the subject's acceptance of the description.  E.g., in one of the snippets/bits you cite, "I seem to see a pink ring."  If the subject said "I seem to see a reddish oval", perhaps that would have been true.  But compare:

My freely drinking tea rather than coffee is partially constituted by saying to my host "tea, please."  Yet there is still an actual event of my freely drinking tea.  Even though if I had said "coffee, please" I probably would have drunk coffee instead.

We are getting into a zone where it is hard to tell what is a verbal issue and what is a substantive one.  (And in my view, that's because the distinction is inherently fuzzy.)  But that's life.

Comment by torekp on Why it's so hard to talk about Consciousness · 2023-07-05T00:32:59.504Z · LW · GW

Fair point about the experience itself vs its description.  But note that all the controversy is about the descriptions.  "Qualia" is a descriptor, "sensation" is a descriptor, etc.  Even "illusionists" about qualia don't deny that people experience things.

Comment by torekp on Why it's so hard to talk about Consciousness · 2023-07-04T20:28:10.686Z · LW · GW

You get a lot right about the stubbornness of this problem/discussion.  Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters.  But now I'm going to complain about what I see as your missteps.

Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.

I think we need to be careful not to mush together metaphysics and epistemics.  A conceptual mystery, a felt lack of explanation - these are epistemic problems.  That's not sufficient reason to infer distinct metaphysical categories.  Particular camp #2 philosophers sometimes have arguments that try to go from these epistemic premises, plus additional premises, to a metaphysical divide between mental and physical properties.  Those arguments fail, but aside from that, it's worthwhile to distinguish their starting points from their conclusions.

Secondly, you imply that according to camp #2, statements like "I experienced a headache" cannot be mistaken.  As TAG already pointed out, the claim of incorrigibility is not necessary.  As soon as one uses a word or concept, one is risking error.  Suppose you are at a new restaurant, and you try the soup, and you say, "this soup tastes like chicken."  Your neighbor says, "no, it tastes like turkey."  You think about it, the taste still fresh in your mind, and realize that she is right.  It tastes (to you) like turkey; you just misidentified it.

Finally, a bit like shminux, I don't know which camp I'm in - except that I do, and it's neither.  Call mine camp 1.5 + 3i.  It's sort of in between the main two (hence 1.5) but accuses both 1 and 2 of creating imaginary barriers (hence the 3i).

Comment by torekp on Why it's so hard to talk about Consciousness · 2023-07-04T19:29:38.925Z · LW · GW

The belief in irreducibility is much more of a sine qua non of qualiaphobia,

Can you explain that?  It seems that plenty of qualiaphiles believe qualia are irreducible, epistemically if not metaphysically.  (But not all:  at least some qualiaphiles think qualia are emergent metaphysically.  So, I can't explain what you wrote by supposing you had a simple typo.)

Comment by torekp on Open & Welcome Thread — March 2023 · 2023-03-26T19:29:43.037Z · LW · GW

I think you can avoid the reddit user's criticism if you go for an intermediate risk-averse policy. On that policy, there being at least one world without catastrophe is highly important, but additional worlds also count more heavily than a standard utilitarian would say, up until good worlds approach about half (1/e?) the weight using the Born rule.

However, the setup seems to assume that there is little enough competition that "we" can choose a QRNG approach without being left behind. You touch on related issues when discussing costs, but this merits separate consideration.

Comment by torekp on The Preference Fulfillment Hypothesis · 2023-02-26T20:22:34.331Z · LW · GW

"People on the autistic spectrum may also have the experience of understanding other people better than neurotypicals do."

I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person's position. The AI will need an explicit model. If in addition the AI has superior cognitive abilities to the humans it's working with - or expects to become superior - it's not clear why simulation would be a good approach for it. Yes, that works for humans, with their hardware accelerators and their clunky explicit modeling, but...

I read you as saying that simulation is what makes preference satisfaction a natural thing to do. If I misread, please clarify.

Comment by torekp on A Critique of Functional Decision Theory · 2023-01-09T00:02:55.341Z · LW · GW

Update:  John Collins says that "Causal Decision Theory" is a misnomer because (some?) classical formulations make subjunctive conditionals, not causality as such, central.  He is cited in the Wolfgang Schwarz paper mentioned by wdmcaskill in the Introduction.

Comment by torekp on A Critique of Functional Decision Theory · 2023-01-08T14:41:02.565Z · LW · GW

I have a terminological question about Causal Decision Theory.

Most often, this [causal probability function] is interpreted in counterfactual terms (so P(SA) represents something like the probability of S coming about were I to choose A) but it needn't be.

Now it seems to me that causation is understood to be antisymmetric, i.e. we can have at most one of "A causes B" and "B causes A".  In contrast, counterfactuals are not antisymmetric, and "if I chose A then my simulation would also do so" and "If my simulation chose A then I would also do so" are both true.  Brian Hedden's Counterfactual Decision Theory seems like a version of FDT.
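
To make the contrast explicit, here is a rough formalization in notation of my own choosing (none of it is from the post). Write $C(A,B)$ for "A causes B" and $A \,\Box\!\!\rightarrow\, B$ for the counterfactual "if A were the case, B would be the case". The antisymmetry point is then

$$A \neq B \;\Rightarrow\; \neg\big(C(A,B) \wedge C(B,A)\big),$$

while counterfactual dependence carries no such constraint: $(A \,\Box\!\!\rightarrow\, B) \wedge (B \,\Box\!\!\rightarrow\, A)$ can both hold, as with $A$ = "I choose to one-box" and $B$ = "my simulation chooses to one-box".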

Maybe I am reading the quoted sentence without taking context sufficiently into account, and I should understand "causal counterfactual" where "counterfactual" was written.  Still, in that case, I think it's worth noting that antisymmetry is a distinguishing mark of CDT in contrast to FDT.

Comment by torekp on 100 Tips for a Better Life · 2020-12-24T21:02:50.975Z · LW · GW

I love #38

A time-traveller from 2030 appears and tells you your plan failed. Which part of your plan do you think is the one ...?

And I try to use it on arguments and explanations.

Comment by torekp on The Solomonoff Prior is Malign · 2020-10-15T09:06:12.993Z · LW · GW

Right, you're interested in syntactic measures of information, more than physical ones.  My bad.

Comment by torekp on The Solomonoff Prior is Malign · 2020-10-15T01:01:53.356Z · LW · GW

the initial conditions of the universe are simpler than the initial conditions of Earth.

This seems to violate a conservation of information principle in quantum mechanics.

Comment by torekp on The Road to Mazedom · 2020-10-11T11:39:37.035Z · LW · GW

On #4, which I agree is important, there seems to be some explanation left implicit or left out.

#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem.

But middle managers who are good at producing actual results will therefore want to decrease mazedom, in order that their competence be recognized.  Is it, then, that incompetent people will be disproportionately attracted to - and capable of crowding others out from - middle management?  That they will be attracted is a no-brainer, but that they will crowd others out seems to depend on further conditions not specified.  For example, if an organization lets people advance in two ways, one through middle management, another through technical fields, then it naturally diverts the competent away from middle management.  But short of some such mechanism, it seems that mazedom in middle management is up for grabs.

Comment by torekp on Book review: Rethinking Consciousness · 2020-07-06T00:52:11.542Z · LW · GW

When I read

To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! [...] But it turns out, he wants to be one level up!

I thought, thank goodness, Graziano (and steve2152) gets it. But in the moral implications section, you immediately start talking about attention schemas rather than simply attention. Attention schemas aren't necessary for consciousness or sentience; they're necessary for meta-consciousness. I don't mean to deny that meta-consciousness is also morally important, but it strikes me as a bad move to skip right over simple consciousness.

This may make little difference to your main points. I agree that "There are (presumably) computations that arguably involve something like an 'attention schema' but with radically alien properties." And I doubt that I could see any value in an attention schema with sufficiently alien properties, nor would I expect it to see value in my attentional system.

Comment by torekp on Clickbait might not be destroying our general Intelligence · 2018-11-19T17:12:29.982Z · LW · GW

how to quote

Paste text into your comment and then select/highlight it. Formatting options will appear, including a quote button.

Comment by torekp on Embedded Agency (full-text version) · 2018-11-18T18:33:51.156Z · LW · GW

People often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can't perfectly know the hardware it is running on.

How could Emmy, an embedded agent, know its source code perfectly, or even be certain that it is a computing device under the Church-Turing definition? Such certainty would seem dogmatic. Without such certainty, the choice of 10 rather than 5 cannot be firmly classified as an error. (The classification as an error seemed to play an important role in your discussion.) So Emmy has a motivation to keep looking and find that U(10)=10.

Comment by torekp on Sam Harris and the Is–Ought Gap · 2018-11-17T18:08:54.892Z · LW · GW

Thanks for making point 2. Moral oughts need not motivate sociopaths, who sometimes admit (when there is no cost of doing so) that they've done wrong and just don't give a damn. The "is-ought" gap is better relabeled the "thought-motivation" gap. "Ought"s are thoughts; motives are something else.

Comment by torekp on Decision Theory Anti-realism · 2018-10-10T10:52:33.307Z · LW · GW

Technicalities: Under Possible Precisifications, 1 and 5 are not obviously different. I can interpret them differently, but I think you should clarify them. 2 is to 3 as 4 is to 1, so I suggest listing them in that order, and maybe adding an option that is to 3 as 5 is to 1.

Substance: I think you're passing over a bigger target for criticism, the notion of "outcomes". In general, agents can and do have preferences over decision processes themselves, as contrasted with the standard "outcomes" of most literature like winning or losing money or objects. For example, I can be "money pumped" in the following manner. Sell me a used luxury sedan on Monday for $10k. Trade me a Harley Davidson on Tuesday for the sedan plus my $5. Trade me a sports car on Wednesday for the Harley plus $5. Buy the sports car from me on Thursday for $9995. Oh no, I lost $15 on the total deal! Except: I got to drive, or even just admire, these different vehicles in the meantime.
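
A minimal sketch of the arithmetic in that example, in Python (the labels and amounts simply restate the trades described above):

```python
# Cash flows for the "money pump" sequence above, from my point of view.
# Negative = money I pay out; positive = money I receive.
cash_flows = {
    "Monday: buy the used luxury sedan": -10_000,
    "Tuesday: trade sedan + $5 for the Harley": -5,
    "Wednesday: trade Harley + $5 for the sports car": -5,
    "Thursday: sell the sports car": 9_995,
}

net = sum(cash_flows.values())
print(f"Net cash over the week: ${net}")  # Net cash over the week: $-15
```

The $15 shortfall is the only thing the standard "money pump" accounting counts; the value of driving or admiring each vehicle in the meantime never enters the ledger.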

If all processes and activities are fair game for rational preferences, then agents can have preferences over the riskiness of decisions, the complexity of the decision algorithm, and a host of other features that make it much more individually variable which approach is "best".

Comment by torekp on The Tails Coming Apart As Metaphor For Life · 2018-09-26T00:48:35.866Z · LW · GW

If there were no Real Moral System That You Actually Use, wouldn't you have a "meh, OK" reaction to either Pronatal Total Utilitarianism or Antinatalist Utilitarianism - perhaps whichever you happened to think of first? How would this error signal - disgust with those conclusions - be generated?

Comment by torekp on Stupid Questions September 2017 · 2017-09-18T00:42:54.629Z · LW · GW

Shouldn't a particular method of inductive reasoning be specified in order to give the question substance?

Comment by torekp on Minimizing Motivated Beliefs · 2017-09-10T13:06:30.780Z · LW · GW

Great post and great comment. Against your definition of "belief" I would offer the movie The Skeleton Key. But this doesn't detract from your main points, I think.

Comment by torekp on Inconsistent Beliefs and Charitable Giving · 2017-09-10T11:47:42.638Z · LW · GW

I think there are some pretty straightforward ways to change your true preferences. For example, if I want to become a person who values music more than I currently do, I can practice a musical instrument until I'm really good at it.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-09-02T18:59:28.403Z · LW · GW

I don't say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C ... causes T, where T is the talk; the talk can still be about A. It just can't be about Z, where Z is something which never appears in any chain leading to T.

I just now added the caveat "basic" because you have a good point about free will. (I assume you mean contracausal "free will". I think calling that "free will" is a misnomer, but that's off topic.) Using the basic concepts "cause", "me", "action", and "thing" and combining these with logical connectives, someone can say "I caused my action and nothing caused me to cause my action" and they can label this complex concept "free will". And that may have no referent, so such "free will" never causes anything. But the basic words that were used to define that term do have referents, and do cause the basic words to be spoken. Similarly with "unicorn", which is shorthand for (roughly) a "single-horned, horse-like animal".

An eliminativist could hold that mental terms like "qualia" are referentless complex concepts, but an epiphenomenalist can't.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-30T22:58:49.796Z · LW · GW

The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You've called A "color perception A", but I aim to dispute that.)

Suppose we come across the Greenforest people, who live near newly discovered species including the greater geckos. Greenforesters use the word "gumie" always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greater geckos are infested with microscopic gyrating gnats. Gyrating gnats emit intense ultrasound, so whenever anyone is close to a greater gecko, their environment and even their brain are filled with ultrasound. When one's brain is filled with this ultrasound, the oxygen consumption by brain cells rises. Greenforesters are hunter-gatherers lacking both microscopes and ultrasound detectors.

To what does "gumie" refer: geckos, ultrasound, or neural oxygen consumption? It's a no-brainer. Greenforesters can't talk about ultrasound or neural oxygen: those things play no causal role in their talk. Even though ultrasound and neural oxygen are both inside the speakers, and in that sense affect them, since neither one affects their talk, that's not what the talk is about.

Mapping this causal structure to the epiphenomenalist story above: geckos are like photon-wavelengths R, ultrasound in the brain is like brain activity B, oxygen consumption is like "color perception" A, and utterances of "gumie" are like utterances S1 and S2. Only now I hope you can see why I put scare quotes around "color perception". Because color perception is something we can talk about.

Comment by torekp on Open thread, August 28 - September 3, 2017 · 2017-08-29T23:16:04.622Z · LW · GW

Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.

Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?

Comment by torekp on Open thread, August 28 - September 3, 2017 · 2017-08-28T22:06:17.689Z · LW · GW

Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we mean past macroscopic state), but can someone sketch the argument for me? Or give references?

For comparison, there are clear explanations available for why memory involves increasing entropy. I don't need anything that formal, but just an informal explanation of why different choices don't reliably correlate to different macroscopic events at lower-entropy (past) times.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-27T16:58:54.489Z · LW · GW

We not only stop at red lights, we make statements like S1: "subjectively, red is closer to violet than it is to green." We have cognitive access both to "objective" phenomena like the family of wavelengths coming from the traffic light, and also to "subjective" phenomena of certain low-level sensory processing outputs. The epiphenomenalist has a theory on the latter. Your steelman is well taken, given this clarification.

By the way, the fact that there is a large equivalence class of wavelength combinations that will be perceived the same way does not make redness inherently subjective. There is an objective difference between a beam of light containing a photon mix that belongs to that class, and one that doesn't. The "primary-secondary quality" distinction, as usually conceived, is misleading at best. See the Ugly Duckling theorem.

Back to "subjective" qualities: when I say subjective-red is more similar to violet than to green, to what does "subjective-red" refer? On the usual theories of how words in general refer -- see above on "horses" and cows -- it must refer to the things that cause people to say S2: "subjectively this looks red when I wear these glasses" and the like.

Suppose the epiphenomenalist is a physicalist. He believes that subjective-red is brain activity A. But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B. But now by our theory of reference, subjective-red is B, rather than A. If the epiphenomenalist is a dualist, a similar problem applies.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-26T10:21:54.480Z · LW · GW

The point is literally semantic. "Experience" refers to (to put it crudely) the things that generally cause us to say "experience", because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). "Horse" means horse because horses typically occasion the use of "horse". If there were a language in which cows typically occasioned the word "horse", in that language "horse" would mean cow.

Comment by torekp on Reaching out to people with the problems of friendly AI · 2017-05-29T14:36:06.112Z · LW · GW

I agree that non-universal-optimizers are not necessarily safe. There's a reason I wrote "many" not "all" canonical arguments. In addition to gaming the system, there's also the time-honored technique of rewriting the rules. I'm concerned about possible feedback loops. Evolution brought about the values we know and love in a very specific environment. If that context changes while evolution accelerates, I foresee a problem.

Comment by torekp on Reaching out to people with the problems of friendly AI · 2017-05-28T00:12:42.672Z · LW · GW

I think the "non-universal-optimizer" point is crucial; that really does seem to be a weakness in many of the canonical arguments. And as you point out elsewhere, humans don't seem to be universal optimizers either. What is needed from my epistemic vantage point is either a good argument that the best AGI architectures (best for accomplishing the multi-decadal economic goals of AI builders) will turn out to be close approximations to such optimizers, or else some good evidence of the promise and pitfalls of more likely architectures.

Needless to say, that there are bad arguments for X does not constitute evidence against X.

Comment by torekp on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-27T19:45:00.813Z · LW · GW

This is the right answer, but I'd like to add emphasis on the self-referential nature of the evaluation of humans in the OP. That is, it uses human values to assess humanity, and comes up with a positive verdict. Not terribly surprising, nor terribly useful in predicting the value, in human terms, of an AI. What the analogy predicts is that, evaluated by AI values, AI will probably be a wonderful thing. I don't find that very reassuring.

Comment by torekp on What Value Hermeneutics? · 2017-03-24T22:23:22.144Z · LW · GW

Well, if you narrow "metaphysics" down to "a priori First Philosophy", as the example suggests, then I'm much less enthusiastic about "metaphysics". But if it's just (as I conceive it) continuous with science, just an account of what the world contains and how it works, then we need a healthy dose of that just to get off the ground in epistemology.

Comment by torekp on What Value Hermeneutics? · 2017-03-23T22:17:33.490Z · LW · GW

The post persuasively displays some of the value of hermeneutics for philosophy and knowledge in general. Where I part ways is with the declaration that epistemology precedes metaphysics. We know far more about the world than we do about our senses. Our minds are largely outward-directed by default. What you know far exceeds what you know that you know, and what you know how you know is smaller still. The prospects for reversing cart and horse are dim to nonexistent.

Comment by torekp on Moral Philosophers as Ethical Engineers: Limits of Moral Philosophy and a Pragmatist Alternative · 2017-02-25T13:13:38.568Z · LW · GW

Mostly it's no-duh, but the article seems to set up a false contrast between justification in ethics and life practice. But large swaths of everyday ethical conversation are justificatory. This is a key feature that the philosopher needs to respect.

Comment by torekp on The humility argument for honesty · 2017-02-05T20:36:19.859Z · LW · GW

Nice move with the lyrical section titles.

Comment by torekp on Split Brain Does Not Lead to Split Consciousness · 2017-01-29T13:04:41.713Z · LW · GW

There's a lot of room in between fully integrated consciousness and fully split consciousness. The article seems to take a pretty simplistic approach to describing the findings.

Comment by torekp on The Non-identity Problem - Another argument in favour of classical utilitarianism · 2016-10-23T11:46:43.548Z · LW · GW

Here's another case of non-identity, which deserves more attention: having a child. This one's not even hypothetical. There is always a chance to conceive a child with some horrible birth defect that results in suffering followed by death, a life worse than nothing. But there is a far greater chance of having a child with a very good life. The latter chance morally outweighs the former.

Comment by torekp on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-13T00:36:16.166Z · LW · GW

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. It was healthy in the former.

Comment by torekp on Biofuels a climate mistake · 2016-10-13T00:27:16.110Z · LW · GW

The linked paper is only about current practices, their benefits and harms. You're right, though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-04T01:43:06.380Z · LW · GW

I got started by Sharvy's It Ain't the Meat, It's the Motion, but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-02T12:08:08.292Z · LW · GW

I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-01T12:38:10.810Z · LW · GW

You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.

Mentioning something is not a prerequisite for having it.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-09-29T22:55:42.133Z · LW · GW

I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.

I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.