Comment by torekp on Clickbait might not be destroying our general Intelligence · 2018-11-19T17:12:29.982Z · score: 4 (3 votes) · LW · GW
how to quote

Paste text into your comment and then select/highlight it. Formatting options will appear, including a quote button.

Comment by torekp on Embedded Agency (full-text version) · 2018-11-18T18:33:51.156Z · score: 2 (2 votes) · LW · GW
People often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can't perfectly know the hardware it is running on.

How could Emmy, an embedded agent, know its source code perfectly, or even be certain that it is a computing device under the Church-Turing definition? Such certainty would seem dogmatic. Without such certainty, the choice of 10 rather than 5 cannot be firmly classified as an error. (The classification as an error seemed to play an important role in your discussion.) So Emmy has a motivation to keep looking and find that U(10)=10.

Comment by torekp on Sam Harris and the Is–Ought Gap · 2018-11-17T18:08:54.892Z · score: 2 (2 votes) · LW · GW

Thanks for making point 2. Moral oughts need not motivate sociopaths, who sometimes admit (when there is no cost to doing so) that they've done wrong and just don't give a damn. The "is-ought" gap is better relabeled the "thought-motivation" gap. "Ought"s are thoughts; motives are something else.

Comment by torekp on Decision Theory Anti-realism · 2018-10-10T10:52:33.307Z · score: 3 (2 votes) · LW · GW

Technicalities: Under Possible Precisifications, 1 and 5 are not obviously different. I can interpret them differently, but I think you should clarify them. 2 is to 3 as 4 is to 1, so I suggest listing them in that order, and maybe adding an option that is to 3 as 5 is to 1.

Substance: I think you're passing over a bigger target for criticism, the notion of "outcomes". In general, agents can and do have preferences over decision processes themselves, as contrasted with the standard "outcomes" in most of the literature, like winning or losing money or objects. For example, I can be "money pumped" in the following manner. Sell me a used luxury sedan on Monday for $10k. Trade me a Harley Davidson on Tuesday for the sedan plus my $5. Trade me a sports car on Wednesday for the Harley plus $5. Buy the sports car from me on Thursday for $9995. Oh no, I lost $15 on the total deal! Except: I got to drive, or even just admire, these different vehicles in the meantime.

If all processes and activities are fair game for rational preferences, then agents can have preferences over the riskiness of decisions, the complexity of the decision algorithm, and a host of other features that make it much more individually variable which approach is "best".

Comment by torekp on The Tails Coming Apart As Metaphor For Life · 2018-09-26T00:48:35.866Z · score: 7 (3 votes) · LW · GW

If there were no Real Moral System That You Actually Use, wouldn't you have a "meh, OK" reaction to either Pronatal Total Utilitarianism or Antinatalist Utilitarianism - perhaps whichever you happened to think of first? How would this error signal - disgust with those conclusions - be generated?

Comment by torekp on Stupid Questions September 2017 · 2017-09-18T00:42:54.629Z · score: 0 (0 votes) · LW · GW

Shouldn't a particular method of inductive reasoning be specified in order to give the question substance?

Comment by torekp on Minimizing Motivated Beliefs · 2017-09-10T13:06:30.780Z · score: 0 (0 votes) · LW · GW

Great post and great comment. Against your definition of "belief" I would offer the movie The Skeleton Key. But this doesn't detract from your main points, I think.

Comment by torekp on Inconsistent Beliefs and Charitable Giving · 2017-09-10T11:47:42.638Z · score: 0 (0 votes) · LW · GW

I think there are some pretty straightforward ways to change your true preferences. For example, if I want to become a person who values music more than I currently do, I can practice a musical instrument until I'm really good at it.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-09-02T18:59:28.403Z · score: 0 (0 votes) · LW · GW

I don't say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C ... causes T, where T is the talk; the talk can still be about A. It just can't be about Z, where Z is something which never appears in any chain leading to T.

I just now added the caveat "basic" because you have a good point about free will. (I assume you mean contracausal "free will". I think calling that "free will" is a misnomer, but that's off topic.) Using the basic concepts "cause", "me", "action", and "thing" and combining these with logical connectives, someone can say "I caused my action and nothing caused me to cause my action" and they can label this complex concept "free will". And that may have no referent, so such "free will" never causes anything. But the basic words used to define that term do have referents, and those referents do cause the basic words to be spoken. Similarly with "unicorn", which is shorthand for (roughly) a "single-horned horse-like animal".

An eliminativist could hold that mental terms like "qualia" are referentless complex concepts, but an epiphenomenalist can't.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-30T22:58:49.796Z · score: 0 (0 votes) · LW · GW

The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You've called A "color perception A", but I aim to dispute that.)

Suppose we come across the Greenforest people, who live near newly discovered species including the greater gecko. Greenforesters use the word "gumie" always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greater geckos are infested with microscopic gyrating gnats. Gyrating gnats emit intense ultrasound, so whenever anyone is close to a greater gecko, their environment and even their brain is filled with ultrasound. When one's brain is filled with this ultrasound, the oxygen consumption by brain cells rises. Greenforesters are hunter-gatherers lacking both microscopes and ultrasound detectors.

To what does "gumie" refer: geckos, ultrasound, or neural oxygen consumption? It's a no-brainer. Greenforesters can't talk about ultrasound or neural oxygen: those things play no causal role in their talk. Even though ultrasound and neural oxygen are both inside the speakers, and in that sense affect them, since neither one affects their talk, that's not what the talk is about.

Mapping this causal structure to the epiphenomenalist story above: geckos are like photon-wavelengths R, ultrasound in the brain is like brain activity B, oxygen consumption is like "color perception" A, and utterances of "gumie" are like utterances S1 and S2. Only now I hope you can see why I put scare quotes around "color perception". Because color perception is something we can talk about.

Comment by torekp on Open thread, August 28 - September 3, 2017 · 2017-08-29T23:16:04.622Z · score: 1 (1 votes) · LW · GW

Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.

Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?

Comment by torekp on Open thread, August 28 - September 3, 2017 · 2017-08-28T22:06:17.689Z · score: 0 (0 votes) · LW · GW

Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we mean past macroscopic state), but can someone sketch the argument for me? Or give references?

For comparison, there are clear explanations available for why memory involves increasing entropy. I don't need anything that formal, but just an informal explanation of why different choices don't reliably correlate to different macroscopic events at lower-entropy (past) times.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-27T16:58:54.489Z · score: 0 (0 votes) · LW · GW

We not only stop at red lights, we make statements like S1: "subjectively, red is closer to violet than it is to green." We have cognitive access both to "objective" phenomena like the family of wavelengths coming from the traffic light, and also to "subjective" phenomena of certain low-level sensory processing outputs. The epiphenomenalist has a theory on the latter. Your steelman is well taken, given this clarification.

By the way, the fact that there is a large equivalence class of wavelength combinations that will be perceived the same way does not make redness inherently subjective. There is an objective difference between a beam of light containing a photon mix that belongs to that class, and one that doesn't. The "primary-secondary quality" distinction, as usually conceived, is misleading at best. See the Ugly Duckling theorem.

Back to "subjective" qualities: when I say subjective-red is more similar to violet than to green, to what does "subjective-red" refer? On the usual theories of how words in general refer -- see above on "horses" and cows -- it must refer to the things that cause people to say S2: "subjectively this looks red when I wear these glasses" and the like.

Suppose the epiphenomenalist is a physicalist. He believes that subjective-red is brain activity A. But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B. But now by our theory of reference, subjective-red is B, rather than A. If the epiphenomenalist is a dualist, a similar problem applies.

Comment by torekp on Open thread, August 21 - August 27, 2017 · 2017-08-26T10:21:54.480Z · score: 0 (0 votes) · LW · GW

The point is literally semantic. "Experience" refers to (to put it crudely) the things that generally cause us to say "experience", because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). "Horse" means horse because horses typically occasion the use of "horse". If there were a language in which cows typically occasioned the word "horse", in that language "horse" would mean cow.

Comment by torekp on Reaching out to people with the problems of friendly AI · 2017-05-29T14:36:06.112Z · score: 0 (0 votes) · LW · GW

I agree that non-universal-optimizers are not necessarily safe. There's a reason I wrote "many" not "all" canonical arguments. In addition to gaming the system, there's also the time honored technique of rewriting the rules. I'm concerned about possible feedback loops. Evolution brought about the values we know and love in a very specific environment. If that context changes while evolution accelerates, I foresee a problem.

Comment by torekp on Reaching out to people with the problems of friendly AI · 2017-05-28T00:12:42.672Z · score: 0 (0 votes) · LW · GW

I think the "non universal optimizer" point is crucial; that really does seem to be a weakness in many of the canonical arguments. And as you point out elsewhere, humans don't seem to be universal optimizers either. What is needed from my epistemic vantage point is either a good argument that the best AGI architectures (best for accomplishing the multi-decadal economic goals of AI builders) will turn out to be close approximations to such optimizers, or else some good evidence of the promise and pitfalls of more likely architectures.

Needless to say, that there are bad arguments for X does not constitute evidence against X.

Comment by torekp on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-27T19:45:00.813Z · score: 0 (1 votes) · LW · GW

This is the right answer, but I'd like to add emphasis on the self-referential nature of the evaluation of humans in the OP. That is, it uses human values to assess humanity, and comes up with a positive verdict. Not terribly surprising, nor terribly useful in predicting the value, in human terms, of an AI. What the analogy predicts is that evaluated by AI values, AI will probably be a wonderful thing. I don't find that very reassuring.

Comment by torekp on What Value Hermeneutics? · 2017-03-24T22:23:22.144Z · score: 0 (0 votes) · LW · GW

Well, if you narrow "metaphysics" down to "a priori First Philosophy", as the example suggests, then I'm much less enthusiastic about "metaphysics". But if it's just (as I conceive it) continuous with science, just an account of what the world contains and how it works, then we need a healthy dose of that just to get off the ground in epistemology.

Comment by torekp on What Value Hermeneutics? · 2017-03-23T22:17:33.490Z · score: 2 (2 votes) · LW · GW

The post persuasively displays some of the value of hermeneutics for philosophy and knowledge in general. Where I part ways is with the declaration that epistemology precedes metaphysics. We know far more about the world than we do about our senses. Our minds are largely outward-directed by default. What you know far exceeds what you know that you know, and what you know how you know is smaller still. The prospects for reversing cart and horse are dim to nonexistent.

Comment by torekp on Moral Philosophers as Ethical Engineers: Limits of Moral Philosophy and a Pragmatist Alternative · 2017-02-25T13:13:38.568Z · score: 0 (0 votes) · LW · GW

Mostly it's no-duh, but the article seems to set up a false contrast between justification in ethics and life practice. Yet large swaths of everyday ethical conversation are justificatory. This is a key feature that the philosopher needs to respect.

Comment by torekp on The humility argument for honesty · 2017-02-05T20:36:19.859Z · score: 0 (0 votes) · LW · GW

Nice move with the lyrical section titles.

Comment by torekp on Split Brain Does Not Lead to Split Consciousness · 2017-01-29T13:04:41.713Z · score: 7 (3 votes) · LW · GW

There's a lot of room in between fully integrated consciousness and fully split consciousness. The article seems to take a pretty simplistic approach to describing the findings.

Comment by torekp on The Non-identity Problem - Another argument in favour of classical utilitarianism · 2016-10-23T11:46:43.548Z · score: 0 (0 votes) · LW · GW

Here's another case of non-identity, which deserves more attention: having a child. This one's not even hypothetical. There is always a chance to conceive a child with some horrible birth defect that results in suffering followed by death, a life worse than nothing. But there is a far greater chance of having a child with a very good life. The latter chance morally outweighs the former.

Comment by torekp on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-13T00:36:16.166Z · score: 0 (2 votes) · LW · GW

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. Rumination was healthy in the former.

Comment by torekp on Biofuels a climate mistake · 2016-10-13T00:27:16.110Z · score: 0 (0 votes) · LW · GW

The linked paper is only about current practices, their benefits and harms. You're right though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-04T01:43:06.380Z · score: 0 (0 votes) · LW · GW

I got started by Sharvy's "It Ain't the Meat It's the Motion", but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-02T12:08:08.292Z · score: 0 (0 votes) · LW · GW

I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-10-01T12:38:10.810Z · score: 0 (0 votes) · LW · GW

You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.

Mentioning something is not a prerequisite for having it.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-09-29T22:55:42.133Z · score: 0 (0 votes) · LW · GW

I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.

I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.

Comment by torekp on Learning values versus learning knowledge · 2016-09-18T20:26:44.395Z · score: 0 (0 votes) · LW · GW

I think that in order to make more progress on this, an extensive answer to the whole blue-minimizing robot sequence would be a way to go. A lot of effort seems to be devoted to answering puzzles like: the AI cares about A; what input will cause it to (also/only) care about B? But this is premature if we don't know how to characterize "the AI cares about A".

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-09-08T02:21:40.692Z · score: 1 (1 votes) · LW · GW

It depends how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

Further info on my position.

Comment by torekp on The map of ideas how the Universe appeared from nothing · 2016-09-05T11:52:36.644Z · score: 0 (0 votes) · LW · GW

This. And if one is willing to entertain Tegmark, approximately 100% of universes will be non-empty, so the epistemic question "why a non-empty universe?" gets no more bite than the ontological one.

Comment by torekp on Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link] · 2016-09-05T11:34:37.374Z · score: 1 (1 votes) · LW · GW

The author is overly concerned about whether a creature will be conscious at all and not enough concerned about whether it will have the kind of experiences that we care about.

Comment by torekp on The map of the risks of aliens · 2016-08-29T22:23:30.023Z · score: 1 (1 votes) · LW · GW

Can you please clarify "our reference class"? And are you using some form of Self-Sampling Assumption?

Comment by torekp on Open Thread, Aug. 15. - Aug 21. 2016 · 2016-08-28T17:02:17.609Z · score: 0 (0 votes) · LW · GW

Belated thanks to you and MrMind, these answers were very helpful.

Comment by torekp on Open Thread, Aug. 15. - Aug 21. 2016 · 2016-08-16T00:52:48.680Z · score: 2 (2 votes) · LW · GW

Can someone sketch me the Many-Worlds version of what happens in the delayed choice quantum eraser experiment? Does a last-minute choice to preserve or erase the which-path information affect which "worlds" decohere "away from" the experimenter? If so, how does that go, in broad outline? If not, what?

Comment by torekp on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-13T00:51:46.782Z · score: 0 (0 votes) · LW · GW

Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world

What is the notion of "includes" here? Edit: from pp 4-5:

This means that a superintelligent machine could simulate the behavior of an arbitrary Turing machine on arbitrary input, and hence for our purpose the superintelligent machine is a (possibly identical) super-set of the Turing machines. Indeed, quoting Turing, “a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”

Comment by torekp on Pluralistic Existence in Many Many-Worlds · 2016-05-15T15:45:11.950Z · score: 1 (1 votes) · LW · GW

Let's start with an example: my length-in-meters, along the major axis, rounded to the nearest integer, is 2. In this statement "2", "rounded to nearest integer", and "major axis" are clearly mathematical; while "length-in-meters" and "my (me)" are not obviously mathematical. The question is how to cash out these terms or properties into mathematics.

We could try to find a mathematical feature that defines "length-in-meters", but how is that supposed to work? We could talk about the distance light travels in 1 / 299,792,458 seconds, but now we've introduced both "seconds" and "light". The problem (if you consider non-mathematical language a problem) just seems to be getting worse.

Additionally, if every apparently non-mathematical concept is just disguised mathematics, then for any given real world object, there is a mathematical structure that maps to that object and no other object. That seems implausible. Possibly analogous, in some way I can't put my finger on: the Ugly Duckling theorem.

Comment by torekp on My Kind of Moral Responsibility · 2016-05-06T01:28:14.895Z · score: 1 (1 votes) · LW · GW

Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.

This. I call the inference "no X at the microlevel, therefore no such thing as X" the Cherry Pion fallacy. (As in: no cherry pions implies no cherry pie.) Of course, more broadly speaking it's an instance of the fallacy of composition, but this variety seems to be more tempting than most, so it merits its own moniker.

It's a shame. The OP begins with some great questions, and goes on to consider relevant observations like

When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent.

But from there, the obvious move is one of charitable interpretation, saying, Hey! Responsibility is declared in these sorts of situations, when an agent has caused an event that wouldn't have happened without her, so maybe, "responsibility" means something like "the agent caused an event that wouldn't have happened without her". Then one could find counterexamples to this first formulation, and come up with a new formulation that got the new (and old) examples right ... and so on.

Comment by torekp on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-17T14:15:13.292Z · score: 0 (0 votes) · LW · GW

Go through a Venn diagram explanation of Bayes's Theorem. Not necessarily the formula, but just a graphical representation of updating on evidence. Draw attention to the distribution of probability of H between E and not-E. Point out that if the probability of H doesn't go down upon the discovery of not E, it can't possibly go up upon the discovery of E.
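To spell out that last step (a quick algebraic sketch of my own, beyond what the diagram shows): by the law of total probability, P(H) = P(H|E)·P(E) + P(H|not-E)·P(not-E), so P(H) is a weighted average of P(H|E) and P(H|not-E). If observing E is to push the probability of H above that average, then observing not-E must pull it below; a hypothesis that no observation could lower is a hypothesis that no observation can raise.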

This has the advantage of showing the requirement of falsifiability to be an extreme case of a more powerful general principle.

This could be supplemental to some of the great suggestions by your other commenters.

Comment by torekp on Rationality Reading Group: Part W: Quantified Humanism · 2016-04-03T00:27:38.017Z · score: 1 (1 votes) · LW · GW

Sure, if there were more people answering the poll, there'd probably be some that took the Axiom of Independence, and/or expected utility theory, in the way you worried about. It's a fair point. But so far I'm the only skeptical vote.

Comment by torekp on Rationality Reading Group: Part W: Quantified Humanism · 2016-04-02T19:34:19.510Z · score: 2 (2 votes) · LW · GW

The poll question takes the Axiom to be a normative principle, not a day to day recipe for every decision. I agree that the case for it as a normative principle is better than taking it as a prescription. I just don't think it's a completely convincing case.

I agree with Wei Dai's remark that

the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)

If a Dutchman throws a book at you - duck! You don't need to be the sort of agent to whom expected utility theory applies.

The deep reason why utility theory fails to be required by rationality is that there is no general separability between the decision process itself and the "outcomes" that agents care about. I'm putting "outcomes" in scare quotes because the term strongly suggests that what matters is the destination, not the journey (where the journey includes the decision process and its features, such as risk).

There are many particular occasions, at least for many agents (including me), on which there is such separability. That's why I find expected utility theory useful. But rationally required? Not so much.

Here's a toy version of the journey/destination problem. (I think I'm borrowing from Kaj Sotala, who probably said it better, but I can't find the original.) Suppose I sell my convertible Monday for $5000 and buy an SUV for $5010. On Tuesday I sell the SUV for $5000 and buy a Harley for $5010. On Wednesday I sell the Harley for $5000 and buy the original convertible back for $5010. Oh no, I've been money pumped! Except, wait - I got to drive a different vehicle each day, something that I enjoy. I'm out $30, but that might be a small price to pay for the privilege. This example doesn't involve risk per se, but does illustrate the care needed to avoid defining "outcomes" in such a way as to avoid begging questions against an agent's values.

Comment by torekp on Rationality Reading Group: Part W: Quantified Humanism · 2016-04-01T22:33:48.593Z · score: 1 (1 votes) · LW · GW

So after reading the Allais paradox posts or being otherwise familiar with the topic, what do lesswrongers think? [pollid:1133]

Comment by torekp on Genetic "Nature" is cultural too · 2016-03-18T22:44:22.235Z · score: 3 (3 votes) · LW · GW

particularly that they measure the variance of a factor rather than its absolute importance (and hence you get results like variation in nutrition being almost invisible as an explanation for variation in height)

Excellent point, which deserves some elaboration. Suppose that very high doses of vitamin K dramatically increase height, but that almost nobody is experimenting with such doses. Then a heritability study will find that environment contributes little to the variation in height - but that's usually not what we want to know. What we want to know is more likely something like, what steps can I take to have tall children?

Comment by torekp on AIFoom Debate - conclusion? · 2016-03-08T02:15:16.069Z · score: 1 (1 votes) · LW · GW

I suggest a different reason not to waste your time with the foom debate: even a non-fooming process may be unstoppable. Consider the institutions of the state and the corporation. Each was a long time coming. Each was hotly contested and had plenty of opponents, who failed to stop it or radically change it. Each changed human life in ways that are not obviously and uniformly for the better.

Comment by torekp on AIFoom Debate - conclusion? · 2016-03-08T02:05:39.299Z · score: 1 (1 votes) · LW · GW

There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases of course, but it shows we are far from perfect.

To riff on your theme a little bit, maybe one area where genetic algorithms (or other comparably "simplistic" approaches) could shine is in the design of computer algorithms, or some important features thereof.

Comment by torekp on Is Spirituality Irrational? · 2016-03-07T22:52:35.797Z · score: 2 (2 votes) · LW · GW

Religious beliefs and subjective experiences are quite separate things

I would like to take this opportunity to note that "religious beliefs" is not redundant; that belief is not even a particularly important part of many religions. Not that you said anything to the contrary. But to a lot of readers of this site, Bible-thumping Christians, to whom belief is paramount, are over-represented in the mental prototype of "religion".

Comment by torekp on The Philosophical Implications of Quantum Information Theory · 2016-03-07T22:44:53.799Z · score: 0 (0 votes) · LW · GW

if P(B) is well defined without reference to A

You're right. Good point.

it's an n-squared-minus-one-way street

Don't you mean n-factorial? Anyway, ... hmm, I need to think about this more.

Comment by torekp on The Philosophical Implications of Quantum Information Theory · 2016-03-05T14:15:57.478Z · score: 0 (0 votes) · LW · GW

That's what "causal relationship" means.

I disagree. Following Pearl, I define "A causes B" to mean something like: (DO:A) raises the probability of B.
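In symbols, roughly (my gloss on the Pearl-style notation): P(B | do(A)) > P(B), i.e. intervening to bring about A makes B more probable than it would otherwise be.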

Bob's choice in the evening to make strong measurements along the beta-axis raises the probability that Alice's noon measurements along the beta-axis were the ones that showed the best correlation. It doesn't raise the probability of any individual measurement being up or down, but that's OK. Even on a many-worlds interpretation, where perhaps every digital up/down pattern happens in some "world" and the overall multi-world distribution is invariant, "probability" refers to what happens in our "world", so again that's OK.

Correlation can only be observed after the fact, in the evening, not at noon. So isn't this just a case of Bob affecting Bob+Alice's immediate future, where they go over the results? Why do I say Bob's choice affected Alice's results? Because correlation is a two-way street, and in this case there isn't much traffic in the forward direction. Alice's measurements only weakly affect Bob's results.

Comment by torekp on The Philosophical Implications of Quantum Information Theory · 2016-03-02T01:58:37.869Z · score: 0 (0 votes) · LW · GW

Thanks, this helped me fill in some gaps. In Ron Garret's piece that you linked above, a comment has a link to a very nice article by Aharonov et al., "Can a Future Choice Affect a Past Measurement's Outcome?" (Hint: yes.)

On desiring subjective states (post 3 of 3) · 2015-05-05T02:16:01.543Z · score: 7 (14 votes)

The language of desire (post 2 of 3) · 2015-05-03T21:57:33.050Z · score: 1 (4 votes)

Gasoline Gal looks under the hood (post 1 of 3) · 2015-05-03T20:15:49.744Z · score: 4 (9 votes)

[LINK] Prisoner's Dilemma? Not So Much · 2014-05-20T23:38:19.450Z · score: 4 (7 votes)

Robots ate my job [links] · 2012-04-10T01:57:39.019Z · score: 5 (8 votes)