Comments

Comment by ImmortalRationalist on On the ontological development of consciousness · 2020-01-27T03:00:42.816Z · LW · GW

What do you think of Avshalom Elitzur's arguments for why he reluctantly thinks interactionist dualism is the correct metaphysical theory of consciousness?

Comment by ImmortalRationalist on Open and Welcome Thread December 2018 · 2019-01-03T04:55:02.211Z · LW · GW

This is mostly arguing over semantics. Just replace "philosophical zombie" with whatever your preferred term is for a physical human who lacks any qualia.

Comment by ImmortalRationalist on Open and Welcome Thread December 2018 · 2018-12-13T23:55:26.460Z · LW · GW

Why is it that philosophical zombies are unlikely to exist? In Eliezer's article Zombies! Zombies?, the argument seemed mostly to be directed against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as its not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not possible that individuals like Dennett are themselves philosophical zombies?

Also, what are LessWrong's views on the idea of a continuous consciousness? CGPGrey brought up this issue in The Trouble with Transporters. Does a continuous self exist at all, or is our perception of being a continuous conscious entity existing throughout time just an illusion?

Comment by ImmortalRationalist on Memetic Tribes and Culture War 2.0 · 2018-09-25T11:35:58.892Z · LW · GW

This video by CGPGrey is somewhat related to the idea of memetic tribes and the conflicts that arise between them.

Comment by ImmortalRationalist on Annihilating aliens & Rare Earth suggest early filter · 2018-09-25T11:20:19.918Z · LW · GW

This is a bit unrelated to the original post, but Ted Kaczynski has an interesting hypothesis on the Great Filter, mentioned in Anti-Tech Revolution: Why and How.

But once self-propagating systems have attained global scale, two crucial differences emerge. The first difference is in the number of individuals from among which the "fittest" are selected. Self-prop systems sufficiently big and powerful to be plausible contenders for global dominance will probably number in the dozens, or possibly in the hundreds; they certainly will not number in the millions. With so few individuals from among which to select the "fittest," it seems safe to say that the process of natural selection will be inefficient in promoting the fitness for survival of the dominant global self-prop systems. It should also be noted that among biological organisms, species that consist of a relatively small number of large individuals are more vulnerable to extinction than species that consist of a large number of small individuals. Though the analogy between biological organisms and self-propagating systems of human beings is far from perfect, still the prospect for viability of a world-system based on the dominance of a few global self-prop systems does not look encouraging.
The second difference is that in the absence of rapid, worldwide transportation and communication, the breakdown or the destructive action of a small-scale self-prop system has only local repercussions. Outside the limited zone where such a self-prop system has been active there will be other self-prop systems among which the process of evolution through natural selection will continue. But where rapid, worldwide transportation and communication have led to the emergence of global self-prop systems, the breakdown or the destructive action of any one such system can shake the whole world-system. Consequently, in the process of trial and error that is evolution through natural selection, it is highly probable that after only a relatively small number of "trials" resulting in "errors," the world-system will break down or will be so severely disrupted that none of the world's larger or more complex self-prop systems will be able to survive. Thus, for such self-prop systems, the trial-and-error process comes to an end; evolution through natural selection cannot continue long enough to create global self-prop systems possessing the subtle and sophisticated mechanisms that prevent destructive internal competition within complex biological organisms.
Meanwhile, fierce competition among global self-prop systems will have led to such drastic and rapid alterations in the Earth's climate, the composition of its atmosphere, the chemistry of its oceans, and so forth, that the effect on the biosphere will be devastating. In Part IV of the present chapter we will carry this line of inquiry further: We will argue that if the development of the technological world-system is allowed to proceed to its logical conclusion, then in all probability the Earth will be left a dead planet - a planet on which nothing will remain alive except, maybe, some of the simplest organisms - certain bacteria, algae, etc. - that are capable of surviving under extreme conditions.
The theory we've outlined here provides a plausible explanation for the so-called Fermi Paradox. It is believed that there should be numerous planets on which technologically advanced civilizations have evolved, and which are not so remote from us that we could not by this time have detected their radio transmissions. The Fermi Paradox consists in the fact that our astronomers have never yet been able to detect any radio signals that seem to have originated from an intelligent extraterrestrial source.
According to Ray Kurzweil, one common explanation of the Fermi Paradox is "that a civilization may obliterate itself once it reaches radio capability." Kurzweil continues: "This explanation might be acceptable if we were talking about only a few such civilizations, but [if such civilizations have been numerous], it is not credible to believe that every one of them destroyed itself." Kurzweil would be right if the self-destruction of a civilization were merely a matter of chance. But there is nothing implausible about the foregoing explanation of the Fermi Paradox if there is a process common to all technologically advanced civilizations that consistently leads them to self-destruction. Here we've been arguing that there is such a process.
Comment by ImmortalRationalist on Theories of Pain · 2018-09-03T03:10:41.278Z · LW · GW

One perspective on pain is that it is ultimately caused by the less-than-ideal Darwinian design of the brain. Essentially, we experience pain and other forms of suffering for the same reason that we have backwards retinas. Other proposed systems, such as David Pearce's gradients of bliss, would accomplish the same things as pain without any suffering involved.

Comment by ImmortalRationalist on Open Thread September 2018 · 2018-09-03T03:05:28.956Z · LW · GW

Should the mind projection fallacy actually be considered a fallacy? Being unable to imagine a scenario in which something is possible does seem to be genuine Bayesian evidence that it is impossible, just weak Bayesian evidence. Being unable to imagine a scenario in which 2+2=5, for instance, could be considered evidence that 2+2 ever equaling 5 is impossible.

Comment by ImmortalRationalist on Questions about AGI's Importance · 2017-11-02T18:00:08.274Z · LW · GW

Here is a somewhat relevant video.

Comment by ImmortalRationalist on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-14T02:07:41.285Z · LW · GW

This LessWrong Survey had the lowest turnout since Scott's original survey in 2009

What is the average turnout per survey, and what has the turnout been year by year?

Comment by ImmortalRationalist on Fish oil and the self-critical brain loop · 2017-09-25T01:07:24.521Z · LW · GW

Does anyone here know any ways of dealing with brain fog and sluggish cognitive tempo?

Comment by ImmortalRationalist on Stupid Questions September 2017 · 2017-09-17T03:36:35.139Z · LW · GW

What is the probability that induction works?

Comment by ImmortalRationalist on What Are The Chances of Actually Achieving FAI? · 2017-08-19T02:11:10.309Z · LW · GW

On a related question: if Unfriendly Artificial Intelligence is developed, how "unfriendly" should we expect it to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be a UAI that actively tortures humanity, but I can't think of many scenarios in which that would occur.

Comment by ImmortalRationalist on Looking for ideas about Epistemology related topics · 2017-07-30T10:13:09.457Z · LW · GW

Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that "induction works" has a sufficiently large prior probability, and that there exists some single large ordinal that is well-ordered.

Comment by ImmortalRationalist on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-24T14:10:58.553Z · LW · GW

If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Comment by ImmortalRationalist on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-21T03:34:36.463Z · LW · GW

Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Comment by ImmortalRationalist on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-21T03:32:30.752Z · LW · GW

Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need either to show that the prior probability is sufficiently large, or to come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction together with a sufficiently large prior probability that memory works, but that runs into the same problems mentioned previously.
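
A minimal numerical sketch of that first problem, with a made-up Bayes factor standing in for "all the evidence you have" (the numbers are illustrative, not from anywhere):

```python
def posterior(prior: float, bayes_factor: float) -> float:
    """Posterior probability after updating on evidence worth the given Bayes factor."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# Suppose your lifetime evidence for "Induction works" amounts to a
# Bayes factor of 10^6 (an arbitrary stand-in value).
evidence = 1e6

for prior in (0.5, 1e-3, 1e-9, 1e-20):
    print(f"prior={prior:.0e}  posterior={posterior(prior, evidence):.3g}")

# prior=5e-01  posterior ~ 1
# prior=1e-03  posterior ~ 0.999
# prior=1e-09  posterior ~ 0.001
# prior=1e-20  posterior ~ 1e-14
# Any fixed, finite Bayes factor is cancelled by a prior that is small enough.
```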

Comment by ImmortalRationalist on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-20T10:42:38.810Z · LW · GW

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

Comment by ImmortalRationalist on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-20T10:39:25.744Z · LW · GW

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Have any ways been found yet to justify belief in either of these two things without faith?

Comment by ImmortalRationalist on Open thread, July 10 - July 16, 2017 · 2017-07-14T20:30:36.924Z · LW · GW

Eliezer wrote this article a few years ago about the two things that rationalists need faith to believe. Has any progress been made in finding justifications for either of them that do not require faith?

Comment by ImmortalRationalist on Becoming stronger together · 2017-07-14T20:27:09.516Z · LW · GW

We guess we are around the LW average.

What would you estimate to be the LW average?

Comment by ImmortalRationalist on The Internet as an existential threat · 2017-07-14T19:57:24.401Z · LW · GW

Although a sufficiently advanced artificial superintelligence could probably prevent something like the scenario discussed in this article from occurring.

Comment by ImmortalRationalist on The Internet as an existential threat · 2017-07-14T19:56:15.732Z · LW · GW

Ted Kaczynski wrote about something similar to this in Industrial Society And Its Future.

We distinguish between two kinds of technology, which we will call small-scale technology and organization-dependent technology. Small-scale technology is technology that can be used by small-scale communities without outside assistance. Organization-dependent technology is technology that depends on large-scale social organization. We are aware of no significant cases of regression in small-scale technology. But organization-dependent technology DOES regress when the social organization on which it depends breaks down. Example: When the Roman Empire fell apart the Romans’ small-scale technology survived because any clever village craftsman could build, for instance, a water wheel, any skilled smith could make steel by Roman methods, and so forth. But the Romans’ organization-dependent technology DID regress. Their aqueducts fell into disrepair and were never rebuilt. Their techniques of road construction were lost. The Roman system of urban sanitation was forgotten, so that not until rather recent times did the sanitation of European cities equal that of Ancient Rome.

Comment by ImmortalRationalist on Open thread, July 10 - July 16, 2017 · 2017-07-14T01:02:58.395Z · LW · GW

Does it make more sense to sign up for cryonics at Alcor or the Cryonics Institute?

Comment by ImmortalRationalist on [deleted post] 2017-07-06T11:42:28.139Z

If you are a consequentialist, it's the exact same calculation you would use if happiness were your goal, just with different criteria to determine what constitutes "good" and "bad" world states.

Comment by ImmortalRationalist on Dissolving the Fermi Paradox (Applied Bayesianism) · 2017-07-03T23:23:25.416Z · LW · GW

I agree with the conclusion that the Great Filter is more likely behind us than ahead of us. Some explanations of the Fermi Paradox, such as AI disasters or advanced civilizations retreating into virtual worlds, do not seem to fully explain the Fermi Paradox. For AI disasters, for instance, even if an artificial superintelligence destroyed the species that created it, the artificial superintelligence would likely colonize the universe itself. If some civilizations become sufficiently advanced but choose not to colonize for whatever reason, there would likely be at least some civilizations that would.

Comment by ImmortalRationalist on Priors Are Useless · 2017-07-03T23:18:51.307Z · LW · GW

But what exactly constitutes "enough data"? With any finite amount of data, couldn't it be cancelled out if your prior probability is small enough?
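
To put that worry in symbols (a sketch; $H$ is the hypothesis and $B$ is the total Bayes factor of all the data):

$$
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot B,
\qquad \text{so} \qquad
P(H \mid E) < \tfrac{1}{2} \ \text{ whenever } \ P(H) < \frac{1}{1+B}.
$$

So for any finite body of data there is a prior small enough to keep the posterior below any threshold; nothing inside the updating rule itself rules such a prior out.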

Comment by ImmortalRationalist on Idea for LessWrong: Video Tutoring · 2017-07-03T12:23:05.134Z · LW · GW

effective altruist youtubers

Such as?

Comment by ImmortalRationalist on Any Christians Here? · 2017-07-01T21:56:35.314Z · LW · GW

Believing in a soul that departs to the afterlife would seem to make cryonics pointless. What I am asking is, are there Christians here that believe in an afterlife and a soul, but plan on being cryopreserved regardless?

Comment by ImmortalRationalist on Any Christians Here? · 2017-07-01T12:21:16.120Z · LW · GW

For any Christians here on LessWrong, are you currently or do you plan on signing up for cryonics? If so, how do you reconcile being a cryonicist with believing in a Christian afterlife?

Comment by ImmortalRationalist on Bring up Genius · 2017-07-01T11:59:01.897Z · LW · GW

TL;DR: In the study, a number of White and Black children were adopted into upper-middle-class homes in Minnesota, and the researchers had the adopted children take IQ tests at age 7 and age 17. What they found is that the Black children consistently scored lower on the IQ tests, even when controlling for education and upbringing. Basically, the study suggests that IQ is to an extent genetic, and that the population genetics of different ethnic groups are a contributing factor to differences in average IQ and achievement.

Comment by ImmortalRationalist on Idea for LessWrong: Video Tutoring · 2017-06-29T20:30:40.828Z · LW · GW

Channels that make videos on topics similar to those covered in the Sequences.

Comment by ImmortalRationalist on Angst, Ennui, and Guilt in Effective Altruism · 2017-06-29T20:29:35.876Z · LW · GW

Are there any 2017 LessWrong surveys planned?

Comment by ImmortalRationalist on Bring up Genius · 2017-06-29T18:18:19.871Z · LW · GW

Is anyone here familiar with the Minnesota Transracial Adoption Study? Any opinions on it?

Comment by ImmortalRationalist on Idea for LessWrong: Video Tutoring · 2017-06-29T18:13:30.874Z · LW · GW

I'm surprised that there aren't any active YouTube channels with LessWrong-esque content, or at least none that I am aware of.

Comment by ImmortalRationalist on Why I think worse than death outcomes are not a good reason for most people to avoid cryonics · 2017-06-11T18:01:33.272Z · LW · GW

Avoiding cryonics because of possible worse than death outcomes sounds like a textbook case of loss aversion.

Comment by ImmortalRationalist on Philosophical Parenthood · 2017-06-11T17:50:10.624Z · LW · GW

Ted Kaczynski wrote something similar to this in Industrial Society And Its Future, albeit with different motivations.

  204. Revolutionaries should have as many children as they can. There is strong scientific evidence that social attitudes are to a significant extent inherited. No one suggests that a social attitude is a direct outcome of a person’s genetic constitution, but it appears that personality traits are partly inherited and that certain personality traits tend, within the context of our society, to make a person more likely to hold this or that social attitude. Objections to these findings have been raised, but the objections are feeble and seem to be ideologically motivated. In any event, no one denies that children tend on the average to hold social attitudes similar to those of their parents. From our point of view it doesn’t matter all that much whether the attitudes are passed on genetically or through childhood training. In either case they ARE passed on.
  205. The trouble is that many of the people who are inclined to rebel against the industrial system are also concerned about the population problems, hence they are apt to have few or no children. In this way they may be handing the world over to the sort of people who support or at least accept the industrial system. To insure the strength of the next generation of revolutionaries the present generation should reproduce itself abundantly. In doing so they will be worsening the population problem only slightly. And the important problem is to get rid of the industrial system, because once the industrial system is gone the world’s population necessarily will decrease (see paragraph 167); whereas, if the industrial system survives, it will continue developing new techniques of food production that may enable the world’s population to keep increasing almost indefinitely.
Comment by ImmortalRationalist on We are the Athenians, not the Spartans · 2017-06-11T17:40:48.484Z · LW · GW

I remember a while ago Eliezer wrote this article, titled Bayesians vs. Barbarians. In it, he describes how in a conflict between rationalists and barbarians, or, in your analogy, Athenians and Spartans, the barbarians/Spartans will likely win. In the world today, low-IQ individuals are reproducing at far higher rates than high-IQ individuals, so they are "winning" in an evolutionary sense. Having universalist, open, trusting values is not necessarily a bad thing in itself, but it should not be taken to such an extent that this altruism becomes pathological and leads to the protracted suicide of the rationalist community.

Comment by ImmortalRationalist on Open thread, June 5 - June 11, 2017 · 2017-06-07T00:47:54.421Z · LW · GW

Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?

Comment by ImmortalRationalist on Interview on IQ, genes, and genetic engineering with expert (Hsu) · 2017-06-07T00:26:11.185Z · LW · GW

What is the general consensus on LessWrong regarding Race Realism?

Comment by ImmortalRationalist on Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument · 2017-03-27T14:07:35.404Z · LW · GW

This, and find better ways to optimize power efficiency.

Comment by ImmortalRationalist on The Practical Argument for Free Will · 2017-03-27T14:00:10.474Z · LW · GW

How do you even define free will? It seems like a poorly defined concept in general, and more or less meaningless. The notion of free will that people talk about seems to be little more than a glorified combination of determinism and randomness.

Comment by ImmortalRationalist on New Philosophical Work on Solomonoff Induction · 2016-09-30T04:54:12.213Z · LW · GW

But why should the probability for lower-complexity hypotheses be any higher?

Comment by ImmortalRationalist on New Philosophical Work on Solomonoff Induction · 2016-09-29T20:20:27.302Z · LW · GW

But in an infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, as opposed to each successive hypothesis having an arbitrary complexity level?
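
One piece of the standard answer, sketched: any prior over infinitely many hypotheses can assign probability at least $\varepsilon > 0$ to only finitely many of them, or the total would exceed 1, so the probabilities must decay along some enumeration; that much is forced. Tying the decay to complexity specifically is Solomonoff's choice of enumeration, via prefix-free program lengths:

$$
P(h) = 2^{-\ell(h)}, \qquad \sum_h 2^{-\ell(h)} \le 1 \quad \text{(Kraft's inequality)},
$$

where $\ell(h)$ is the length in bits of the shortest program for $h$ on a prefix-free universal machine. On this picture, "simpler hypotheses are more probable" is a property of the chosen prior rather than a theorem.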

Comment by ImmortalRationalist on New Philosophical Work on Solomonoff Induction · 2016-09-29T08:46:57.211Z · LW · GW

How is it that Solomonoff Induction, and by extension Occam's Razor, is justified in the first place? Why is it that hypotheses with higher Kolmogorov complexity are less likely to be true than those with lower Kolmogorov complexity? If it is justified by the fact that it has "worked" in the past, does that not require Solomonoff induction to justify that it has worked, in the sense that you would need to verify that your memories are true, and thus involve circular reasoning?

Comment by ImmortalRationalist on Open Thread Feb 29 - March 6, 2016 · 2016-03-06T16:58:17.054Z · LW · GW

With transhumanist technology, what is the probability that any human alive today will live forever, and not just for thousands or millions of years? I assume the probability is extremely small but non-zero.

Comment by ImmortalRationalist on Stupid Questions May 2015 · 2015-05-19T08:00:17.203Z · LW · GW

Also, how do we know when the probability surpasses 50%? Couldn't the prior probability of the sun rising tomorrow be astronomically small, so that Bayesian updates on the evidence of past sunrises merely make the probability slightly less astronomically small?

Comment by ImmortalRationalist on Stupid Questions May 2015 · 2015-05-13T04:24:37.710Z · LW · GW

How do we determine our "hyper-hyper-hyper-hyper-hyperpriors"? Before we have updated our priors any number of times, is there any way to calculate the probability of something when we have no data to support any conclusion?

Comment by ImmortalRationalist on How to sign up for Alcor cryo · 2015-05-12T08:05:20.946Z · LW · GW

Plastination is one technology you might be interested in.

Comment by ImmortalRationalist on Stupid Questions May 2015 · 2015-05-12T07:51:58.966Z · LW · GW

The money you would have spent on giving money to a beggar might be better spent on something that will decrease existential risk or contribute to transhumanist goals, such as donating to MIRI or the Methuselah Foundation.

Comment by ImmortalRationalist on Stupid Questions May 2015 · 2015-05-12T07:47:25.319Z · LW · GW

Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously, i.e. the sun rising before, increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?
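
For the record, the textbook answer to the literal question is Laplace's rule of succession, which makes one such "first hyperprior" explicit: assume a uniform prior over the sun's unknown propensity $p$ to rise. Then, after $n$ observed sunrises,

$$
P(\text{rise}_{n+1} \mid n \text{ sunrises}) = \int_0^1 p \cdot (n+1)\, p^n \, dp = \frac{n+1}{n+2},
$$

since the posterior density of $p$ is $(n+1)p^n$. Whether the uniform hyperprior is itself justified is, of course, exactly the question being asked.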