Posts

Memetic downside risks: How ideas can evolve and cause harm 2020-02-25T19:47:18.237Z
Information hazards: Why you should care and what you can do 2020-02-23T20:47:39.742Z
Hell Must Be Destroyed 2018-12-06T04:11:19.417Z
State-Space of Background Assumptions 2015-07-29T00:22:37.952Z

Comments

Comment by algekalipso on Subagents, introspective awareness, and blending · 2022-03-22T00:24:08.767Z · LW · GW

Also see: The Symmetry Theory of Valence: 2020 Presentation

Comment by algekalipso on Mental health benefits and downsides of psychedelic use in ACX readers: survey results · 2021-11-09T01:51:56.609Z · LW · GW

For people who are planning on taking psychedelics (I'm not suggesting they do, but if they will anyway) or who have already done so: perhaps consider writing a high-quality trip report. They are super rare, and rationalist-informed trip reports might be excellent sources of research leads for figuring out how the brain works.

For inspiration, perhaps read: Guide to Writing Rigorous Reports of Exotic States of Consciousness.

Also, I recommend reading "The Grand Illusion" by Steven Lehar for some excellent pointers for how psychedelic experiences can legitimately inform our understanding of consciousness. Here is a writeup I made about his life's work and how it was informed by his (very rare) rational psychonautics.

For another example of rational synthesis of psychedelic phenomenology see: The Hyperbolic Geometry of DMT Experiences (@Harvard Science of Psychedelics Club) or this talk about mapping high-energy states of consciousness delivered at an ACX online meetup.

Finally, consider submitting a datapoint for the Tracer Tool. More info here.

Cheers!

Comment by algekalipso on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T22:24:20.777Z · LW · GW

This is substantiated by data in "Logarithmic Scales of Pleasure and Pain" (quote):

Birth of children

I have heard a number of mothers and fathers say that having kids was the best thing that ever happened to them. The survey showed this was a very strong pattern, especially among women. In particular, a lot of the reports deal with the very moment in which they held their first baby in their arms for the first time. Some quotes to illustrate this pattern:

The best experience of my life was when my first child was born. I was unsure how I would feel or what to expect, but the moment I first heard her cry I fell in love with her instantly. I felt like suddenly there was another person in this world that I cared about and loved more than myself. I felt a sudden urge to protect her from all the bad in the world. When I first saw her face it was the most beautiful thing I had ever seen. It is almost an indescribable feeling. I felt like I understood the purpose and meaning of life at that moment. I didn’t know it was possible to feel the way I felt when I saw her. I was the happiest I have ever been in my entire life. That moment is something that I will cherish forever. The only other time I have ever felt that way was with the subsequent births of my other two children. It was almost a euphoric feeling. It was an intense calm and contentment.
—————
I was young and had a difficult pregnancy with my first born. I was scared because they had to do an emergency c-section because her health and mine were at risk. I had anticipated and thought about how the moment would be when I finally got to hold my first child and realize that I was a mother. It was unbelievably emotional and I don’t think anything in the world could top the amount of pleasure and joy I had when I got to see and hold her for the first time.
—————
I was 29 when my son was born. It was amazing. I never thought I would be a father. Watching him come into the world was easily the best day of my life. I did not realize that I could love someone or something so much. It was at about 3am in the morning so I was really tired. But it was wonderful nonetheless.
—————
I absolutely loved when my child was born. It was a wave of emotions that I haven’t felt by anything before. It was exciting and scary and beautiful all in one.

No luck for anti-natalists… the super-strong drug-like effects of having children will presumably continue to motivate most humans to reproduce no matter how strong the ethical case against doing so may be. Coming soon: a drug that makes you feel like “you just had 10,000 children”.

Comment by algekalipso on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2019-11-30T05:24:38.013Z · LW · GW

The histogram of CSHW amplitudes seems to have very little information content, while the entire matrix of just-noticeable-differences of our experience seems to have a whole lot of information. If CSHWs are so important to determine a "brain state", where is all the missing information?

Two points here. First, according to the theory (as Mike points out), the overall "mood" of the state is largely encoded in the low frequency harmonics, while the higher frequency ones are more important for semantic information. In a sense, you can think of the lower frequency harmonics as creating a set of buckets in which to put, juggle, and recombine the information provided by the higher frequency harmonics. Hence, while the specific information content of the experience might require a very fine level of resolution, both the valence and the broad information-processing steps might not. And second, there is more to the CSHWs than just the histogram of amplitudes. There is also a matrix of phase-locking relations between them, which increases the overall information content by a large amount.
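
To make the second point more concrete, here is a minimal sketch in Python (with made-up stand-in data and harmonic counts, not QRI's actual analysis pipeline) of how a matrix of pairwise phase-locking values carries many more degrees of freedom than the histogram of amplitudes alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n_harmonics, n_samples = 10, 1000

# Stand-in phase traces and amplitudes for n_harmonics harmonics (made-up data).
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_harmonics, n_samples))
amplitudes = rng.rayleigh(scale=1.0, size=n_harmonics)

# The histogram of amplitudes compresses the state into a handful of numbers...
amplitude_histogram, _ = np.histogram(amplitudes, bins=5)

# ...while the phase-locking value (PLV) between every pair of harmonics adds an
# n_harmonics x n_harmonics matrix of extra degrees of freedom.
phase_diffs = phases[:, None, :] - phases[None, :, :]
plv = np.abs(np.exp(1j * phase_diffs).mean(axis=-1))

print(amplitude_histogram)   # 5 counts
print(plv.shape)             # (10, 10)
```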

Comment by algekalipso on Neural Annealing: Toward a Neural Theory of Everything (crosspost) · 2019-11-30T05:18:42.596Z · LW · GW

I'd mention that Steven Lehar foreshadowed the paradigm in his Directional Harmonic Theory of neurocomputation. I recommend reading his book "The Grand Illusion" for abundant phenomenological data in favor of this flavor of neurocomputation.

Comment by algekalipso on Subagents, introspective awareness, and blending · 2019-03-10T06:41:52.370Z · LW · GW

Definitely. I'll probably be quoting some of your text in articles on Qualia Computing soon, in order to broaden the bridge between LessWrong-consumable media and consciousness research.

Of all the articles linked, perhaps the best place to start would be the Pseudo-time Arrow. Very curious to hear your thoughts about it.

Comment by algekalipso on Subagents, introspective awareness, and blending · 2019-03-10T06:37:05.176Z · LW · GW

Sure! It is "invariance under an active transformation". The more energy is trapped in phenomenal spaces that are invariant under active transformations, the more blissful the state seems to be (see "analysis" section of this article).

Comment by algekalipso on Subagents, introspective awareness, and blending · 2019-03-03T11:36:32.228Z · LW · GW

Really great post!

Andrés (Qualia Computing) here. Let me briefly connect your article with some work that QRI has done.

First, we take seriously the view of a "moment of experience" and study the contents of such entities. In Empty Individualism, every observer is a "moment of experience" and there is no continuity from one moment to the next; the illusion is caused by the recursive and referential way the content of experience is constructed in brains. We also certainly agree that you can be aware of something without being aware of being aware of it. As I will get to, this is an essential ingredient in the way subjective time is constructed.

The concept of blending is related to our concept of "The Tyranny of the Intentional Object". Indeed, some people are far more prone to confusing logical or emotional thoughts for revealed truth; introspective ability (which can be explained as the rate at which awareness of having been aware arises) varies between people and is trainable to an extent. People who are systematizers can develop logical ontologies of the world that feel inherently true, just as empathizers can experience a made-up world of interpersonal references as revealed truth. You could describe this difference in terms of whether blending happens more frequently with logical or emotional structures. But empathizers and systematizers (and people high on both traits!) can, in addition, be highly introspective, meaning that they recognize those sensations as aspects of their own mind.

The fact that each moment of experience can incorporate informational traces of previous ones allows the brain to construct moments of experience with all kinds of interesting structures. Of particular note is what happens when you take a psychedelic drug. The "rate of qualia decay" lowers due to a generalization of what in visual phenomenology is called "tracers". The disruption of inhibitory control signals from the cortex leads to the cyclical activation of the thalamus* and thus the "re-living" of previous contents of experience in high-frequency repeating patterns (see "tracers" section of this article). On psychedelics, each moment of experience is "bigger". You can formalize this by representing each moment of experience as a connected network, where each node is a quale and each edge is a local binding relationship of some sort (whether one is blending or not, may depend on the local topology of the network). In the structure of the network you can encode the information pertaining to many constructed subagents; phenomenal objects that feel like "distinct objects/realities/channels" would be explained in terms of clusters of nodes in the network (e.g. subsets of nodes such that the clustering coefficient within them is much larger than the average clustering coefficient of different subsets of nodes of similar size). As an aside, dissociatives, in particular, drastically change the size of clusters, which phenomenally is experienced as "being aware of more than one reality at once".
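
To illustrate the network picture with a toy example (hypothetical nodes and edges, not actual brain data), here is roughly how the clustering-coefficient criterion for picking out phenomenal objects could look:

```python
import itertools
import networkx as nx

# Toy "moment of experience" (hypothetical data): nodes are qualia, edges are local
# binding relations. Two densely bound cliques stand in for two distinct phenomenal
# objects, joined by a single weak binding edge.
G = nx.Graph()
G.add_edges_from(itertools.combinations([0, 1, 2, 3, 4], 2))      # object A (clique)
G.add_edges_from(itertools.combinations([10, 11, 12, 13], 2))     # object B (clique)
G.add_edge(4, 10)                                                  # weak bridge

object_a = [0, 1, 2, 3, 4]
mixed_subset = [2, 3, 4, 10, 11]  # a same-sized subset straddling both objects

# A candidate phenomenal object shows much higher internal clustering than an
# arbitrary subset of similar size (the criterion sketched in the text).
print(nx.average_clustering(G.subgraph(object_a)))       # 1.0
print(nx.average_clustering(G.subgraph(mixed_subset)))   # noticeably lower
```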

You can encode time-structure into the network by looking at its implicit causality, which gives rise to what we call a pseudo-time arrow. This model can account for all of the bizarre and seemingly unphysical experiences of time people report on psychedelics. The linked article explains in detail how, e.g., thought-loops, moments of eternity, and time branching can be expressed in the network and emerge recursively from calls to previous clusters of sensations (as information traces).

Even more strange, perhaps, is the fact that a slow rate of qualia decay can give rise to unusual geometry. In particular, if you saturate the recursive calls and bind together a network with a very high branching factor, you get a hyperbolic space (cf. The Hyperbolic Geometry of DMT Experiences: Symmetries, Sheets, and Saddled Scenes).
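
Here is a minimal sketch of the geometric intuition, using a toy branching-factor count rather than a model of the phenomenology itself: exponential growth in the number of reachable nodes is the signature of hyperbolic, rather than Euclidean, geometry.

```python
# Toy counting argument (illustrative numbers only): in a tree with branching factor b,
# the number of nodes within graph distance r of the root grows exponentially in r,
# matching the exponential volume growth of balls in hyperbolic space, whereas a
# Euclidean 3D ball only grows polynomially (~ r**3).
def nodes_within_radius(branching_factor: int, radius: int) -> int:
    return sum(branching_factor ** d for d in range(radius + 1))

for r in range(1, 7):
    print(r, nodes_within_radius(4, r), r ** 3)
```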

That said, perhaps the most important aspect of the investigation has been to encounter a deep connection between felt sense of wellbeing (i.e. "emotional valence") and the structure of the network. From your article:

For instance, you might notice sensations in your body that were associated with the emotion, and let your mind generate a mental image of what the physical form of those sensations might look like. Then this set of emotions, thoughts, sensations, and visual images becomes “packaged together” in your mind, unambiguously designating it as a mental object.

The claim we would make is that the very way in which this packaging happens gives rise to pleasant or unpleasant mental objects, and this is determined by the structure (rather than "semantic content") of the experience. Evolution made it such that thoughts that refer to things that are good for the inclusive fitness of our genes get packaged in more symmetrical harmonious ways.

The above is, however, just a partial explanation. In order to grasp the valence effects of meditation and psychedelics, it will be important to take into account a number of additional paradigms of neuroscience. I recommend Mike Johnson's articles: A Future for Neuroscience and The Neuroscience of Meditation. The topic is too broad and complex for me to cover here right now, but I would advance the claim that (1) when you "harmonize" the introspective calls of previously-experienced qualia you increase the valence, and (2) the process can lead to "annealing", where the internal structure of the moments of experience is highly symmetrical, and for reasons we currently don't understand, this appears to co-occur in a 1-1 fashion with high valence.

I look forward to seeing more of your thoughts on meditation (and hopefully psychedelics, too, if you have personal experience with them).

*The specific brain regions mentioned are a likely mechanism of action but may turn out to be wrong as further empirical facts come in. The general algorithmic structure of psychedelic effects, though, in which every sensation "feels like it lasts longer", will have the same downstream implications for the construction of the structure of moments of experience either way.

Comment by algekalipso on State your physical account of experienced color · 2016-03-31T03:36:18.082Z · LW · GW

I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. In "standard" naïve realism, people find it hard to dissociate their experiences from an existing mind-independent world, and so go on to interpret everything as "seeing the world directly, nothing else, nothing more." Naïve realists will interpret their experiences as direct, unmediated impressions of the real world.

Of course this is a problematic view, and there are killer arguments against it. For instance, hallucinations. However, naïve realists can still come back and say that you are talking about cases of "misapprehension", where you don't really perceive the world directly anymore. That does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument in a rational way. If you need to explain hallucinations as "failed representations of true objects", you no longer need to also restate one's previous belief in "perceiving the world directly." Now you end up having two ontologies instead of one: inner representations and also direct perception. And yet, you only need one: inner representations.

Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself, this is reasonable, since the data can be accounted for with no problem. But when I mention LSD and dreams, suddenly those get placed in another category, like a "bug" in one's mind. So here you have two ontologies, where you can certainly explain it all with just one.

Namely, green is a particular quale, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light. To instead postulate that this is in fact just a "bug" of the original function, but that the original function is in and of itself what green is, simply adds a second ontology on top of one which, taken on its own, can already account for the phenomena.

Comment by algekalipso on State your physical account of experienced color · 2016-03-20T08:40:29.630Z · LW · GW

With the aid of qualia computing and a quantum computer, perhaps ;-)

Comment by algekalipso on State your physical account of experienced color · 2016-03-20T05:49:20.931Z · LW · GW

Both you and prase seem to be missing the point. The experience of green has nothing to do with wavelengths of light. Wavelengths of light are completely incidental to the experience. Why? Because you can experience the qualia of green thanks to synesthesia. Likewise, if you take LSD at a sufficient dose, you will experience a lot of colors that are unrelated to the particular input your senses are receiving. Finally, you can also experience such color in a dream. I did that last night.

The experience of green is not the result of information-processing that works to discriminate between wavelengths of light. Instead, the experience of green was recruited by natural selection to be part of an information-processing system that discriminates between wavelengths of light. If it had been more convenient, less energetically costly, more easily accessible in the neighborhood of exploration, etc. evolution would have recruited entirely different qualia in order to achieve the exact same information-processing tasks color currently takes part in.

In other words, stating which stimuli trigger the phenomenology is not going to help at all in elucidating the very nature of color qualia. For all we know, other people may experience feelings of heat and cold instead of colors (locally bound to objects in their 2.5D visual field), and still behave reasonably well as judged by outside observers.

Comment by algekalipso on State your physical account of experienced color · 2016-03-20T05:37:17.134Z · LW · GW

Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.

[...]

Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.

You are hitting the nail on the head. I don't expect people on LessWrong to understand this for a while, though. There is actually a good reason why the cognitive style of rationalists, at least statistically, is particularly ill-suited for making sense of the properties of subjective experience and how they constrain the range of possible philosophies of mind. The main problem is the axis of variability of "empathizer vs. systematizer." LessWrong is built on a highly systematizing meme-plex that attracts people who have a motivational architecture particularly well suited for problems that require systematizing intelligence.

Unfortunately, recognizing that one's consciousness is ontologically unitary requires a lot of introspection and trusting one's deepest understanding against the conclusions that one's working ontology suggests. Since LessWrongers have been trained to disregard their own intuitions and subjective experience when thinking about the nature of reality, it makes sense that the unity of consciousness will be a blind spot for as long as we don't come up with experiments that can show the causal relevance of such unity. My hope is to find a computational task that consciousness can achieve at a runtime complexity that would be impossible with a classical neural network implemented under the known physical constraints of the brain. However, I'm not very optimistic this will happen any time soon.

The alternative is to lay out specific testable predictions involving the physical implementation of consciousness in the brain. I recommend reading David Pearce's physicalism.com, which outlines an experiment that would convince even an eternal (but rational) quantum mind skeptic that the brain is indeed a quantum computer.

Comment by algekalipso on State your physical account of experienced color · 2016-03-20T05:24:27.875Z · LW · GW

I am super late to the party. But I want to say that I agree with you and I find your line of research interesting and exciting. I myself am working in a very similar space.

I run a blog called Qualia Computing. The main idea is that qualia actually play a causally and computationally relevant role. In particular, they are used to solve Constraint Satisfaction Problems with the aid of phenomenal binding. Here is the "about" of the site:

Qualia Computing? In brief, epiphenomenalism cannot be true. Qualia, it turns out, must have a causally relevant role in forward-propelled organisms, for otherwise natural selection would have had no way of recruiting it. I propose that the reason why consciousness was recruited by natural selection is found in the tremendous computational power that it affords to the real-time world simulations it instantiates through the use of the nervous system. More so, the specific computational horse-power of consciousness is phenomenal binding – the ontological union of disparate pieces of information by becoming part of a unitary conscious experience that synchronically embeds spatiotemporal structure. While phenomenal binding is regarded as a mere epiphenomenon (or even as a totally unreal non-happening) by some, one needs only look at cases where phenomenal binding (partially) breaks down to see its role in determining animal behavior.

Once we recognize the computational role of consciousness, and the causal network that links it to behavior, a new era will begin. We will (1) characterize the various values of qualia in terms of their computational properties, and (2) systematically explore the state-space of possible conscious experiences.

(1) will enable us to recruit the new qualia varieties we discover thanks to (2) so as to improve the capabilities of our minds. This increased cognitive power will enable us to do (2) more efficiently. This positive-feedback loop is perhaps the most important game-changer in the evolution of consciousness in the cosmos.

We will go from cognitive sciences to actual consciousness engineering. And then, nothing will ever feel the same.

Also, see: qualiacomputing.com/2015/04/19/why-not-computing-qualia/

I'm happy to talk to you. I'd love to see where your research is at.

Comment by algekalipso on The correct response to uncertainty is *not* half-speed · 2016-01-16T20:14:28.420Z · LW · GW

I suspect half-speed is actually a rational decision given some underlying model AnnaSalamon was not explicitly aware of.

For instance, she may intuitively feel that she just passed the hotel. If so, then being extra careful to look for features and landmarks around her that could hint at whether this happened might work best at half speed. Are there fewer hotels around? Is it a residential area? Does the amount of economic activity seem to be increasing or decreasing as she moves in this direction? Then she can turn around and get there faster.

Formalizing the precise model that would make half-speed the rational choice may be a bit complicated. But that's what the Bayesian approach to cognitive sciences would try to do first.
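
As a toy illustration (all parameters invented, not a serious model of the original scenario), here is one way such an underlying model could make half speed come out ahead once the probability of having already passed the hotel is high enough:

```python
# Toy model (all numbers made up) of when half speed beats full speed: with
# probability p you already passed the hotel. Driving slowly makes you more likely
# to notice the cues telling you so, limiting the overshoot; driving fast is better
# if the hotel is still ahead. Everything is measured in minutes of extra driving.
def expected_extra_minutes(slow: bool, p_passed: float) -> float:
    notice_prob = 0.6 if slow else 0.2      # per-minute chance of noticing the overshoot
    minutes_left_if_ahead = 5.0             # remaining drive if the hotel is still ahead

    # Hotel is behind you: drive until you notice (geometric wait), then double back.
    cost_if_passed = 2.0 * (1.0 / notice_prob)
    # Hotel is ahead: half speed simply doubles the remaining driving time.
    cost_if_not_passed = minutes_left_if_ahead if slow else 0.0

    return p_passed * cost_if_passed + (1.0 - p_passed) * cost_if_not_passed

for p in (0.2, 0.5, 0.8):
    print(p, round(expected_extra_minutes(True, p), 2), round(expected_extra_minutes(False, p), 2))
```

With these invented numbers, full speed wins when p is low and half speed wins once p gets large, which is the shape of model the comment has in mind.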

Comment by algekalipso on State-Space of Background Assumptions · 2015-08-24T02:50:02.007Z · LW · GW

Good feedback! In the future I will always add that option. The statistical analysis is trickier, but it can be done :)

Comment by algekalipso on State-Space of Background Assumptions · 2015-08-01T01:43:59.742Z · LW · GW

Thanks for your feedback. I am aiming to have the writeup done by August 8th. You will be able to find it in Qualia Computing.

Comment by algekalipso on State-Space of Background Assumptions · 2015-08-01T01:12:57.694Z · LW · GW

Announcement:

Enough people are continuing to answer the questionnaire that it makes sense to extend the deadline until midnight (California time) on Sunday, August 2nd, 2015.

Thanks for helping! I am aiming to have the writeup with the results ready by August 8th.

Comment by algekalipso on State-Space of Background Assumptions · 2015-07-31T07:48:51.895Z · LW · GW

doing mind-coalescing and decolescing

That is not enough to solve the problem of other minds, as the article explains. The main problem is that when you incorporate a whole brain into your overall brain-mass by connecting to it, you can't be certain whether the other being was conscious to begin with or whether the effect is simply a result of your massively amplified brain.

That's why you need a scheme that allows the other being to solve a puzzle while you are disconnected. The puzzle needs to be such that only a conscious intelligence could solve it. And to actually verify that the entity solved it on its own, you need to reconnect to it and confirm, while merged, that the solution is found there.

Of course you need to make sure that you distract yourself while you are temporarily disconnected, otherwise you may suspect you accidentally solved the phenomenal puzzle on your own.

The solution has a minimum of complexity, and to my knowledge no one else had proposed it before. Derek Parfit, Daniel Kolak, Borges and David Pearce get into some amazing territories that could well lead to a solution of this sort. But they always stay one step short of getting something where the creation of information is a demonstration of another entity actually being conscious.

Comment by algekalipso on State-Space of Background Assumptions · 2015-07-30T05:52:58.763Z · LW · GW

"Consciousness is different than the subject?" This and many other ones are tricky. But I know people who harbor strong opinions about them. In the end, it does not matter vey much that people from all over the place disagree a lot about those questions... that only means they are not really measuring any important latent trait. On the other hand, there are quite a few questions that people disagree on predictably. In other words, they can be used to determine the memetic cluster to which you belong.

Thanks for the heads up. I know of a statistical method to reduce the bias provided by lazy users :)

Comment by algekalipso on Consciousness doesn't exist. · 2015-07-30T05:40:32.879Z · LW · GW

I have a solution to the problems of other minds, actually. But it requires you to recognize that you are conscious yourself, which is not necessarily possible for all people.

Check: physicalism.com
And: qualiacomputing.com/2015/03/31/a-solution-to-the-problem-of-other-minds/

Comment by algekalipso on State-Space of Background Assumptions · 2015-07-29T06:15:28.314Z · LW · GW

I think there are 3 pages.

Comment by algekalipso on State-Space of Background Assumptions · 2015-07-29T01:32:52.440Z · LW · GW

The average time people take to complete the survey is 20 minutes, with most people taking 15 and a long tail of people taking up to several hours, presumably because they went on to do something else for a while and returned to complete it later.

Thanks for mentioning the problem with the links. Fixed.

Comment by algekalipso on The Importance of Sidekicks · 2015-07-29T01:23:15.242Z · LW · GW

This is an awesome, awesome, awesome post! I think you have nailed a few important axes of variance that we usually neglect.

Now, precisely because you are still part of the community and can accept rationalist memes, you are an important sample for learning what the rationalist community is not. At least what it is not necessarily.

Would you, and any other self-identified rationalist sidekick, please fill out this survey?

I am analyzing how personality is related to beliefs about consciousness and memetic affiliations. If only heroes fill out the questionnaire, I may associate traits that are not actually relevant for rationality with rationality.

Personally, I think that the difference you are pointing out ultimately comes down to testosterone, and relatedly, Aspergers.

Comment by algekalipso on Aspergers Survey Re-results · 2015-07-28T10:01:01.942Z · LW · GW

I'm currently running a study on personality and consciousness in the transhumanist community. The questionnaire also inquires into the possible effects of Aspergers on memetic affiliations.

Of course, LessWrongers are an important piece of the puzzle. Please help me by answering this survey:

qualiacomputing.com/2015/07/18/state-space-of-background-assumptions/

Comment by algekalipso on Why the tails come apart · 2014-08-05T06:28:01.711Z · LW · GW

My guess is that there are several variables that are indeed positively correlated throughout the entire range, but are particularly highly correlated at the very top. Why not? I'm pretty sure we can come up with a list.
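
Here is a minimal sketch (synthetic data, purely to show the procedure) of how one could check that guess for any candidate pair of variables:

```python
import numpy as np

# Compare the correlation over the whole range with the correlation restricted to the
# top of one variable. Synthetic bivariate-normal data is used here only to illustrate
# the procedure; for jointly normal variables the top-range correlation typically
# shrinks, so the guess amounts to betting that some real-world pairs behave differently.
rng = np.random.default_rng(1)
n, rho = 100_000, 0.6
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

overall = np.corrcoef(x, y)[0, 1]
top = x > np.quantile(x, 0.99)          # restrict to the top 1% of x
within_top = np.corrcoef(x[top], y[top])[0, 1]
print(round(overall, 3), round(within_top, 3))
```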

Comment by algekalipso on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-22T00:14:27.926Z · LW · GW

Did you know that we already have instances of things that pass the Turing test?

And more surprisingly, that we don't generally consider them conscious?

And the most amazing of all: That they have existed for probably at the very least a hundred thousand years (but possibly much more)?

I am talking about the characters in our dreams.

They fool us into thinking that they are conscious! That they are the subjects of their own worlds just as people presumably are when awake.

You can have a very eloquent conversation with a dream character without ever noticing there is any apparent lack of consciousness. You can even ask them about their own consciousness (I have done so).

The riddle to why this is possible involves a very deep state of affairs that we are scarcely aware of in daily life. Namely, that your phenomenal self is, just as well, a dream character.

Comment by algekalipso on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-22T00:12:25.029Z · LW · GW

sorry

Comment by algekalipso on Belief in Self-Deception · 2013-05-13T03:14:47.407Z · LW · GW

Here: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0046774

Comment by algekalipso on Belief in Self-Deception · 2013-05-02T07:45:55.330Z · LW · GW

This seems to be associated with higher than average testosterone levels. If you inject testosterone into a random man, he will be very prone not to lie and to be overly straightforward.

Comment by algekalipso on Anybody want to join a Math Club? · 2013-04-05T05:59:37.536Z · LW · GW

If you lack an objective, a good goal is to be able to solve national math Olympiad problems in the time allowed for the competitions.

Comment by algekalipso on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics · 2013-03-17T21:04:18.498Z · LW · GW

At this rate it might be very rational to look at ways to modify our cognitive architecture and limbic system to experience long term and sustained attraction and love... rather than hack it via external stimuli.

MDMA is promising when it comes to reviving intimacy between long-term couples. But its neurotoxic profile makes this non-workable for most people. Long-term, sustainable mood enrichers and love enhancers should be developed... this will be much more life-enriching than just rationally learning what relationship style best suits you.

Comment by algekalipso on Arguments against the Orthogonality Thesis · 2013-03-11T03:27:18.978Z · LW · GW

The crux of the disagreement, I think, is in the way we understand the self-assessment of our experience. If consciousness is epiphenomenal or just a different level of description of a purely physical world, this self-assessment is entirely algorithmic and does not disclose anything real about the intrinsic nature of consciousness.

But consciousness is not epiphenomenal, and a purely computational account fails to bridge the explanatory gap. Somehow conscious experience can evaluate itself directly, which still remains a not well understood and peculiar fact about the universe. In addition, as I see it, this needs to be acknowledged to make more progress in understanding both ethics and the relationship between the physical world and consciousness.

Comment by algekalipso on Absolute denial for atheists · 2013-02-26T03:21:41.805Z · LW · GW

No, dude, the correct answer is "because he is a man!"

Comment by algekalipso on Absolute denial for atheists · 2013-02-26T03:01:31.361Z · LW · GW

I think you are not aware of research on acquired taste. It turns out that the effect of particular foods and drinks on psychological states creates some deep subconscious associations. Take this as a clear and striking example:

"A study that investigated the effect of adding caffeine and theobromine (active compounds in chocolate) vs. a placebo to identically-flavored drinks that participants tasted several times, yielded the development of a strong preference for the drink with the compounds.[3]"

I think that's why I do enjoy beer now, even though I thought exactly as you did several years ago. I thought it was a huge collective rationalization, which I still think is a big part of it, especially among teenagers and young adults who like to boast about being strong drinkers and how oh-dear they love alcohol so very much. But grown-up people do drink, say, one beer alone and seem to enjoy it quite a bit. Without the pleasant relaxation that usually follows, though, the taste would not be agreeable. So we see a deep neurological change in the way we process taste.

Comment by algekalipso on Not By Empathy Alone · 2012-12-20T19:14:25.715Z · LW · GW

It is my understanding that outrage is the result of "selective empathy" at best, and VERY often completely lacking in empathy. E.g., when a group of people is outraged at a gay couple for having gay sex: where is the empathy in this case? A victimless crime evoking huge deontological moral self-righteousness and anger.

Comment by algekalipso on A Bayesian Argument for the Resurrection of Jesus · 2012-07-14T11:39:42.796Z · LW · GW

We know that many zealous followers are willing to die for the honor of their leaders. It would not be very surprising to see that happen in early Christianity.

Comment by algekalipso on Timeless Causality · 2012-07-07T04:20:55.435Z · LW · GW

And yet fire as a phenomenon exists at several spatio-temporal coordinates, right? If the observer of consciousness is a property of conscious experience as a physical phenomenon, maybe we should expect to find it wherever consciousness exists.

Comment by algekalipso on Rationality Quotes July 2012 · 2012-07-04T03:00:34.239Z · LW · GW

"Arguing over the meaning of a word nearly always means that you've lost track of the original question."

-- Eliezer Yudkowsky

Comment by algekalipso on Rationality Quotes July 2012 · 2012-07-04T02:54:59.647Z · LW · GW

"'Whereof one cannot speak thereof be silent,' the seventh and final proposition of Wittgenstein’s Tractatus, is to me the most beautiful but also the most errant. 'Whereof one cannot speak thereof write books, and music, and invent new and better terminology through mathematics and science,' something like that, is how I would put it. Or, if one is not predisposed to some such productivity, '. . . thereof look steadfastly and directly into it forever.'"

-- Daniel Kolak, comment on a post by Gordon Cornwall.

Comment by algekalipso on David Pearce on Hedonic Moral realism · 2012-04-13T23:43:43.350Z · LW · GW

It actually sounds to me like CEV will indeed spit it out. It will explain how a better understanding of what we are will lead us to abandon the constraints of the human experience in the search for maximizing the goodness of the universe, a scenario that we would understand if we were smarter, had grown closer together, and had a better grasp of the nature of identity, consciousness, and subjective reward.

Comment by algekalipso on Crowley on Religious Experience · 2012-04-09T22:58:30.307Z · LW · GW

It might not provide a lot of knowledge to the subject who practices mysticism. It does, however, provide the best experience of his or her life.

For the time being, this might not provide a lot of value in the grand scheme of things. However, as we advance into posthumanity, we do want to explore the state-space of possible conscious experiences in a systematic way, so we can design ourselves such that we inhabit the best regions of conscious experience. Mystical practice, therefore, has tremendous long-term potential; having practitioners and scientists interested is crucial if we are indeed to find out more about these states of consciousness.

I think, after all, there is a very pertinent parallel in the LessWrong community: it is called Fun Theory. The fact that mystical experiences can be so outstandingly great and sublime beyond words is a very strong indicator that we will never run out of fun.

Comment by algekalipso on Essay-Question Poll: Dietary Choices · 2011-04-26T06:36:31.986Z · LW · GW

I personally would rather live a good life into my prime and be humanely slaughtered and fed to some higher life form, than never exist at all. For the most part, the animals I eat would not have ever existed had the demand for meat not existed as well.

It seems to me that when you say 'never exist at all' you are bringing a mystic notion of identity into conscious experience. A lot has been written about personal identity and the like, and I would argue that the notion of one's identity as tied to genetic makeup or historical origin is not the most relevant way of approaching the matter. In this way, when you say "I'd prefer to have existed in any case" I ask: point me to who existed. When you reference the life-path of the animal in question, I would point out that you are showing me a collection of conscious experiences. What, if anything, distinguishes these experiences in a fundamental way from otherwise similar experiences originating in other, similar animals? I don't think anything of real relevance.

The idea that somehow, whenever you add another animal into the equation, you are multiplying the number of entities brought into existence is questionable. It does have moral consequences, however. For instance, if multiplying entities were a real possibility, such that giving birth to animals brought new 'beings' into existence, it could be argued that it is preferable to bring two animals into the world, each living 25 years, than to bring only one that lives 50. Assuming that each conscious moment is qualitatively similar in these animals, if you don't believe in the multiplicity of entities, the two scenarios are completely equivalent.

I think that the confusion I point to is very prevalent in animal welfare talk, and I think it contaminates rationality for that matter. I have heard people who put a lot of value on the multiplication of entities argue that massive factory farming is desirable precisely for this reason. They reason that, precisely because you are bringing more 'distinct' life into being, even if in deplorable states, chicken farms are doing something good. If you look at it from a reductionist perspective, you are merely making little brains play the same old plot again and again with slight variations. And the worst part is that the plot is actually painful.

Comment by algekalipso on Why Are Individual IQ Differences OK? · 2011-03-19T00:13:30.403Z · LW · GW

I may now risk the claim: there are no human inequalities, there are only sentient inequalities.