Posts

How does an infovore manage information overload? 2009-08-25T18:54:32.609Z

Comments

Comment by haig on Meditation, insight, and rationality. (Part 2 of 3) · 2013-07-18T22:41:05.902Z · LW · GW

From what I've read of the source writings of the contemplative traditions, of modern neuroscience studies and theories of meditation, and from my own experiences and thoughts on the subject, I've come to view the practice of meditation as serving three distinct but interconnected purposes: 1.) ego loss, 2.) cultivation of compassion, and 3.) experience of non-dual reality.

Ego loss means inhibiting or eliminating the internal self-critic by changing the way you perceive the target of that critic, namely the concept of a stable 'self' that you identify with. Suffering is caused by the struggle of attachment, which is a manifestation of your internal self-critic admonishing your identified 'self' over future desires and past failures. By changing the concept you identify with as your 'self', you remove the object or target of the self-critical process and it fades away, leaving you free to live in the moment unhindered by such suffering. Meditation lessens the activation of what is called the default mode network, a network of neurons extending through several anatomical brain regions whose over-activation is implicated in many mood disorders, including anxiety, depression, and OCD.

Cultivating compassion, generally seen as a way to put yourself in another's shoes and wish others to be free from suffering and to attain happiness, is also a way to achieve those things for yourself. Compassion, once cultivated, recursively extends back onto yourself and allows you to be compassionate about your own life. Being loving and kind to yourself, not in a narcissistic way but in an egoless, purely compassionate way as one sentient being among others, helps you remain in a positive mood and helps others do the same. Mirror neurons and the ability to form a theory of mind are hypothesized to be active in empathic cognition, but they may also prove to be the crucial substructure that allows us to have a theory of our own minds, the seat of the 'self' if you like, and so compassion is both an outward- and an inward-directed process.

Lastly, the experience of non-dual awareness is somewhat like the combination of the previous two effects, but taken further: it instills a deep connection between you, others, and everything else in the universe as one unseparated whole. I've only experienced this sensation twice, but it is overwhelming and life-changing.

In conclusion, meditation helps to develop and sustain these brain processes, but it is not easy, takes a lot of time, and may not even be effective for many people. I'm hoping that affective neuroscience and neurotechnology will progress to the level where meditation is unnecessary for achieving these states (though it can and should be practiced for aesthetic reasons by those so inclined).

Comment by haig on Who Wants To Start An Important Startup? · 2012-08-31T20:56:31.807Z · LW · GW

I wanted to add an insight from Neil Gershenfeld which I think captures how we should frame these problems:

We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world.

He was talking about personal fabrication in this context, but the 'digitization' of the physical world is applicable to the sustainability goals I mentioned. Using operations research, loosely-coupled distributed architectures, nature-inspired routing algorithms, and other tricks of the IT trade applied to natural resources, we can finally transition to a sustainable world.

Comment by haig on Who Wants To Start An Important Startup? · 2012-08-31T20:52:22.546Z · LW · GW

Surprised no one has mentioned anything involving sustainable/clean tech (energy, food, water, materials). This site does stress existential threats, and given that many (most?) societal collapses in the past were precipitated, at least partly, by resource collapse, I'd want to concentrate much of the startup activity on trying to disrupt our short-term, wasteful systems. Large pushes to innovate and disrupt the big four (energy, food, water, materials) would do more than anything else I can think of to improve the condition of our world and minimize the major risks confronting us within the next 100 years (or sooner).

It's not as hopeless as it appears at first glance. Population will reach about 9-10 billion people within 50 years (not much more, due to lower birth rates as developing countries have fewer children and developed countries go into negative population growth), so that is the carrying capacity to aim for. Decoupling the big four from the unpredictability of scarcity, monocrops, climate change, and depletion/destruction, not only by using innovations in the specific domains but by applying advanced information technologies and algorithms (operations research, stigmergic routing, ...), would mark the first time our planet is placed on a secure and sustainable foundation for our basic resource needs. If there is any other large, audacious goal that would change the world more positively than this (other than a positive singularity), I can't think of it.

Comment by haig on Friendly AI and the limits of computational epistemology · 2012-08-11T01:08:59.357Z · LW · GW

OK, so we can say with confidence that humans and other organisms with developed nervous systems experience the world subjectively, maybe not in exactly similar ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyze these systems and see how each subsystem functions, and eventually, with sufficient technology, we'll be able to map the neural correlates that are active in certain environments and that produce certain qualia. Neuroscientists are on that path already. But are only physical nervous systems capable of producing a subjective experience? If we emulate a brain with enough precision, with sufficient input and output to an environment, computationalists assume that it will behave and experience the same as if it were a physical wetware brain. Given this assumption, we conclude that the simulated brain, which is just machine code operating on transistors, has qualia. So now qualia are attributed to a software system. How much can we diverge from this perfect software emulation and still have a system that experiences qualia? From the other end, building a cognitive agent piecemeal in software without reference to biology, what types of dynamics will cause qualia to arise, if at all? The simulated brain is just data, as is Microsoft Windows, but Windows isn't conscious, or so we think. Looking at the electrons moving through the transistors tells us nothing about which running software has qualia and which does not. On the other hand, it might be the case that deeper physics beyond the classical must be involved for the system to have qualia. In that case, classical computers would be unable to produce software that experiences qualia, and machines that exploit quantum properties may be needed. This is still speculative, but then the whole question of qualia is still speculative.

So now, when designing an AI that will learn and grow and behave in accordance with human values, how important are qualia for it to function along those lines? Can an unconscious optimizing algorithm be robust enough to act morally and shape a positive future for humanity? Will an unconscious optimizing algorithm, without the subjectivity that we take for granted, be able to scale up in intelligence to the level we see in biological organisms, let alone humans and beyond, or is subjective experience necessary for the level of intelligence we have? If it is possible, will an optimizing algorithm actually become conscious and experience qualia after a certain threshold, and how would that affect its continued growth?

On a side note, my hypothetical friendly AGI project, one that would more directly guarantee success without speculating about the limits of computation, qualia, or how to safely encode meta-ethics in a recursively optimizing algorithm, would be to grow a brain in a vat, as it were: perhaps neural tissue cultures on biochips with massive interconnects, coupled to either a software or hardware embodiment, with an architecture designed so that its metacognitive processes are hardwired for compassion and empathy. A bodhisattva in a box. Yes, I'm aware of all the fear-mongering regarding anthropomorphized AIs, but I'm willing to argue that the possibility space of potential minds, at least of the ones we are in a position to create from our place in history, is greatly constricted, and that this route may be the best, and possibly the only, way forward.

Comment by haig on Friendly AI and the limits of computational epistemology · 2012-08-10T06:42:05.415Z · LW · GW

That is the $64,000 question.

Comment by haig on Friendly AI and the limits of computational epistemology · 2012-08-10T04:44:25.682Z · LW · GW

To summarize (mostly for my sake so I know I haven't misunderstood the OP):

  • 1.) Subjective conscious experience or qualia play a non-negligible role in how we behave and how we form our beliefs, especially of the mushy (technical term) variety that ethical reasoning is so bound up in.
  • 2.) The current popular computational flavor of philosophy of mind has, in your eyes, inadequately addressed qualia, because the universality of the extended Church-Turing thesis, though it satisfactorily covers the mechanistic descriptions of matter in a way that provides for emulation of the physical dynamics, does not tell us anything about which things would have subjective conscious experiences.
  • 3.) Features of quantum mechanics such as entanglement and topological structures in a relativistic quantum field provide a better ontological foundation for your speculative theories of consciousness, which take as inspiration phenomenology and a quantum monadology.

EDIT: I guess the shortest synopsis of this whole argument is: we need to build qualia machines, not just intelligent machines, and we don't have any theories yet to help us do that (other than the normal, but delightful, 9-month process we currently use). I can very much agree with #1. Now, with #2, it is true that the explanatory gap of qualia does not yield to computational descriptions of physical processes, but it is also true that the universe may just be constructed such that this computational description is the best we can get, and we will just have to accept that qualia will be experienced by those computational systems that are organized in particular ways, the brain being one arrangement of such systems. And for #3, without more information about your theory, I don't see how appealing to ontologically deeper physical processes gets you any further in explaining qualia; you need to give us more.

Comment by haig on Friendly AI and the limits of computational epistemology · 2012-08-10T03:58:43.243Z · LW · GW

We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various qualia that we experience, we would have no foundation to act ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and it is the route SIAI has been taking because they have discounted qualia, which is what this post is really all about.

Comment by haig on Friendly AI and the limits of computational epistemology · 2012-08-10T03:46:54.966Z · LW · GW

"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.

Comment by haig on Does functionalism imply dualism? · 2012-07-31T19:06:13.033Z · LW · GW

"If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality."

This is a 'why' question, not a 'how' question, and though some 'why' questions may not be amenable to deeper explanations, 'how' questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may understand this so well that we could engineer qualia on demand, or create new types of never-before-seen qualia according to some transformation rules. However, explaining 'why' such arrangements of matter should possess such interiority or subjectivity is, I think, at least based on everything we currently know, unanswerable.

Comment by haig on Journal of Consciousness Studies issue on the Singularity · 2012-07-30T16:10:32.157Z · LW · GW

In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.

I'm thinking of specific concepts from Yudkowsky and others in the singularity/FAI crowd that seem uncontroversial at first glance but, upon further investigation, when analyzed in the light of computational complexity, become unconvincing. One example is the concept of the possibility space of minds, an assumption propping up many of the arguments for the negative consequences of careless AI engineering. Seen from the perspective of computability, that possibility space does represent the landscape of theoretically possible intelligent agents, and at first glance, those sensitive and wise enough to care about where in that landscape most outcomes of successful AI engineering projects will be located are alarmed at the needle in the haystack that is our target for a positive outcome. But if you put on your computational complexity hat and start to analyze not just the particular algorithms representing AI systems themselves, but the engineering processes that work towards outputting those AI agents/systems, a very different landscape takes shape, one that drastically constrains the space of possible minds that are a.) of a comparable cognitive class with humans, and b.) reachable by a feasible engineering approach on a timescale T < heat death of our universe. (I'm including the evolution of natural history on Earth within the set of engineering processes that output intelligence mentioned above.)

This is but one example of how the neglect of computational complexity, and, to be frank, the neglect of time as a very important factor overall, has influenced the thinking of the SIAI/LessWrong et al. crowd. This neglect leads to statements such as Yudkowsky's claim that an AI could be programmed on a desktop computer circa the early 2000s, of which I am extremely incredulous. It also leads to timeless decision theories, which I don't feel will be of much importance. Scott Aaronson has made a career out of stressing computational complexity for understanding the deep nature of quantum mechanics, and this should apply to all natural phenomena, cognition and AI among them.

Comment by haig on How to deal with non-realism? · 2012-07-28T00:39:53.096Z · LW · GW

I think 'extra properties outside of physics' conveys a stronger notion than what this view actually tries to explain. Property dualism, such as emergent materialism or epiphenomenalism, doesn't really posit any extra properties other than the standard physical ones; it is just that when those physical properties are arranged and interact in a certain way, they manifest what we experience as subjective experience and qualia, and those phenomena aren't further reducible in an explanatory sense, even though they are reducible in the standard sense of being arrangements of atoms.

So why is that therefore an incomplete understanding? I always thought of qualia as included within the same class of questions as, and let me quote Parfit here, "Why anything, why this?" We may never know why there is something rather than nothing in the deep sense, not just in the sense of Lawrence Krauss saying 'because of the relativistic quantum field', but in 'why the field in the first place', even if it is the only logical way for a universe to exist given a final TOE, but that does not hinder our ability to figure out how the universe works from a scientific perspective. I feel it is the same when discussing subjective experience and qualia. The universe is here, it evolves, matter interacts and phenomena emerge, and when that process ends up at neural systems, those systems (maybe just a certain subset of them) experience what we call subjectivity. From this subjective vantage point, we can use science to look back at that evolved process, see how the physical material is architected, understand its dynamics, and create similar systems, but there may not be a deeper answer to why or what qualia are other than their correlated emergence from physical instantiations and interactions. That is not anti-reductionist, and it is not anywhere near the same class of thought as substance dualism.

Comment by haig on New York Times on Arguments and Evolution [link] · 2011-08-11T22:59:24.259Z · LW · GW

Reading his essay here: http://edge.org/conversation/the-argumentative-theory it appears that he does indeed come off as pessimistic with regard to raising the sanity waterline for individuals (i.e., teaching individuals to reason better and become more rational on their own). However, he does also offer a way forward by emphasizing group reasoning, such as what the entire enterprise of science (peer review, etc.) encourages and is structured for. I suspect he thinks that even though most people might be able to understand on an academic level that their reasoning is flawed and that they are susceptible to biases, they will still not be able to overcome those strongly innate tendencies in practice, hence his pragmatic insistence on group deliberation to keep the individual in check.

IMO, what he fails to take into consideration is the adaptability of human learning through experience and social forces: with the proliferation of, and prolonged participation in, communities like Less Wrong or other augmented reasoning systems, one would internalize the rational arts as habits and override the faulty reasoning much of the time. I still agree with him that we will always need a system like peer review or group deliberation to reach the most rational conclusions, but in the process of using those systems we individually become better thinkers.

Comment by haig on Defeating Ugh Fields In Practice · 2010-08-17T10:29:02.337Z · LW · GW

An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.

Comment by haig on July 2010 Southern California Meetup · 2010-07-11T00:11:54.586Z · LW · GW

The location in space was fine; the location in time, however, was problematic. Friday afternoon, especially in that area, has probably the most congested traffic anywhere on earth. I was so frustrated by the time I finally got there that I ended up parking in a structure that cost me $16 for two hours. Maybe the next meetup can happen at a later time (after 6pm) on a weekday other than Friday.

Also, a little more structure would have been nice in order to steer the strained conversations onto a more productive path. For the next meetup it might be interesting to ask prospective attendees to suggest a list of discussion topics which we could vote on.

Other than that, nice meeting you all!

Comment by haig on July 2010 Southern California Meetup · 2010-07-08T22:10:03.616Z · LW · GW

I'll be attending.

Comment by haig on The scourge of perverse-mindedness · 2010-03-24T21:18:58.328Z · LW · GW

In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions and spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven as in Christian belief, a more material resurrection as in the Jewish idea, or the more abstract notions found in Eastern and New Age ideologies. The root of 'spiritual', 'spirit', is a non-corporeal substance/entity whose main purpose is to contrast itself with the material body. Spirit is that which is not material and so can survive the decay of the material pattern.

In my opinion, THIS IS the hard pill to swallow.

Comment by haig on "Outside View!" as Conversation-Halter · 2010-02-24T22:30:54.683Z · LW · GW

I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event: the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market-mediated participation. Even this difference isn't that great, since some participants in the global scenario will have much greater contributions than others, making it look very similar to the local scenario, and vice versa, where a lab may get help from a diffuse network of contributors over the internet. If the differences really are that marginal, then Robin's 'outside view' seems to approximately agree with Eliezer's 'inside view'.

Comment by haig on Debate tools: an experience report · 2010-02-22T01:45:00.954Z · LW · GW

There is a web-based tool being worked on at MIT's collective intelligence lab. Couldn't find the direct link to the project, but here's a video overview: Deliberatorium

Comment by haig on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T22:19:48.046Z · LW · GW

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend to arrive at actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

Comment by haig on Rationality Quotes: October 2009 · 2009-10-25T23:26:17.648Z · LW · GW

"People are not base animals, but people, about 90% animal and 10% something new and different. Religion can be looked on as an act of rebellion by the 90% animal against the 10% new and different (most often within the same person)."

Comment by haig on Privileging the Hypothesis · 2009-09-29T05:33:05.069Z · LW · GW

The Wikipedia article on abductive reasoning claims that this sort of privileging of the hypothesis can be seen as an instance of affirming the consequent.
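
To spell out the schema (my own gloss, not from the article): abduction infers from "if H then E" and an observation of E that H is plausible. Treated as a deduction rather than a plausibility judgment, that is exactly the invalid form of affirming the consequent:

    If H then E.
    E.
    Therefore H.   (invalid as a deduction)

Privileging the hypothesis makes the same move: evidence merely compatible with H is treated as grounds for promoting H to attention.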

Comment by haig on Your Most Valuable Skill · 2009-09-28T04:56:07.795Z · LW · GW

Aside from learning as a way to acquire useful skills, there are certain things I learn in order to change the way I think. Echoing similar comments, programming seems to have altered my perspective as a kid and continues to do so. One example is learning Lisp. It's become popular to learn Lisp not because it is practically useful in day-to-day coding (though it can be), but because it changes the way you think about how to program.

Similarly, studying abstract algebra might be a waste of my time (though I'll understand Lie groups and hence theoretical physics better), but the way it warps (in a good way) my mind cannot be attained by any other (present) means.

This might not be original, but my most valuable skill then seems to be learning how to learn, which includes honing my intuition of what is important to learn.

Comment by haig on Rationality Quotes - September 2009 · 2009-09-03T08:05:04.387Z · LW · GW

"It is useless to attempt to reason a man out of a thing he was never reasoned into." (Jonathan Swift )

Comment by haig on How does an infovore manage information overload? · 2009-08-26T15:30:58.581Z · LW · GW

I agree, and admit laziness on my part for hoping someone else would insightfully reflect on my problem instead of offering at least a minimal solution myself to start things off. Ironically, I can't seem to make time to analyze how I can make more time!

Comment by haig on ESR's New Take on Qualia · 2009-08-25T23:19:48.286Z · LW · GW

What you describe is what Tim Gallwey calls the 'inner game'. It is, to simplify a bit, training your intuitive subconscious without letting your conscious awareness interfere. Here is a video of him coaching a woman who has never touched a tennis racket to serve using the technique.

Another similar technique is drawing on the right side of the brain.

Comment by haig on How does an infovore manage information overload? · 2009-08-25T22:32:05.068Z · LW · GW

The post was supposed to be in the spirit of the many self-improvement posts regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component alongside the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.

Comment by haig on Would Your Real Preferences Please Stand Up? · 2009-08-11T06:37:53.788Z · LW · GW

You might have misunderstood me. I did not limit akrasia to only things we enjoy. I said actually getting going on the task, whether inherently enjoyable or not, is what 'feels wonderful'. I hate going to the dentist, but actually engaging in the process of going to the office and getting it over with feels pretty good as an accomplishment.

And forming the habit of not procrastinating is a very big part of it, IMO. To stop putting things off and automatically jump into a task is a positive habit that does a great deal against akrasia. Why do you think juvenile delinquents get sent off to boot camp or some other long period of regimented experience? To form the habits that will mold their character accordingly.

Comment by haig on Would Your Real Preferences Please Stand Up? · 2009-08-09T06:09:21.014Z · LW · GW

The Cynic's Theory may in fact describe a true state of mind, but it is not describing akrasia. The Cynic's Theory might better describe those minds whose preferences are implanted by exterior influences that conflict with their internal, consciously hidden preferences. An example might be someone who always thought they wanted to be a doctor but deep down knew they wanted to be an artist.

However, when I think of akrasia, I don't think of incompatible goals or hidden preferences; I think of compatible goals but an inability to consciously exert willpower in achieving the agreed-upon goals. When you finally stop procrastinating and get going, you feel wonderful and wonder why you couldn't have done it sooner, but then you go through the same problem the next time. Akrasia is a problem of forming/eliminating automatic behaviors, aka habits. So in my opinion, the Cynic's Theory does not shed any light on the problem of akrasia.

Comment by haig on Unspeakable Morality · 2009-08-05T08:09:14.150Z · LW · GW

Yes, a big problem is the human tendency to associate strongly with beliefs so that they become a part of your identity. When I once got into an argument with a particularly stubborn friend regarding religion, I tried to dissociate the arguments from ourselves as much as possible by writing them down and having an impartial 3rd party blindly check for inconsistencies and biases in a type of scoring system. How'd it turn out? He gave up all right, but still retained his beliefs!

Comment by haig on Pain · 2009-08-03T08:48:08.471Z · LW · GW

The important thing to point out is that the information signal we experience as pain is an instructive signal more than just an indicative one. I mean that pain's purpose is to make the organism react against whatever is hurting it, not just become aware of it. Since conscious decision-making in humans is delayed by at least 500ms (and sometimes up to 10 seconds!), signals such as pain have to result from low-level cybernetic reactions in the nervous system, not just conscious experience after the fact. I'm sure if an intelligent designer had created an intelligent agent instead of dumb evolution, she would have designed it to take advantage of its intelligence and relay 'bad' signals painlessly. Pain is a necessary legacy subsystem that came about due to the stupidity of evolution.

So what is bad about pain? It is a cruel hack built by a blind, dumb hacker with lots and lots of time on its hands.

Comment by haig on With whom shall I diavlog? · 2009-06-03T09:38:57.712Z · LW · GW

I voted up Robin Hanson, but I would love either Cory Doctorow or Bruce Sterling because they are both smart sci-fi authors who are vocally skeptical of something like the singularity happening.

Whoever it is, in my opinion the best discussions would be between people who share very similar worldviews yet strongly differ on some critical ideas. We don't need to see another religion debate, that's for sure.

Comment by haig on Do Fandoms Need Awfulness? · 2009-05-29T05:19:56.085Z · LW · GW

Warren Buffett seems to fit all the criteria of the counterexample Eliezer asked for. And if you doubt the fanaticism of his fandom, just look over some videos of his annual shareholders' meeting/convention.

Comment by haig on Bad reasons for a rationalist to lose · 2009-05-20T07:01:35.262Z · LW · GW

Shouldn't this be in the domain of psychological research? The positive psychology movement has a large momentum, and many young researchers are pursuing many lines of questioning in these areas. If you really want a rigorous, empirically verified, general-purpose theory, that seems to be the best bet.

Comment by haig on Share Your Anti-Akrasia Tricks · 2009-05-16T18:03:05.346Z · LW · GW

'cleaning my room' is still abstract. If you decompose that into 'pick up clothes off floor, then make my bed, then vacuum the carpet, .....', then those are concrete tasks.

Comment by haig on Share Your Anti-Akrasia Tricks · 2009-05-16T04:45:07.367Z · LW · GW

Well, from reading the comments it seems the most popular type of akrasia that hinders this group is procrastination. I'm sure other weaknesses of will are common, but procrastination seems to be an overwhelmingly common nuisance. This paper http://www.uni-konstanz.de/FuF/SozWiss/fg-psy/gollwitzer/PUBLICATIONS/McCreaetal.PsychSci09.pdf might hint at why this is so. The gist is that the more abstract the tasks/projects/goals are, the more you will procrastinate. As the tasks become more concrete, the procrastination is eliminated. An example is the abstract concept of 'write that essay' vs. 'pick up pen & paper and begin mind-mapping' or whatever.

It is probably fair to assume that most people here are more abstract thinkers compared with the average population and thus might be extra sensitive to procrastination.

Comment by haig on Generalizing From One Example · 2009-04-29T06:20:27.218Z · LW · GW

Good post and important issues: How similar are other human minds to my own? How can I discuss academically what other minds should or should not do or believe if they are so different from my own? It is much like trying to argue over the aesthetics of a colored art piece with a person born blind.

It would be constructive to be able to deduce which attributes a person has and which they're lacking, and in what proportions.

Comment by haig on Bayesians vs. Barbarians · 2009-04-15T08:01:09.334Z · LW · GW

In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering, because that is the rational response. The maximally optimal response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, alternating in round-robin fashion or by some other protocol, as in the sketch below. That boils down to a technical problem in operations research or systems engineering.
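
To make the round-robin idea concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the post) of a rotating-coordinator protocol: every agent derives the same leader from a shared round counter, so ordering and following alternate without any fixed hierarchy.

    from typing import List

    def coordinator(agents: List[str], round_no: int) -> str:
        # Deterministic rotation: every agent computes the same leader
        # for a given round, so no negotiation or fixed rank is needed.
        return agents[round_no % len(agents)]

    soldiers = ["alice", "bob", "carol", "dave"]
    for rnd in range(6):
        leader = coordinator(soldiers, rnd)
        followers = [a for a in soldiers if a != leader]
        print(f"round {rnd}: {leader} orders; {', '.join(followers)} execute")

The hard systems-engineering part, of course, is keeping the shared round counter synchronized under unreliable communication, which is where real distributed-coordination machinery would come in.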

On another note, sometimes the most rational response for 'winning' will conflict with our morality, or at least our emotions. Von Neumann advocated a first-strike response against the Soviets early on, and he might have been right. Even if his was the most rational decision, you can see the tangle of problems associated with it. What if winning means losing a part of you that was a huge part of the reason you were fighting in the first place?

Comment by haig on Rationality, Cryonics and Pascal's Wager · 2009-04-09T09:52:12.128Z · LW · GW

My issue isn't with cryonics; it is with the whole notion of self-identity and post-singularity personhood. I guess this ties into EY's 'fun theory' and underscores the importance of a working positive theory of 'fun' as a prerequisite for immortality as we currently define it.

Assume cryonics works; furthermore, assume that your brain is scanned with enough resolution to capture all salient features of what you consider to be your mind. You are now an uploaded entity, and your mind is as malleable as any other piece of software. There are only so many clock cycles to spend indulging in hedonism and utopian bliss before that gets old. So then naturally, you expand your mind, you merge with other minds, and whatever else is possible. Very soon you will no longer resemble anything of what we consider a 'self', not just a human self with our evolved emotions and thoughts, but a 'self' in general as we can define it. So then, what is the point of trying to preserve yourself if you are going to transcend the self anyway?

I'd want to make sure humanity continues or ensure some posthuman eventuality continues where we leave off. Also, I'd want to continue to live as much of a fruitful existence for as long as I can, as long as I am still living. If that means I reach the singularity and enjoy a certain number of clock-cycles in utopia before the idea of utopia ceases to have any meaning, then lucky me. If not, I'm content to have existed at all and to have done my part in trying to ensure the continuation and spread of consciousness. But it makes no sense to go to such lengths to preserve myself needlessly in relation to such a (near) infinite expansion of consciousness.

This can also be viewed as an update to Camus's question of suicide. Is not signing up for cryonics, or dying while knowing it can be overcome, similar to suicide? I admit, I have contemplated suicide in the past, but I'm over it. It pains me to start feeling the same emotions once again when contemplating cryonics.

Comment by haig on Rationality is Systematized Winning · 2009-04-04T08:25:54.635Z · LW · GW

Let's use one of Polya's 'How to Solve It' strategies and see if the inverse helps: Irrationalists should lose. Irrationality is systematized losing.

On another note, rationality can refer to either beliefs or behaviors. Does being a rationalist mean your beliefs are rational, your behaviors are rational, or both? I think behaving rationally, even with high-probability priors, is still very hard for us humans in a lot of circumstances. Until we have full control of our subconscious minds and can reprogram our cognitive systems, it is a struggle to will ourselves to act in completely rational ways.

To spread rationality, amongst humans at least, we might want to consider a divide and conquer approach focusing on people maintaining rational beliefs first, and maximizing rational behavior second.

Comment by haig on Where are we? · 2009-04-03T07:39:47.884Z · LW · GW

I'm in Pasadena.

Comment by haig on Where are we? · 2009-04-03T05:17:29.320Z · LW · GW

Los Angeles, CA

Comment by haig on Purchase Fuzzies and Utilons Separately · 2009-04-02T06:22:00.184Z · LW · GW

Kiva.org has the distinct honor of being the only charity that has ensured me maximum utilons for my money, with the unexpected bonus of the most fuzzies I've ever experienced. Seeing my money repaid, and knowing that this was possible only because my charity dollars worked, that the recipient of my funds actually put the dollars to use effectively enough to thrive and pay me back, well, goddamn it felt good.

Comment by haig on Church vs. Taskforce · 2009-03-29T07:58:44.868Z · LW · GW

We don't want to create a new religion, but whatever we create to take its place needs to offer at least as much as that which it replaces, so we might end up actually needing a new 'religion' whether we like it or not. If there is indeed a biological predisposition for humans to want to engage in 'worship', then we might as well worship rationally. I hesitate to call this new organization a religion or the practice worship, since those are the things they are replacing, but those words get my idea across.

How about we create a church-like organization that has local congregations and meets weekly to listen to talks on rationality, the latest scientific discoveries, lectures on philosophy, the state of the world, etc.? And these don't need to lack beauty or awe. A weekly dose of the unimaginable beauty of biology, or astrophysics, or even economics, in a shared setting, would surely add value to my life. A 'bible study' about Fermi's paradox would have made my day as a child. We can tug on the emotions as much as traditional religions do without being irrational.

And the missionary arm would maintain the rationality of the 'church'. If the Catholic pope denounces condoms in Africa, then our 'church' goes one further and starts a viral campaign not only to spread the reasons why the pope is wrong, but to get creative and set up condom donations or incentive structures to promote their use, or whatever.

I know there are many organizations that promote skepticism, secular humanism, reason, enlightenment, etc., but I don't know if they are widespread, have local chapters that meet regularly, or have much of a following.

And yes, 'canonizing' the vast information to make it more accessible would help a lot.

UPDATE: In regard to the post wondering how this would all be different from the atheist groups and other such organizations that currently exist: well, that is the rub, isn't it? Those have the right idea but aren't successful... how can we make one succeed? Or can we prove that one can't succeed, so as not to waste any more time over it?

Comment by haig on Don't Revere The Bearer Of Good Info · 2009-03-22T10:51:55.719Z · LW · GW

I like EY's writings, but I don't hold them up as gospel. For instance, I think this guy's summary of Bayes' theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer essay (http://yudkowsky.net/rational/bayes).
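
For reference, the theorem both essays are explaining, applied to the stock mammography example from EY's essay (1% of screened women have breast cancer, 80% of women with cancer test positive, 9.6% of women without cancer also test positive), assuming I'm remembering the numbers correctly:

    P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
                         = (0.8 * 0.01) / (0.8 * 0.01 + 0.096 * 0.99)
                         = 0.008 / 0.10304
                         ≈ 7.8%

The counterintuitive smallness of that number is exactly what both explanations are trying to make feel obvious.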

Comment by haig on 3 Levels of Rationality Verification · 2009-03-16T20:33:34.719Z · LW · GW

There is a recent trend of 'serious games' which use video games to teach and train people in various capacities, including military, health care, management, as well as the traditional schooling. I see no reason why this couldn't be applied to rationality training.

I always liked adventure-style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more. They seemed to test rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how best to achieve the protagonist's goals. It seems like the perfect video game genre for both developing and testing rationality skills.

Specifically, I've thought of a microcosm of the real world, taking place in a different setting yet similar enough to our real world that there would be analogues to religion, science, politics, etc. As you progress through the game, say from child to adult, you learn about the world and see how different beliefs and strategies affect the game. Players would encounter challenges similar to the real world's but be disconnected enough not to put up a defense mechanism, yet involved enough to care about the outcome. Add MMO and other features to taste.

Comment by haig on A Sense That More Is Possible · 2009-03-13T08:33:34.960Z · LW · GW

Isn't this a description of what a liberal arts education is supposed to provide? The skills of 'how to think', not 'what to think'? I'm not too familiar with the curriculum since I did not attend a liberal arts college (instead I was conned into an overpriced private university), but if anyone has more info please chime in.

Comment by haig on Raising the Sanity Waterline · 2009-03-12T13:19:14.380Z · LW · GW

I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html covering research on brain differences between believers and non-believers. The takeaway from the recent study was that "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to it, then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not letting uncertainty or errors slide by, no matter how small, would build a positive habit and develop one's rationality skills.

Comment by haig on The Apologist and the Revolutionary · 2009-03-12T08:20:37.436Z · LW · GW

Some commenters said that theory revision sessions such as brainstorming are actually pleasant to most rationalists and don't necessarily induce sadness. Indeed, I really enjoy arguing and learning new things, or else I wouldn't continue to do them. However, there is a difference between the loose juggling of ideas that we aren't very attached to and the kind of continual self-checking of core beliefs that strict rationalists attempt. In order to operate effectively in the world and achieve goals, we need a solid belief foundation from which to pick goals and then choose strategies to achieve them. If you are continually attacking your core beliefs, or at least remaining open to changing them, you never have a very solid model to base actions upon. The sine qua non of depression is an inability to function: you don't move towards goals; you can't even pick the goals in the first place. Contrast that with the stereotypical 'go-getter', the ambitious overachiever, who reports higher happiness levels. They remain consistent in their beliefs and world-views for the most part. Your brain wants to move towards goals, and continually reassessing the foundation you base goals on is bound to cause problems. It's cliché to mention the quintessential exemplar of this scenario, the depressed person struggling with existential angst. In her case, she is constantly reassessing her most basic models: "Who am I? What's my purpose?" and so on.

Comment by haig on Striving to Accept · 2009-03-10T23:15:03.067Z · LW · GW

practice, practice, practice

Comment by haig on Striving to Accept · 2009-03-10T22:58:13.556Z · LW · GW

I think this post and the related ones really hit home why it is hard for our minds to function fully rationally at all times. Like Jonathan Haidt's metaphor that our conscious awareness is akin to a person riding on top of an elephant, our conscious attempt at rational behavior is an effort to tame a bundle of evolved mechanisms lying below 'us'. Just think of the preposterous notion of 'telling yourself' to believe or not believe in something. Who are you telling this to? How is cognitive dissonance even possible?

I remember the point when I finally abandoned my religious beliefs as a kid. I had 'known' that belief in a personal god and the religious teachings were incompatible with rational thinking, yet I still maintained my irrational behavior. What did the trick was to practice, for a set period of time, living strictly in accordance with what my rational beliefs allowed. After some number of days, I was finally completely changed and could not revert to my state of contradiction.

In relation to this, think about why you can't just read a math book and suddenly get it (at least for us non-math-geniuses). You may read an equation and superficially understand that it is true, but you can still convince yourself otherwise or hold conflicting beliefs about it. But then, after working examples yourself and practicing, you come to 'know' the material deeply and can hardly imagine what it is like not to know it.

For people like the girl Eliezer was talking to, I wonder what would happen if you told her, as an experiment, to force herself to totally abandon her belief in God for a week, adhering only to reason, and see how she feels.