Posts

Auckland meetup, Thursday May 26th 2011-05-21T05:22:05.418Z
Auckland meet up Saturday Nov 28th 2009-11-15T05:29:16.003Z
Robin Hanson's lists of Overcoming Bias Posts 2009-08-06T20:10:39.207Z
An observation on cryocrastination 2009-07-22T20:41:41.912Z

Comments

Comment by AndrewH on More "Stupid" Questions · 2013-08-01T18:52:28.710Z · LW · GW

One wonders whether, in populations of rationalists (CFAR in particular), there are naturally mono people who are 'conformed' into being poly?

Comment by AndrewH on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-28T20:35:08.343Z · LW · GW

This reminds me of the place premium, the interesting concept that someone doing the same job can earn more in one country than in another. Though we are talking about some kid who can't even get a job in the first place, the concept applies well.

For example, suppose a homogeneous region such as a country, city, or even suburb has automated to such a degree that menial jobs are few, and has attracted the best people, and the best people to serve the best people. Such a region has 'place premium': the top creative jobs (programming, finance, design work, etc.) pay extremely well to entice the best. These people demand, via their wealth, the best service, and so entice those who are skilled, good looking, or have whatever attributes the service requires, continuously filtering people.

I'll also argue that the US is a special case in that US dollar holders get a subsidy to living via the petrodollar/global reserve currency, paid for by any foreigners wanting to buy [relative to them] foreign products. This only increases the place premium of living in the US, and thus of earning a wage in USD.

For the IQ 70 kids, perhaps there ARE no jobs for them in the region they live in. They have been filtered out by better people (in the sense of selected for the jobs in that region) chasing the region's 'place premium'.

The solution is to move somewhere else, going against the flow of people moving to higher 'place premium' locations; the one they are in has been saturated by above-average people. Perhaps it is even time to think of emigrating to one of those countries where they can earn 50c/hour.

Of course, with the advent of nation states there is no longer free flow of people, so without welfare these kids might just starve to death, denied the freedom to move.

Comment by AndrewH on How I Lost 100 Pounds Using TDT · 2013-07-21T01:37:14.604Z · LW · GW

Plenty of foods available today were not available to our ancestors, such as semi-dwarf wheat.

Comment by AndrewH on Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T07:19:48.191Z · LW · GW

Could be that 'user 75th' only had the right information and mental algorithms to produce the correct prediction in this one case. In other cases, 'user 75th' might not have passed a sufficient threshold of probability to spout out a prediction.

Please label me as user 2nd when it comes to predictions of 'user 75th's predictive powers.

Comment by AndrewH on Optimizing for attractiveness · 2013-06-01T22:37:28.654Z · LW · GW

Being happy is a higher-order goal than becoming attractive, correct? How about picking up meditation instead? You shouldn't need to rely on anyone but yourself to be a happy person.

Here are some simple instructions to get you started. If interested, google "Progressive Stages of Meditation in Plain English" for more detailed instructions.

Comment by AndrewH on Mathematicians and the Prevention of Recessions · 2013-05-26T21:43:17.942Z · LW · GW

To the degree that money is used as a store of value, the money supply available for 'positive-sum' trades decreases. Suppose the supply of goods and services on the market stays the same; then, with less money available to potentially purchase these goods and services, the prices of the goods and services decrease (basic microeconomics: the supply and demand curve). This incentivizes people who are not holding money as a store of value to participate in more positive-sum trades.
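
To make that concrete, here is a minimal sketch using the quantity theory of money (MV = PQ). That framing is mine, not part of the original argument, and the numbers are purely illustrative:

```python
# Quantity theory identity: P = M * V / Q.
# Money parked as a store of value drops out of the circulating supply M;
# with velocity V and output Q held fixed, the price level P must fall.
def price_level(m_circulating: float, velocity: float, output: float) -> float:
    """Price level implied by the quantity theory of money."""
    return m_circulating * velocity / output

before = price_level(m_circulating=1000.0, velocity=1.0, output=100.0)
after = price_level(m_circulating=800.0, velocity=1.0, output=100.0)  # 200 saved
print(before, after)  # 10.0 8.0 -- prices fall, so each remaining dollar buys more
```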

Of course, people might end up taking their store-of-value money and investing it, allowing the creation of capital goods that make more efficient production possible. But that's another story.

Comment by AndrewH on Post ridiculous munchkin ideas! · 2013-05-22T02:00:14.822Z · LW · GW

Better to think of ways to not spend money than to think of ways to keep on living relying on other people's money.

Comment by AndrewH on MetaMed: Evidence-Based Healthcare · 2013-03-05T18:33:06.496Z · LW · GW

If you haven't done the search yet (in the manner MetaMed would do a search), how can you guarantee you'll find something before you do the search? :)

Comment by AndrewH on New censorship: against hypothetical violence against identifiable people · 2012-12-24T02:57:44.008Z · LW · GW

Intriguing. Is this an actual paraphrase of a US Surgeon General? I can imagine it is something someone in high office might say.

Comment by AndrewH on More Cryonics Probability Estimates · 2012-12-18T08:32:57.437Z · LW · GW

Accessing long-term memory appears to be a reconstructive process, which additionally results in accessed memories becoming fragile again; this is what I believe is occurring here. The learned aversion is reconstructed and is then much more susceptible to damage than other, non-recently-accessed LTM. Consider that the drug didn't destroy ALL of the mice's (fear?) memories, only that which was most recently accessed.

So no worries to cryonics!

Comment by AndrewH on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-27T04:55:59.873Z · LW · GW

I can only give you one upvote, so please take my comment as a second.

Comment by AndrewH on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T20:43:26.524Z · LW · GW

Let's be honest about 'demonstrating rationality' here. If your goals are to have much more romping in the bedroom, they have done well. However, many of these techniques speak to me of cults, the ones with the leader getting all the brainwashed girls.

A much better sign of rationality is to have success in career, in money, in fame -- to be Successful. Not just to have more fun. Being successful hasn't been much demonstrated, though I am still hopeful.

Comment by AndrewH on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T19:57:31.329Z · LW · GW

To be honest, as a long-term supporter of SIAI, this sort of social experimentation seems like a serious political blunder. I personally have no problem with finding new (or not part of current western culture) techniques of... social interactions... if you believe they will make yourself and others 'better', for some definition of better.

But if you are serious about actually getting the world behind the movement, this is Bad. "Why should I believe you when you seem to be amoral?" I have more arguments on this matter, but they are easy to generate anyway.

Another thought: one way to think of it might be that to achieve your goals, personal sacrifice is necessary and applauded: 'I'm too busy saving the world to have a girlfriend.' Perhaps there are better examples than that. Maybe it's time to get rid of couches?

Comment by AndrewH on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T19:56:09.499Z · LW · GW

With Unbreakable Vows, the... arbitrator?... sacrifices a portion of their magic permanently, yes? One issue is that after you die you might need that magic for something; perhaps the more magic you have, the more pleasant (or less!) a magically created heaven is. In any case, even if magical society were fine with sacrifices, they might reason thus and not use Unbreakable Vows. Such a society would make (magical!) investigation into a potential afterlife a top priority, so the lack of use of such a ritual might be compensated for by finding out there is a heaven (or hell).

Comment by AndrewH on Brain Preservation · 2012-03-31T08:27:35.966Z · LW · GW

Clearly, if you foresee larger costs as you age, the incorrect course of action is to simply do nothing and find that when you are old, you have no money to pay for the policy. If you don't want to spend a large amount when you are old, then save now. Perhaps if you save/invest enough, you will have enough money to simply buy a cryonics policy directly.

Comment by AndrewH on Normal Ending: Last Tears (6/8) · 2012-01-29T02:48:12.944Z · LW · GW

> Other than the mass suicides...

And including the mass suicides? Remember that in this story, 6 billion people become 1 in a million, and over 25% of people died in this branch of the story. Destroying Huygens resulted in 15 billion deaths.
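
A rough sketch of the multiplication, using only the figures above (the story's total population is the unknown; the code is just illustrative arithmetic):

```python
# Figures taken from the comment above.
suicide_rate = 0.25      # "over 25% of people died in this branch"
huygens_deaths = 15e9    # "Destroying Huygens resulted in 15 billion deaths"

# Break-even population: above this, the mass suicides kill more people
# than destroying Huygens does.
break_even = huygens_deaths / suicide_rate
print(f"{break_even:,.0f}")  # 60,000,000,000
```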

As they say, shut up and multiply.

Comment by AndrewH on Polyhacking · 2011-09-08T02:09:40.726Z · LW · GW

At first, I thought that making a new convention is the wrong way to go about it. How many conventions would we then need to remember? Making new conventions all over the place for LWers would be too difficult; too many different social rules to juggle.

For example, in such a situation as asking a person out, you would need to think about the LW community conventions and then the normal conventions when deciding how to act. But then, you couldn't do better unless you allow for change.

If a community is to be truly made, perhaps a set of conventions can be constructed such that this convention slots nicely into an easily searchable hierarchy: Relationships -> relationship changing -> approaches/dating requests. You could make an iPhone app so that the LWer looking for love (or wishing to take some social action) can quickly and discreetly check the currently accepted conventions/guidelines. If someone deviates, you can have all sorts of fun deciding whether to call them on it.

Comment by AndrewH on [Link] Simon Cowell plans to sign up for cryonics · 2011-08-29T00:01:54.064Z · LW · GW

Are you going to kill yourself now, given that you are only living because you know that someday you will be alive and Cowell will not be? Because not signing up for cryonics is saying that you don't want to live for longer than ~90 years :)

Comment by AndrewH on Auckland meet up Saturday Nov 28th · 2011-04-28T19:35:30.073Z · LW · GW

Most interesting! I would also recommend CompSci 111 even if you are skilled with computers. It introduces you to a wide range of skills.

You might even bump into me in the corridor.

Comment by AndrewH on Auckland meet up Saturday Nov 28th · 2011-04-28T08:34:25.861Z · LW · GW

I noticed. I'll be setting up a new meet up soon due to someone else requesting it. Auckland is positively on fire with rationality, it seems! Bring water buckets.

You are doing computer science now? That's most interesting. Are you taking any stage 1 compsci papers this semester?

Comment by AndrewH on Procedural Knowledge Gaps · 2011-02-08T07:46:43.946Z · LW · GW

Yes, the refined carbohydrates are the real killer here. Eat as much meat as you want, but no more white bread!

The complete notes are a fantastic summary.

Comment by AndrewH on The Last Days of the Singularity Challenge · 2010-03-01T00:04:36.799Z · LW · GW

I put in ~1000 or so over a few months. For a better world!

Comment by AndrewH on Are wireheads happy? · 2010-01-25T20:05:37.705Z · LW · GW

> I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.

What is missing from this is the effort (which eats up the limited willpower budget) required to get the second Pringles chip. Your motivation for a second chip would be much lower if you had only bought one bag of Pringles, and all bags contained one chip. However, your motivation to have another classof(Pringle) = potato chip no doubt rises -- due to the fact that chips are on your mind rather than iPhones.

Talking about effort allows us to bring habits into the discussion; habits you might define as sets of actions that, due to their frequent use, take much less effort to perform.

> The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but they're more motivated to work and buy iPhones than they are to spend time with their family.

Alternatively, for potentially good reasons earlier on (working hard to buy a house for said family), work has become habitual while spending time with the family has not. Hence work is the default set of actions, the default no-effort state, and anything that takes time away from work requires effort. Spending time with the family would do this, yet buying an iPhone with the tons of money this person has would not.

A way of summarizing the effect of effort: it is a function of a particular person's set of no-effort (no-willpower) actions. This function defines how much 'wanting' is required to do an action; of actions with the same amount of 'wanting', the less effortful ones are more 'desirable' to do.

Willpower plays a big role in this, in that you can spend willpower to pull yourself out of the default state (a default state such as being in New York), but it only lasts so long.
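
To pin that summary down, here is a toy model; the function names and numbers are mine and purely illustrative:

```python
# Toy model: an action happens when 'wanting' plus available willpower
# covers the effort demanded by its distance from your habitual, no-effort set.
def effort(action: str, habits: set[str]) -> float:
    """Habitual actions cost nothing; everything else draws on willpower."""
    return 0.0 if action in habits else 1.0

def acts(wanting: float, action: str, habits: set[str], willpower: float) -> bool:
    """True if desire plus remaining willpower covers the action's effort."""
    return wanting + willpower >= effort(action, habits)

habits = {"work", "eat another chip"}
print(acts(0.3, "work", habits, willpower=0.0))        # True: zero effort needed
print(acts(0.3, "time with family", habits, 0.0))      # False: too effortful
print(acts(0.3, "time with family", habits, 0.8))      # True: willpower spent
```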

Comment by AndrewH on Normal Cryonics · 2010-01-20T09:06:18.804Z · LW · GW

I am also a New Zealander, AND I am signed up with the Cryonics Institute. You might be interested in contacting the Cryonics Association of Australasia, but I'm sure there is no actual suspension and storage nearby.

Besides, you are missing the main point: if you don't sign up now and you die tomorrow, you are annihilated, no questions asked. I would be wary of this question, as it can be an excuse to not sign up.

Comment by AndrewH on Auckland meet up Saturday Nov 28th · 2009-11-29T02:19:06.190Z · LW · GW

A turnout of 3, including myself, which is quite a success for a small place such as Auckland. We agreed that in mid-December we should meet again. So for anyone who considered coming but did not, please come next time! These meet-ups are excellent motivators for studying rationality.

Comment by AndrewH on Robin Hanson's lists of Overcoming Bias Posts · 2009-08-06T20:53:24.065Z · LW · GW

If you are talking about pretty pictures, then this looks much better.

Comment by AndrewH on Robin Hanson's lists of Overcoming Bias Posts · 2009-08-06T20:50:39.159Z · LW · GW

Automatically. If I had done it by hand, it would have looked nicer. I'm working on this project again, so I hope to have some much more user-friendly things coded soon. I'll make what you mentioned as well.

Comment by AndrewH on Pain · 2009-08-03T18:25:37.674Z · LW · GW

Agreed, pain overwhelming your entire thoughts is too extreme, though it is understandable how it evolved this way.

Comment by AndrewH on Celebrate Trivial Impetuses · 2009-07-25T06:29:15.140Z · LW · GW

In Getting Things Done, the first step is simply writing down each task you want to accomplish (of any level of difficulty and duration), and then you do a separate processing step after that.

That is when you decide how long each task takes, and if it takes less than 5 minutes, you do it now. When you get into the GTD system of life organization, you put trivial impetuses down in the initial collection phase, and when you get around to processing them, you have habits that say "do the task now if it takes less than 5 minutes". GTD is (apparently; I have tried to get it working for me but with little success so far) a life-changing thing.

Comment by AndrewH on Article upvoting · 2009-07-24T02:32:23.302Z · LW · GW

You could make voting on a post mandatory for commenting on it, so that to submit a comment you get prompted to vote the post up or down (or maybe neutral).

Or maybe just having the vote up/down/neutral buttons next to the comment submit button, right in people's faces, would make them more likely to vote.

Comment by AndrewH on AndrewH's observation and opportunity costs · 2009-07-23T14:40:58.746Z · LW · GW

Neat! I did not think of generalizing my arguments; we could call it the resource commit fallacy. We need techniques to help us solve this problem.

One strategy that comes to mind is precommitting to allocating the resources to the most efficient place before you optimize yourself, then making a bet with someone such that if you fail to follow through on your commitment, you take a penalty of resources.

Comment by AndrewH on An observation on cryocrastination · 2009-07-23T05:03:40.456Z · LW · GW

I reiterated the cost analysis from my perspective because it is essential to my argument about why people see cryonics as super beneficial but still fail to do anything about it, all the while sitting on a potential gold mine of money they are wasting away, which could be used to get cryonics!

Comment by AndrewH on An observation on cryocrastination · 2009-07-23T04:56:17.487Z · LW · GW

I don't even drink coffee, so I'm going to have to think hard about what part of my life I should optimize. I picked it because many people drink Starbucks coffee (we even have them in my little island country), and I presume you can do it cheaper if you buy your own.

Comment by AndrewH on An observation on cryocrastination · 2009-07-23T02:21:01.061Z · LW · GW

Well, there are a great many factors I am glossing over, but if you are pessimistic about cryonics to that degree, you are probably pessimistic about other future technologies, like medical and anti-aging technologies. You will die eventually unless actuarial escape velocity occurs while you are alive. Assuming it does not, if you don't have cryonics you won't take advantage of the indefinite lifespans future humans will possess; old age will kill you.

You could very well be worth more than 200 million; you just need to live long enough!

Comment by AndrewH on An observation on cryocrastination · 2009-07-22T22:30:45.804Z · LW · GW

> Now the real costs are between $25,000 and $155,000 in addition to annual membership fees, signup fees, transportation fees after death etc. That's how much you have to save during your lifetime to get cryopreserved.

The real costs per day of not bothering to optimize how you purchase food also add up over time. I would be willing to bet that most people could save quite a substantial amount of money with some careful thought and planning simply in how they purchase food. $7 a week over an average lifespan is $27,000.

The point of cryonics is that just a bit of optimization gives you a potential second chance at life if you screw up somewhere (sneeze while driving, for example). Given reasonable probabilities for the chances of dying and the likelihood of successful cryopreservation and revival, that amount of money is ridiculously cheap.
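
A back-of-the-envelope version of that claim; the weekly saving and lifespan match the figures above, while the success probability is purely a hypothetical of mine:

```python
# Lifetime savings from the comment's $7/week figure.
weekly_saving = 7                    # dollars saved per week on food
years = 74                           # rough average lifespan
lifetime_savings = weekly_saving * 52 * years
print(lifetime_savings)              # 26936 -- roughly the $27,000 cited

# Effective cost per expected revival, under an assumed success probability.
p_success = 0.05                     # hypothetical end-to-end odds, not a real estimate
print(lifetime_savings / p_success)  # ~$540,000 for a shot at a second life
```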

Comment by AndrewH on Are You Anosognosic? · 2009-07-19T23:34:37.623Z · LW · GW

Does the absence of people around me pointing at my arm and insisting it does not move, while I believe I have done plenty of activities in which I used 2 arms, mean I am an extreme anosognosic? One who rewrites massive quantities of 1-arm experiences into 2-arm experiences on the fly?

Comment by AndrewH on Where are we? · 2009-07-18T00:18:10.830Z · LW · GW

Auckland, New Zealand

Comment by AndrewH on The Strangest Thing An AI Could Tell You · 2009-07-15T16:51:48.265Z · LW · GW

Something I would probably believe:

The AI informs you that it has discovered the purpose of the universe, and part of the purpose is to find the purpose (the rest, apparently, can only be comprehended by philosophical zombies, and you are not one).

Upon finding the purpose, the universe gave the FAI and humanity a score out of 3^^^3 (we got 42) and politely told the FAI to tell humanity: "best of luck next time! next game starts in 5 minutes".

Comment by AndrewH on The Strangest Thing An AI Could Tell You · 2009-07-15T16:06:59.800Z · LW · GW

This is so fun that I suspect we have pushed back the date of friendly AI by at least a day - or pushed it forward, because we are all now hyper-motivated to see who guessed this question right!

Comment by AndrewH on Jul 12 Bay Area meetup - Hanson, Vassar, Yudkowsky · 2009-07-12T21:30:50.997Z · LW · GW

We are probably far more afraid of you than you are of us.

Comment by AndrewH on Our society lacks good self-preservation mechanisms · 2009-07-12T17:24:31.892Z · LW · GW

An important consideration not yet mentioned is that risk mitigation can be difficult to quantify, compared to disaster relief efforts, where if you save a house full of children, you become a hero. Coupled with the fact that people extrapolate the future from the past (which misses all existential risks), the incentive to do anything about it drops pretty much to nil.

Comment by AndrewH on Not Technically Lying · 2009-07-12T03:47:36.134Z · LW · GW

It is the truth, but you are explicitly saying those words so that the hearer (the patient) forms a false belief about the world. So it cannot be really truthful, because most people in that situation would, after hearing example 3, believe that they are being given something that has more effect than a placebo.

Comment by AndrewH on Let's reimplement EURISKO! · 2009-06-13T21:59:19.671Z · LW · GW

Taking progress in AI to mean more real world effectiveness:

Intelligence seems to have jumps in real world effectiveness; e.g. the brains of great apes and humans are very similar, yet the difference in effectiveness is obvious.

So coming to the conclusion that we are fine based on the state of the art not getting any more effective (not making progress) would be very dangerous. Perhaps tomorrow some team of AI researchers will combine the current state-of-the-art solutions in just the right way, resulting in a massive jump in real world effectiveness? Maybe enough to have an "oh, shit" moment?

Regardless of the time frame, if the AI community is working towards AGI rather than FAI, we will likely (eventually) have an AI go FOOM, or at the very least an "oh, shit" moment (I'm not sure if they are equivalent).

Comment by AndrewH on Honesty: Beyond Internal Truth · 2009-06-07T21:35:01.357Z · LW · GW

That's teaching for you: the raw truth of the world can be difficult to understand both in the context of what you already 'know' (Religion -> Evolution) and in its own right (Quantum physics).

This reminds me of "Lies to Humans" from Hex, the thinking machine of Discworld, where Hex tells the Wizards the 'truth' of something, couched in things they understand, basically to shut them up rather than to actually tell them what is really happening.

In general, a person cannot jump from any preconceived notion of how something is to the (possibly subjective!) truth. Instead, to teach, you tell lesser and lesser lies, which in the best case may simply be more and more accurate approximations of the truth. Throughout, you, the teacher, have been as honest to the learner as you can be.

But when someone has a notion of something that is wrong enough, I can see how these steps could, in themselves, contain falsehoods which are not approximations of the truth. Is this honest? To teach a flat-Earther that the world is round, perhaps a step is to consider the world being convex, so as to explain why ships disappear over the horizon.

If your goal is to get someone's understanding closer to the truth, it may be rational, but the steps you take, the things you teach, might not be honest.

Comment by AndrewH on This Failing Earth · 2009-05-27T22:37:06.104Z · LW · GW

WRT eugenics and other seemingly nasty solutions, it is as they say: sometimes it has to get worse to get better. No option that causes short-term harm that is obvious to the voting population, but long-term benefits to the population as a whole, is going to be considered by politicians who want to be elected again.

It seems to me that the science and rationality that would allow a social engineering project more than a shot-in-the-dark probability of working only came about recently (for eugenics, for example, post-Darwin). By the time it was possible to do these sorts of projects, it was not possible to do them because of the national (and international) outcry that would result.

So this really cuts off a great many possible projects that could benefit humanity in the long term. Is this good or bad? It depends on how far you are looking into the future, and on whether or not you think AGI is possible!

I hope those nine guys in that basement are working hard.

Comment by AndrewH on Can Humanism Match Religion's Output? · 2009-03-27T21:00:42.453Z · LW · GW

Ease of entry and exit is really important. I want to be able to enter the world and enter a discussion asap, but I don't want to feel compelled to stay for long periods of time.

So I think a browser based program would be best, rather than Second Life.

But I think having a place such as Second Life would be a good addition compared to what we have now with LW. Having a place where people like ourselves can discuss things in practically real time would, I think, be useful in helping to create this community of rationalists.

Mechanisms that make it feel like we really are living together, such as a detailed virtual world and even virtual houses, could help in making the community and keeping people participating in it. And of course, the added benefit of this is that we don't need to be physically close to each other, but could get the benefits as if we were (given a detailed enough environment).

Comment by AndrewH on BHTV: Yudkowsky & Adam Frank on "religious experience" · 2009-03-26T09:03:49.504Z · LW · GW

> In many cases, I suspect that people adopt false beliefs and the ensuing dark-side for short term emotional gain, but in the long term the instrumental loss outweighs this.

That may be one way of adopting the first set of false beliefs. Once the base has been laid (perhaps containing many flaws to hide the falseness), then in evaluating a new belief, it doesn't need to offer short-term emotional gain to be accepted, as long as it fits in with the current network of beliefs.

When I think of this, I think of missionaries promising that having faith in God will help people through the bad times. Then, after they accept that, the missionaries move on to the usual discussion of Hell, and how if only you do what they say, you'll be fine.

Comment by AndrewH on Why Our Kind Can't Cooperate · 2009-03-20T21:02:25.202Z · LW · GW

Not only that, it becomes a glue that binds people together; the more agreement, the stronger the binding (and the more people that get bound). At least that is the analogy I use when I look at this: we (rationalists) have no glue, they (religions) have too much.