How can I spend money to improve my life? 2014-02-02T10:16:52.975Z
The first AI probably won't be very smart 2014-01-16T01:37:49.048Z


Comment by jpaulson on question: the 40 hour work week vs Silicon Valley? · 2014-11-04T05:49:26.221Z · LW · GW

I think a more likely explanation is that people just like to complain. Why would people do things that everyone thought were a waste of time? (At my office, we have meetings and email too, but I usually think they are good ways to communicate with people, not a waste of time.)

Also, you didn't answer my question. It sounds like your answer is that you are compelled to waste 20 hours of time every week?

Comment by jpaulson on question: the 40 hour work week vs Silicon Valley? · 2014-11-02T06:19:22.394Z · LW · GW

I don't understand. Are you saying you could get 2x as much work done in your 40 hour week, or that due to dependencies on other people you cannot possibly do more than 20 hours of productive work per week no matter how many hours you are in the office?

Comment by jpaulson on question: the 40 hour work week vs Silicon Valley? · 2014-11-02T06:09:16.151Z · LW · GW

False. At a company-wide level, Google makes an effort to encourage work-life balance.

Ultimately you need to produce a reasonable amount of output ("reasonable" as defined by your peers + manager). How it gets there doesn't really matter.

Comment by jpaulson on question: the 40 hour work week vs Silicon Valley? · 2014-11-02T06:01:34.278Z · LW · GW

Sort of. My opinion takes that objection into account.

But on the other hand, I don't have any data to quantitatively refute or support your point.

Comment by jpaulson on question: the 40 hour work week vs Silicon Valley? · 2014-10-31T00:57:11.561Z · LW · GW

I work at Google, and I work ~40 hours a week. And that includes breakfast and lunch every day. As far as I can tell, this is typical (for Google).

I think you can get more done by working longer hours...up to a point, and for limited stretches of time. Per-hour productivity drops, but total output can still rise. I'd put the break-even point around 60 hours/week.

Comment by jpaulson on Applications of logical uncertainty · 2014-10-30T05:57:29.733Z · LW · GW

Why not start with a probability distribution over (the finite list of) objects of size at most N, and see what happens when N becomes large?

It really depends on what distribution you want to define though. I don't think there's an obvious "correct" answer.

Here is the Haskell typeclass for doing this, if it helps:
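The typeclass referenced here didn't survive in this copy; a minimal sketch of what such a typeclass might look like (the names are illustrative, not the original code):

```haskell
-- A sketch of a typeclass for "objects of size at most N" (illustrative
-- names; the snippet originally referenced here is not preserved above).
class Enumerable a where
  -- | The (finite) list of all objects of size at most n.
  upToSize :: Int -> [a]

-- A uniform distribution over objects of size at most n, as
-- (value, probability) pairs. One can then study its behavior as n grows.
uniformUpTo :: Enumerable a => Int -> [(a, Double)]
uniformUpTo n = [ (x, p) | x <- xs ]
  where
    xs = upToSize n
    p  = 1 / fromIntegral (length xs)

-- Example instance: both booleans count as size 1.
instance Enumerable Bool where
  upToSize n = if n >= 1 then [False, True] else []
```

Whether the uniform choice is "correct" is exactly the open question above; any other weighting by size would fit the same typeclass.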

Comment by jpaulson on Power and difficulty · 2014-10-30T05:42:42.820Z · LW · GW

Unfortunately, it seems much easier to list particularly inefficient uses of time than particularly efficient uses of time :P I guess it all depends on your zero point.

Comment by jpaulson on A discussion of heroic responsibility · 2014-10-30T05:37:28.120Z · LW · GW

I think for most things, it's important to have a specific person in charge, and have that person be responsible for the success of the thing as a whole. Having someone in charge makes sure there's a coherent vision in one person, makes a specific person accountable, and helps make sure nothing falls through the cracks because it was "someone else's job". When you're in charge, everything is your job.

If no one else has taken charge, stepping up yourself can be a good idea. In my software job, I often feel this way when no one is really championing a particular feature or bug. If I want to get it done, I have to own it and push it through myself. This usually works well.

But I don't think taking heroic responsibility for something someone else already owns is a good idea. Let them own it. Even if they aren't winning all the time, or even if they sometimes do things you disagree with (obviously, consistent failure is a problem).

Nor do I think dropping everything to fix the system as a whole is necessarily a good idea (but it might be, if you have specific reforms in mind). Other people are already trying to fix the system; it's not clear that you'll do better than them. It might be better to keep nursing, and look for smaller ways to improve things locally that no one is working on yet.

Comment by jpaulson on Power and difficulty · 2014-10-30T05:09:30.256Z · LW · GW

I was using "power" in the sense of the OP (which is just: more time/skills/influence). Sorry the examples aren't as dramatic as you would like; unfortunately, I can't think of more dramatic examples.

Comment by jpaulson on Power and difficulty · 2014-10-30T00:29:01.409Z · LW · GW

I disagree.

1 and 2 are "negative": avoiding common failure modes.

3 and 4 are "positive": ways to get "more bang for your buck" than you "normally" would.

Comment by jpaulson on Power and difficulty · 2014-10-29T16:43:04.413Z · LW · GW

This seems true, but obvious. I'm not sure that I buy that fiction promotes this idea: IMO, fiction usually glosses over how the characters got their powers because it's boring. Some real-life examples of power for cheap would be very useful. Here are some suggestions:

  • Stick your money in index funds. This is way easier and more effective than trying to beat the market.
  • Ignore the news. It will waste your time and make you sad.
  • Go into a high-paying major / career.
  • Ask for things/information/advice. Asking is cheap, and sometimes it works.

Anyone have other real-world suggestions?

Comment by jpaulson on Reference Frames for Expected Value · 2014-03-16T20:05:24.086Z · LW · GW

Say the player thought that they were likely to win the lottery, and that it was therefore a good purchase. This may seem insane to someone familiar with probability and the lottery system, but not everyone is familiar with these things.

I would say this person made a good decision with bad information.

Perhaps we should attempt to stop placing so much emphasis on individualism and just try to do the best we can while not judging others nor other decisions much.

There are lots of times when it's important to judge people e.g. for hiring or performance reviews.

Comment by jpaulson on Is my view contrarian? · 2014-03-13T05:09:59.150Z · LW · GW

The pervasive influence of money in politics sort of functions as a proxy of this. YMMV for whether it's a good thing...

Comment by jpaulson on Is my view contrarian? · 2014-03-13T05:04:43.508Z · LW · GW

Doesn't "contrarian" just mean "disagrees with the majority"? Any further logic-chopping seems pointless and defensive.

The fact that 98% of people are theists is evidence against atheism. I'm perfectly happy to admit this. I think there is other, stronger evidence for atheism, but the contrarian heuristic definitely argues for belief in God.

Similarly, believing that cryonics is a good investment is obviously contrarian. AGI is harder to say; most people probably haven't thought about it.

It seems like the question you're really trying to answer is "what is a good prior belief for things I am not an expert on?"

(I'm sorry about arguing over terminology, which is usually stupid, but this case seems egregious to me.)

Comment by jpaulson on A defense of Senexism (Deathism) · 2014-02-16T19:09:20.243Z · LW · GW

Most of your post is not arguments against curing death.

People being risk-averse has nothing to do with anti-aging research and everything to do with individuals not wanting to die...which has always been true (and becomes more true as life expectancy rises and the "average life" becomes more valuable). The same is true for "we should risk more lives for science".

I agree that people adapt OK to death, but I think you're attacking a strawman; the reason death is bad is that it kills you, not that it makes your friends sad.

I think "death increases diversity" is a good argument. On the other hand, most people who present that argument are thrilled that life expectancy has increased to ~70 from ~30 in ancient history. Why stop at 70?

Comment by jpaulson on A defense of Senexism (Deathism) · 2014-02-16T19:01:13.233Z · LW · GW

The problem of "old people will be close-minded and it will be harder for new ideas to gain a foothold" seems pretty inherent in abolishing death, and not just an implementation detail we can work around.

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-04T08:24:55.687Z · LW · GW

Yeah, this is a priority for me. My plan is to stick my money in a few mutual funds and forget about it for 40 years. Hopefully the economy will grow in that time :)

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-03T05:14:07.146Z · LW · GW

OK, I believe there is conflicting research. There usually is. And as usual, I don't know what to make of it, except that the preponderance of search hits supports $75k as satisficing. shrug

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-03T04:46:26.694Z · LW · GW

I think I saw that on LessWrong quite recently. That study is trying to refute the claim that income satisficing happens at ~$20k (and is mostly focused on countries rather than individuals). $20k << $75k.

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-03T04:16:44.267Z · LW · GW

I was pretty into Magic in high school but haven't played at all since, so thanks for that bit of nostalgia :)

I just discovered Hearthstone this weekend (Blizzard's simplified clone of Magic), which is pretty good (and online, and free).

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-03T04:13:46.550Z · LW · GW

Do you have ideas for durable goods? My apartment has a laundry machine, and I can't think of a piece of furniture I would want.

Comment by jpaulson on How can I spend money to improve my life? · 2014-02-03T03:59:29.266Z · LW · GW

The studies on income satisficing (past 75k, more money doesn't correlate with more happiness) certainly suggest that this is true.

But I'm still hoping it's not, and most people just haven't figured out how to buy happiness efficiently. Seems worth trying, at any rate.

Comment by jpaulson on Why I haven't signed up for cryonics · 2014-02-02T09:01:36.162Z · LW · GW

Pratchett's donation appears to account for 1.5 months of the British funding towards Alzheimer's (numbers from, math from me). Which is great and all, but public funding is way better. So I stand by my claim.

Comment by jpaulson on Productivity as a function of ability in theoretical fields · 2014-01-28T07:14:29.595Z · LW · GW

Reading this reminded me of Terence Tao's blog post about how you don't have to be a genius to do math.

I think you are severely oversimplifying "intelligence" and "productivity" into 1-dimensional quantities. In my experience, "genius" (i.e. the insight that solves a problem) is about acquiring a bag of tricks to throw at new problems, and translating your insight into a solution is the result of practice.

Comment by jpaulson on Continuity in Uploading · 2014-01-25T02:35:39.931Z · LW · GW

1) 1.0
2) 1.0
3) 1.0
4) It depends on the artificial substitutes :) If they faithfully replicate brain function (whatever that means), 1.0
5) Again, if the process is faithful, 1.0
6) It really depends. For example, if you drop all my memories, 0.0. If you keep an electronic copy of my brain on the same network as several other brains, 1.0. In between: in-between.

(Yes, I know 1.0 probabilities are silly. I don't have enough sig-figs of accuracy for the true value :)

Comment by jpaulson on Continuity in Uploading · 2014-01-24T08:13:54.998Z · LW · GW

Because I want to be alive. I don't just want humanity to have the benefit of my skills and knowledge.

Comment by jpaulson on Continuity in Uploading · 2014-01-24T08:11:38.894Z · LW · GW

You are dodging the question by appealing to the dictionary. The dictionary will not settle for you whether identity is tied to your body, which is the issue at hand (not "whether your body dies as a result of copying-then-death", which, as you point out, is trivial).

Comment by jpaulson on Continuity in Uploading · 2014-01-24T08:05:01.612Z · LW · GW

Under one interpretation I wake up in the other room. In the other I do not - it is some other doppelgänger which shares my memories but whose experiences I do not get to have.

I don't understand how to distinguish "the clone is you" from "the clone is a copy of you". Those seem like identical statements, in that the world where you continue living and the world where the clone replaces you are identical, atom for atom. Do you disagree? Or do you think there can be a distinction between identical worlds? If so, what is it?

He isn't me. He is a separate person that just happens to share all of the same memories and motivations that I have.

In the same sense, future-you isn't you either. But you are willing to expend resources for future-you. What is the distinction?

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-18T04:29:24.430Z · LW · GW

I don't have a copy handy. I distinctly remember this claim, though. This purports to be a quote from near the end of the book.

"Will there be chess programs that can beat anyone?" "No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players."

Comment by jpaulson on Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome · 2014-01-17T08:21:01.719Z · LW · GW

I don't think lack of access to training or facilities is the reason

Assume that 1% of people could become good programmers. If we trained (or offered training to) 10x as many people, we would still end up with 10x as many programmers.

I grew up with computers in my home; I had a programmable calculator in middle school; my high school offered programming courses; my family could pay for me to go to a very strong CS university. Not everyone has those opportunities.

Comment by jpaulson on Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome · 2014-01-17T08:17:53.319Z · LW · GW

individuals' capability to produce value differs GREATLY

I have heard this claim repeated many times; I would love to see some evidence for it.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-17T08:11:21.308Z · LW · GW

Using Moore's law we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a million times.

You are extrapolating Moore's law out almost as far as it's been in existence!
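For what it's worth, the quoted numbers do follow from a doubling period of roughly 1.7 years (an assumed figure; the quoted comment doesn't state one), since 2^10 ≈ 1000:

```haskell
-- Years needed to multiply computing power by 'factor', assuming a fixed
-- doubling period (in years). The 1.7-year period is an assumption.
yearsToFactor :: Double -> Double -> Double
yearsToFactor doublingPeriod factor = doublingPeriod * logBase 2 factor

-- yearsToFactor 1.7 1000     ~ 17 years for a thousandfold increase
-- yearsToFactor 1.7 1000000  ~ 34 years for a millionfold increase
```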

We could make it a million times more efficient if we trim the fat and keep the essence.

It's nice to think that, but no one understands the brain well enough to make claims like that yet.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-17T08:06:29.573Z · LW · GW

Is it necessary that we understand how intelligence works for us to know how to build it? This may almost be a philosophical question.

This is definitely an empirical question. I hope it will be settled "relatively soon" in the affirmative by brain emulation.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-17T08:05:20.390Z · LW · GW

I think I remember one particular prominent intellectual who, decades ago, essentially declared that when chess could be played better by a computer than a human, the problem of AI would be solved.

Hofstadter, in Gödel, Escher, Bach?

Maybe you're one of those Cartesian dualists who thinks humans have souls that don't exist in physical reality and that's how they do their thinking

Not at all. Brains are complicated, not magic. But complicated is bad enough.

Would you consider the output of a regression a black box?

In the sense that we don't understand why the coefficients make sense; the only way to get that output is to feed a lot of data into the machine and see what comes out. It's the difference between being able to make predictions and understanding what's going on (e.g. compare epicycle astronomy with the Copernican model: equally good predictions, but one sheds better light on what's happening).
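To illustrate the regression point, here's a one-variable least-squares fit (a generic textbook formula, not any particular library's): the coefficients simply fall out of the data, and nothing in the procedure explains why they take the values they do.

```haskell
-- Ordinary least squares for y = slope * x + intercept.
-- The fitted coefficients are "read off" the data; the procedure
-- itself offers no account of why they are what they are.
fitLine :: [(Double, Double)] -> (Double, Double)  -- (slope, intercept)
fitLine pts = (slope, intercept)
  where
    n         = fromIntegral (length pts)
    mx        = sum (map fst pts) / n
    my        = sum (map snd pts) / n
    slope     = sum [ (x - mx) * (y - my) | (x, y) <- pts ]
              / sum [ (x - mx) * (x - mx) | (x, _) <- pts ]
    intercept = my - slope * mx
```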

What's your machine learning background like, by the way?

One semester graduate course a few years ago.

It seems like you are counting it as a point against chess programs that we know exactly how they work, and a point against Watson that we don't know exactly how it works.

The goal is to understand intelligence. We know that chess programs aren't intelligent; the state space is just luckily small enough to brute force. Watson might be "intelligent", but we don't know. We need programs that are intelligent and that we understand.

My impression is that many, if not most, experts in AI see human intelligence as essentially algorithmic and see the field of AI as making slow progress towards something like human intelligence

I agree. My point is that there isn't likely to be a simple "intelligence algorithm". All the people like Hofstadter who've looked for one have been floundering for decades, and all the progress has been made by forgetting about "intelligence" and carving out smaller areas.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-17T07:52:37.321Z · LW · GW

1) I expect the first AI we see to have human-level thought but run ~100x slower than you or I. Moore's law will probably run out sooner than we get AI, and these days Moore's law is giving us more cores, not faster ones.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T08:59:02.442Z · LW · GW

In understanding how intelligence works? No.

Deep Blue just brute forces the game tree (more-or-less). Obviously, this is not at all how humans play chess. Deep Blue's evaluation for a specific position is more "intelligent", but it's just hard-coded by the programmers. Deep Blue didn't think of it.
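The "brute force" described here is essentially minimax search with a hand-written evaluation at the leaves (a generic sketch, not Deep Blue's actual code, which also used alpha-beta pruning and many other refinements):

```haskell
-- Generic minimax over a game tree: exhaustively search, then back up
-- leaf evaluations. The "intelligence" lives in the hard-coded leaf
-- values, exactly as the comment says of Deep Blue's evaluation function.
data GameTree = Leaf Double       -- static evaluation of a position
              | Node [GameTree]   -- positions reachable in one move

minimax :: Bool -> GameTree -> Double
minimax _          (Leaf v)  = v
minimax maximizing (Node ts) =
  (if maximizing then maximum else minimum)
    (map (minimax (not maximizing)) ts)
```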

Watson can "read", which is pretty cool. But:

1) It doesn't read very well. It can't even parse English. It just looks for concepts near each other, and it turns out that the vast quantities of data override how terrible it is at reading.

2) We don't really understand how Watson works. The output of a machine-learning algorithm is basically a black box. ("How does Watson think when it answers a question?")

There are impressive results which look like intelligence, which are improving incrementally over time. There is no progress towards an efficient "intelligence algorithm", or "understanding how intelligence works".

Comment by jpaulson on Results from MIRI's December workshop · 2014-01-16T08:47:41.948Z · LW · GW

Sure; bet on mathematical conjectures, and collect when they are resolved one way or the other.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T08:39:42.866Z · LW · GW

Evolution moves incrementally and it's likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn't happen to discover for whatever reason.

Maybe, but that doesn't mean we can find them. Brain emulation and machine learning seem like the most viable approaches, and they both require tons of distributed computing power.

Comment by jpaulson on Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome · 2014-01-16T08:25:39.219Z · LW · GW

It turns out that it's not clear this is actually true—some studies have found more money leads to greater happiness up through the highest income levels examined.

The "highest income levels examined" -- based on the chart on that page -- appear to be 128k/yr. Since the income satisficing level (for an unattached individual) is ~75k, this doesn't seem like good evidence one way or another.

Comment by jpaulson on Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome · 2014-01-16T08:19:46.953Z · LW · GW

Counterpoint: spending 40 hours a week on your job is a huge time commitment. It's also a huge willpower drain (doing worthwhile things requires effort to grind out results, not just time). It's hard for me to believe that the hours money allows you to "buy back" are worth the "wasted hours" on work. So it's important that work not be a waste.

Also, you will probably make more money and do more worthwhile things at a job you enjoy.

Money is definitely a big factor, but I don't think it totally dominates everything else.

Comment by jpaulson on Why I haven't signed up for cryonics · 2014-01-16T07:29:20.021Z · LW · GW

No one is working on cryonics because there's no money/interest because no one is signed up for cryonics. Probably the "easiest" way to solve this problem is to convince the general public that cryonics is a good idea. Then someone will care about making it better.

Some rich patron funding it all sounds good, but I can't think of a recent example where one person funded a significant R&D advance in any field.

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T07:09:08.275Z · LW · GW

I'm happy to be both "borderline tautological" and in disagreement with the prevailing thought around here :)

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T06:00:16.643Z · LW · GW

1) You are right; that was tangential and unclear. I have edited my OP to omit this point.

2) It's evidence that it will take a while.

3) Real-time access to neurons is probably useless; they are changing too quickly (and they are changing in response to your effort to introspect).

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T05:57:34.131Z · LW · GW

I agree. My point is merely that super-human intelligence will probably not appear as a sudden surprise.

EDIT: I changed my OP to better reflect what I wanted to say. Thanks!

Comment by jpaulson on The first AI probably won't be very smart · 2014-01-16T05:53:19.990Z · LW · GW

A bold claim, since no one understands "the algorithms used by the brain". People have been trying to "understand how intelligence works" for decades with no appreciable progress; all of the algorithms that look "intelligent" (Deep Blue, Watson, industrial-strength machine learning) require massive computing power.

Comment by jpaulson on Another Critique of Effective Altruism · 2014-01-08T08:18:55.454Z · LW · GW

(This comment is on career stuff, which is tangential to your main points)

I recently had to pick a computer science job, and spent a long time agonizing over what would have the highest impact (among other criteria). I'm not convinced startups or academia have a higher expected value than large companies. I would like to be convinced otherwise.

(Software) Startups:

1) Most startups fail. It's easy to underestimate this because you only hear the success stories.

2) Many startups are not solving "important" problems. They are solving relatively minor problems for relatively rich people, because that's where the money is. Snapchat, Twitter, Facebook, Instagram are examples.

3) Serious problems are complicated, and usually require more resources than a startup can bring to bear.

4) Financially: If you aren't a founder, your share of the company is negligible.

(Computer Science) Academia:

1) My understanding is that there are dozens of applications for each tenure-track opening. So your chance of success is low, and your marginal advantage over the next-best applicant is probably low.

2) I trust markets more than grant committees for distributing money.

3) It seems easier to get sidetracked into non-useful work in academia.

Comment by jpaulson on Welcome to Less Wrong! (July 2012) · 2013-01-18T02:09:32.545Z · LW · GW

You seem to be making a fully general argument against action.

Comment by jpaulson on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2013-01-17T19:13:03.659Z · LW · GW

1) Evidence/Reasoning?

6) Evidence/Reasoning?

8) I thought the idea was that if it were discovered a year later, people were going to assume Bellatrix had died in her cell. This requires the death doll to decay, which might be implausible.

Comment by jpaulson on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2013-01-17T05:41:54.107Z · LW · GW

(Long-time lurker; first post)

Some points from earlier chapters remain unclear to me; any insights would be appreciated.

1) Why did Neville's remembrall go off so vividly in Harry's hands? Also, how are there now two remembralls?

2) Do we have any more information/guesses about Trelawney's prophecy that Dumbledore cut off? What starts with 'S'?

3) Who told Harry to look for Hermione on the train? The writing is ambiguous, and it's not really clear why McGonagall would've wanted them to meet. I guess other theories are worse, though.

4) What's up with Harry's father's rock? Just a way for Dumbledore to encourage Harry to practice transfiguration?

5) Why are we so sure Dumbledore burned a chicken (or transfigured something)? His explanation makes total sense, and Harry's confusion at the time is well-explained by his lack of familiarity with phoenixes. It seems more reasonable to assume almost-burned-out phoenixes look like chickens than...whatever the alternative is.

6) Who is saying "I'm not serious" in Azkaban?

7) Is the "terrible secret" of Lily's potion book really that Snape and Lily fought about it? That just seems like a bizarre reason for a friendship to end. Were Dumbledore's suggestions incorporated into the potion Petunia took?

8) Why did Quirrell leave a polyjuice potion in Bellatrix's cell? (especially since the crime was meant to go unnoticed)