Posts

Negative "eeny meeny miny moe" 2019-08-20T02:48:41.509Z · score: 22 (8 votes)
Automated Nomic Game 2 2019-02-05T22:11:13.914Z · score: 20 (9 votes)
Boston Solstice 2018 Retrospective 2018-12-23T20:04:40.244Z · score: 46 (10 votes)
Interpreting genetic testing 2018-12-15T15:56:57.339Z · score: 25 (8 votes)
Boston Secular Solstice 2018-12-10T01:59:24.756Z · score: 11 (3 votes)
Boston Solstice 2018 2018-10-28T20:37:47.679Z · score: 28 (8 votes)
How to parent more predictably 2018-07-10T15:18:33.660Z · score: 69 (33 votes)
Futarchy and Unfriendly AI 2015-04-03T21:45:41.157Z · score: 13 (12 votes)
We Haven't Uploaded Worms 2014-12-27T11:44:45.411Z · score: 92 (94 votes)
Happiness Logging: One Year In 2014-10-09T19:24:15.861Z · score: 15 (15 votes)
Persistent Idealism 2014-08-26T01:38:20.167Z · score: 11 (12 votes)
Conservation of Expected Jury Probability 2014-08-22T15:25:34.102Z · score: 10 (13 votes)
Relative and Absolute Benefit 2014-06-18T13:56:19.437Z · score: 12 (15 votes)
Questioning and Respect 2014-06-10T10:52:32.660Z · score: 20 (23 votes)
Cryonics As Untested Medical Procedure 2014-01-17T16:36:20.461Z · score: 16 (23 votes)
Be Skeptical of Correlational Studies 2013-11-20T22:19:28.281Z · score: 8 (11 votes)
Supplementing memory with experience sampling 2013-10-28T11:52:13.319Z · score: 13 (16 votes)
Is it immoral to have children? 2013-10-22T12:13:26.610Z · score: 17 (29 votes)
Does Checkers have simpler rules than Go? 2013-08-13T02:09:41.984Z · score: 14 (17 votes)
Valuing Sentience: Can They Suffer? 2013-07-29T12:39:04.481Z · score: 6 (11 votes)
The Argument From Marginal Cases 2013-07-26T13:30:17.215Z · score: 15 (21 votes)
Consumption Smoothing and Hedonic Adaptation 2013-07-19T14:41:47.699Z · score: 5 (6 votes)
Prioritizing Happiness 2013-07-06T16:01:20.723Z · score: 1 (10 votes)
Is our continued existence evidence that Mutually Assured Destruction worked? 2013-06-18T14:40:36.167Z · score: 9 (17 votes)
All-pay auction for charity? 2013-06-12T12:46:02.879Z · score: 5 (8 votes)
Weak evidence that eating vegetables makes you live longer 2013-06-10T13:09:21.691Z · score: 8 (11 votes)
The impact of whole brain emulation 2013-05-14T19:59:11.520Z · score: 3 (6 votes)
Keeping Choices Donation Neutral 2013-05-11T12:58:07.286Z · score: 19 (23 votes)
Antijargon Project 2013-05-05T17:26:33.879Z · score: 12 (13 votes)
Pay other people to go vegetarian for you? 2013-04-12T01:56:59.745Z · score: 12 (15 votes)
Taking Charity Seriously: Toby Ord talk on charity effectiveness 2013-04-10T00:59:48.204Z · score: 17 (18 votes)
What Rate of Return Should You Expect? 2013-04-07T13:40:41.264Z · score: 11 (16 votes)
The Unintuitive Power Laws of Giving 2013-04-02T02:10:03.624Z · score: 28 (31 votes)
Getting myself to eat vegetables 2013-03-07T01:48:54.065Z · score: 12 (25 votes)
Risks of Genetic Publicy 2013-03-03T17:30:40.409Z · score: 7 (9 votes)
When should you give to multiple charities? 2013-02-27T02:56:39.785Z · score: 7 (8 votes)
Offer: I'll match donations to the Against Malaria Foundation 2013-02-04T16:47:52.944Z · score: 17 (24 votes)
More Cryonics Probability Estimates 2012-12-17T20:59:49.000Z · score: 20 (23 votes)
Cryonics as Charity 2012-11-10T14:21:25.105Z · score: 3 (8 votes)
[LINK] The most important unsolved problems in ethics 2012-10-17T20:03:46.874Z · score: 20 (23 votes)
Online Optimal Philanthropy Meetup: Tue 10/9, 8pm ET 2012-10-06T13:35:42.346Z · score: 4 (5 votes)
Parenting and Happiness 2012-10-03T13:43:58.406Z · score: 20 (25 votes)
Magical Healing Powers 2012-08-12T03:19:50.062Z · score: 0 (19 votes)
Many-worlds implies the future matters more 2012-07-26T12:09:26.914Z · score: -7 (22 votes)
Value of a Computational Process? 2012-07-09T17:33:43.323Z · score: 3 (10 votes)
Boston MA: Optimal Philanthropy Meetup (July 6th) 2012-07-03T03:31:35.608Z · score: 4 (5 votes)
Hedonic vs Preference Utilitarianism in the Context of Wireheading 2012-06-29T13:50:48.261Z · score: 6 (9 votes)
Balancing Costs to Ourselves with Benefits to Others 2012-06-22T22:31:55.112Z · score: 4 (7 votes)
Altruistic Kidney Donation 2012-06-17T20:02:25.254Z · score: 21 (25 votes)
Debate between 80,000 hours and a socialist 2012-06-07T13:30:01.258Z · score: 5 (42 votes)

Comments

Comment by jkaufman on Dialogue on Appeals to Consequences · 2019-07-19T20:37:25.267Z · score: 10 (2 votes) · LW · GW

It sounds to me like Jessica is using "appeal to consequences" expansively, to cover not just "X has bad consequences so you should not believe X" but also "saying X has bad consequences so you should not say X"?

Comment by jkaufman on Dialogue on Appeals to Consequences · 2019-07-19T16:55:50.955Z · score: 22 (7 votes) · LW · GW

My main objection is that the post is built around a case where Quinn is very wrong in their initial "bad consequences" claim, and that this leads people to have misleading intuitions. I was trying to propose an alternative situation where the "bad consequences" claim was true or closer to true, but where Quinn would still be wrong to suggest Carter shouldn't describe what they'd found.

(Also, for what it's worth, I find the Quinn character's argumentative approach very frustrating to read. This makes it hard to take anything that character describes seriously.)

Comment by jkaufman on Dialogue on Appeals to Consequences · 2019-07-19T11:18:36.964Z · score: 40 (11 votes) · LW · GW

The motivating example for this post is whether you should say "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies", with Quinn arguing that you shouldn't say it because saying it has bad consequences. The problem is, saying this has very clearly good consequences, which means trying to use it as a tool for figuring out what you think of appeals to consequences sets up your intuitions to confuse you.

(It has clearly good consequences because "how much money goes to PADP right now" is far less important than "building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones". Plus if, say, Animal Charity Evaluators trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, then that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you can't do this if you declare discussing negative information about organizations off limits.)

The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered. As a person who's strongly pro-transparency, it's hard for me to come up with cases, but there are still contexts where I think it's probably true. What if Carter were a researcher who had run a small study on a new infant vaccine and seen elevated autism rates in the experimental group? There's an existing "vaccines cause autism" meme that is both very probably wrong and very probably harmful, which means Carter should be careful about how they communicate their results. Good potential outcomes include:

  • Carter's experiment is replicated, confirmed, and the vaccine is not rolled out.

  • Carter's experiment fails to replicate, researchers look into it more, and discover that there was a problem in the initial experiment / in the replication / they need more data / etc

Bad potential outcomes include:

  • Headlines that say "scientists finally admit vaccines do cause autism"

Because of the potential harmful consequences of handling this poorly, Carter should be careful about how they talk about their results and to whom. Trying to get funding to scale up the experiment, making sure the FDA is aware, letting other researchers know, etc. are all beneficial and have good consequences. Going to the mainstream media with a controversial sell-lots-of-papers story, by contrast, would have predictably bad consequences.

When talking with friends or within your field it's hard to think of cases where you shouldn't just say the interesting thing you've found, while with larger audiences and in less truth-oriented cultures you need to start being more careful.

EDIT: expanded this into https://www.jefftk.com/p/appeals-to-consequences

Comment by jkaufman on We Haven't Uploaded Worms · 2019-07-15T17:50:26.959Z · score: 8 (4 votes) · LW · GW

I just tried https://scholar.google.com/scholar?as_ylo=2015&q=c+elegans+emulation and don't see anything relevant. I did find Why is There No Successful Whole Brain Simulation (Yet)? from 2019. While I've only skimmed it and its reference list, if there had been something new here I think they would have cited it.

I think we're still stuck on both (a) we can't read weights from real worms (and so can only model a generic worm) and (b) we don't understand how weights are changed in real worms (and so can't model learning).

Comment by jkaufman on Ask and Guess · 2019-06-22T18:08:34.139Z · score: 6 (3 votes) · LW · GW

On one occasion I had to explicitly explain to a friend that, for her purposes, it was best to assume that the last piece of chicken was simply unavailable to be eaten, ever, by anyone

Thinking about how this works in my household, I realized why this doesn't come up: if there is a last piece of chicken then the host has made a mistake. In my culture there should always be enough food that everyone feels comfortable eating as much as they would enjoy, without worrying that this will limit others' consumption. The host cooks sufficient food to ensure this, with the expectation that there will be leftovers. And then leftovers provide lunches, and occasionally dinners if they accumulate sufficiently. Of course this requires being rich enough that everyone can have what they want, but (a) food is much cheaper relative to the rest of life than it used to be and (b) if the cost would be an issue you deal with this by having larger quantities of cheaper food.

On the rare occasions when the host miscalculates, because extra people showed up, people ate more than expected, or something else, my culture's general askiness means we talk about it pretty explicitly ("who else would like more chicken?") and generally divide what's left equally among everyone who wants it.

Comment by jkaufman on Drowning children are rare · 2019-05-29T20:17:51.557Z · score: 34 (10 votes) · LW · GW

I wrote a response to this here: https://www.jefftk.com/p/theres-lots-more-to-do

Comment by jkaufman on Drowning children are rare · 2019-05-29T17:00:04.933Z · score: 7 (4 votes) · LW · GW

I no longer believe such arbitrage is reliably available

Do you not believe GiveDirectly represents this kind of arbitrage?

Comment by jkaufman on Automated Nomic Game 2 · 2019-02-06T00:36:21.888Z · score: 4 (3 votes) · LW · GW

People who would like to join are welcome to send a PR

Comment by jkaufman on Act of Charity · 2019-01-28T03:59:33.614Z · score: 21 (5 votes) · LW · GW
This belief is partially due to private info I have (will PM some details)

The first part of this private info turned out to be a rumor about the way an ex-employee was treated. I checked with the person in question, and they disconfirmed the rumor.

The remainder was recommendations to speak with specific people, which I may manage to do at some point, and links to public blog posts.

Comment by jkaufman on Act of Charity · 2019-01-24T18:59:21.613Z · score: 6 (3 votes) · LW · GW
they would work themselves out of a job and actually have data that their nets break down after four or five years

This is minor, but GiveWell already says "Our best guess is that [nets] last, on average, between 2 and 2.5 years." (https://www.givewell.org/charities/amf)

Comment by jkaufman on Why Don't Creators Switch to their Own Platforms? · 2018-12-23T12:52:18.762Z · score: 6 (4 votes) · LW · GW

Yes, there are costs to building your own platform, especially for video, but my guess is traffic is the main limitation. YouTube, Facebook, etc are trying to find things to put in front of people to entertain them, and if you can do well at this you can have an enormous audience. Streaming video from your own website to fans who care enough to seek it out gives you freedom but adds too much friction to seeing your stuff.

Comment by jkaufman on Act of Charity · 2018-12-19T21:50:46.374Z · score: 14 (5 votes) · LW · GW
Carl: "Why don't you just run a more effective charity, and advertise on that? Then you can outcompete the other charities."
Worker: "That's not fashionable anymore. The 'effectiveness' branding has been tried before; donors are tired of it by now. Perhaps this is partially because there aren't functional systems that actually check which organizations are effective and which aren't, so scam charities branding themselves as effective end up outcompeting the actually effective ones. And there are organizations claiming to evaluate charities' effectiveness, but they've largely also become scams by now, for exactly the same reasons. The fashionable branding now is environmentalism."

I'm confused about this part. Are you saying GiveWell is a scam?

Comment by jkaufman on East Coast Rationalist Megameetup 2018 · 2018-12-11T17:31:52.076Z · score: 4 (2 votes) · LW · GW

I think the first one was 2011, and there were several others in NYC before 2014.

Comment by jkaufman on Boston Solstice 2018 · 2018-10-29T11:48:10.471Z · score: 2 (3 votes) · LW · GW

Whoops! I meant to delete the second one after I found the first (which is much better for learning from).

Comment by jkaufman on Boston Solstice 2018 · 2018-10-29T00:02:51.683Z · score: 6 (3 votes) · LW · GW

Nice! Your version has the guitar a bit too loud for easy melody learning; here are a couple of other straightforward versions I found:

https://www.youtube.com/watch?v=t-R-wXOwOvM

https://www.youtube.com/watch?v=5KDnbNZKAKw

Comment by jkaufman on Boston Solstice 2018 · 2018-10-28T21:34:36.700Z · score: 4 (2 votes) · LW · GW

I'm interested in song suggestions if people have them. Here are some of the ones I'm thinking of, but we probably want about six more?

  • Uplift
  • Why Does the Sun Shine? (TMBG)
  • Her Mysteries
  • We Will All Go Together When We Go
  • When I Die
  • Somebody Will
  • Thanksgiving Eve
  • Jewel in the Night
  • There But For Fortune
  • Legends (Julia Ecklar)
  • Hymn to Breaking Strain
  • Brighter than Today
  • Lean on Me
  • Old Devil Time

Also interested in suggestions for readings!

Comment by jkaufman on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2018-10-26T17:57:21.583Z · score: 6 (3 votes) · LW · GW

Well, you could compute it by adding an upvote and seeing how much it changed. Not that it matters now that we're on LW 2.0 and everything is different...

Comment by jkaufman on Terrorism, Tylenol, and dangerous information · 2018-07-09T00:19:25.861Z · score: 6 (3 votes) · LW · GW

I hope that I would not have published Debt of Honor.

There have been an enormous number of books, movies, etc with various forms of realistic plots. Are you saying this genre shouldn't exist, that authors should make sure their plots are not realistic, or that there's something unusual about this plot in particular that should have kept Clancy from publishing?

Comment by jkaufman on Goodhart Taxonomy · 2018-01-03T16:13:36.186Z · score: 19 (5 votes) · LW · GW
Many N.B.A. hopefuls exaggerate their height while in high school or college to make themselves more appealing to coaches and scouts who prefer taller players. Collins, for example, remembers the exact day he picked to experience a growth spurt.
"Media day, my junior year," Collins, a Stanford graduate, said. "I told our sports information guy that I wanted to be 7 feet, and it's been 7 feet ever since."

And:

Victor Dolan, head of the chiropractic division at Doctors' Hospital in Staten Island, said players could increase their height by being measured early in the morning, because vertebrae become compressed as the day progresses. A little upside-down stretching does not hurt, either.
"If you get measured on an inversion machine, and do it when you first wake up, maybe you could squeeze out an extra inch and a half," Dolan said.

-- http://www.nytimes.com/2003/06/15/sports/basketball/tall-tales-in-nba-dont-fool-players.html

Comment by jkaufman on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-16T17:45:57.932Z · score: 7 (2 votes) · LW · GW

Ah, the problem is the link only appears if you hover over it. Filed a bug: https://github.com/Discordius/Lesswrong2/issues/289

Comment by jkaufman on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-16T16:38:18.431Z · score: 5 (2 votes) · LW · GW

You also need to account for taxes; for them to get $66k after tax you probably need to pay $100k pre tax.
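To make the gross-up concrete, here's a minimal Python sketch; the 34% combined effective rate is an assumption picked to match the numbers above, not a claim about any particular tax situation:

```python
# Assumed combined effective tax rate, for illustration only.
EFFECTIVE_TAX_RATE = 0.34

def pretax_needed(after_tax: float, rate: float = EFFECTIVE_TAX_RATE) -> float:
    """Gross up an after-tax target to the pre-tax amount that funds it."""
    return after_tax / (1 - rate)

print(round(pretax_needed(66_000)))  # -> 100000
```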

Comment by jkaufman on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-04T21:10:02.848Z · score: 11 (3 votes) · LW · GW
Insufficient archiving of this incident, due to an underappreciation of the value in doing so. While writing this I had a hard time actually finding things I had previously read. Googling "Gleb Tsipursky lesswrong" doesn’t return any direct accounts of bad behaviour, you have to go digging for it. The right to be forgotten has its merits, but it isn’t meant to be applied when people are still doing the thing that got them in trouble in the first place.

I think maybe Concerns With Intentional Insights is what you were looking for?

Comment by jkaufman on Effective altruism is self-recommending · 2017-04-28T21:05:48.488Z · score: 1 (1 votes) · LW · GW

Is this what you were remembering? https://www.effectivealtruism.org/articles/march-2017-ea-newsletter/

It looks pretty balanced to me.

Comment by jkaufman on I Want To Live In A Baugruppe · 2017-03-18T15:29:14.761Z · score: 7 (7 votes) · LW · GW

Rationalists don't all like group houses, but compared to the rest of the population they disproportionately like them. There have been several in-person meetup groups that have started houses, and these have generally gone pretty well. (Ex: Citadel in Boston)

Comment by jkaufman on How Much Evidence Does It Take? · 2017-03-18T14:38:50.083Z · score: 5 (1 votes) · LW · GW

Running "1000 experiments" if you don't have to publish negative results, can mean just slicing data until you find something. Someone with a large data set can just do this 100% of the time.

A replication is more informative, because it's not subject to nearly as much "find something new and publish it" bias.

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:59:54.556Z · score: 1 (1 votes) · LW · GW

Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.

Chalmers wrote a big book, not all of which is available through free Google preview. I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:59:47.016Z · score: 1 (1 votes) · LW · GW

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.

So that's the unusual way in which I tend to think about these things. And now I look back at Chalmers:

The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.

But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.

This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:59:29.185Z · score: 1 (1 votes) · LW · GW

... (Argument from career impact is not valid, but I say it to leave a line of retreat.)

Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage. "I have a dragon in my garage." Great! I want to see it, let's go! "You can't see it—it's an invisible dragon." Oh, I'd like to hear it then. "Sorry, it's an inaudible dragon." I'd like to measure its carbon dioxide output. "It doesn't breathe." I'll toss a bag of flour into the air, to outline its form. "The dragon is permeable to flour."

One motive for trying to make your theory unfalsifiable, is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (neurologist) are substance dualists; they think that there is something mysterious going on in quantum, that Everett is wrong and that the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am." Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

—but Penrose's basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.

As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded. Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

I don't think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.

(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:59:08.256Z · score: 1 (1 votes) · LW · GW

The zombie argument does not rest solely on the intuition of the passive listener. If this was all there was to the zombie argument, it would be dead by now, I think. The intuition that the "listener" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:58:58.726Z · score: 1 (1 votes) · LW · GW

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".

And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin. The zombie twin also has an internal narrative about "consciousness", that a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:58:48.321Z · score: 1 (1 votes) · LW · GW

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible". Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.

Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.

So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there. It's like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense, is a very great leap. It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.
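For the small example above, the model-finding can be checked by brute force; here's a minimal Python sketch of my own, reading "X -> Y" as material implication, i.e. (not X) or Y:

```python
from itertools import product

# The three statements from the example: (A -> B), (B -> C), (C -> ~A).
def satisfies(a: bool, b: bool, c: bool) -> bool:
    return ((not a) or b) and ((not b) or c) and ((not c) or (not a))

# Enumerate all 2^3 assignments and keep the ones that are models.
models = [v for v in product([False, True], repeat=3) if satisfies(*v)]
for a, b, c in models:
    print(f"A={int(a)} B={int(b)} C={int(c)}")
# Prints the three models quoted above:
#   A=0 B=0 C=0
#   A=0 B=0 C=1
#   A=0 B=1 C=1
```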

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:58:36.380Z · score: 1 (1 votes) · LW · GW

Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:58:28.141Z · score: 1 (1 votes) · LW · GW

(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers.)

Comment by jkaufman on Zombies Redacted · 2016-07-12T19:58:08.745Z · score: 1 (1 votes) · LW · GW

I was curious about the diff, specifically what sections were being removed. This is too long for a comment, so I'll post each one as a reply to this comment.

Comment by jkaufman on The Valentine’s Day Gift That Saves Lives · 2016-05-18T11:44:34.172Z · score: 1 (1 votes) · LW · GW

"Paid likes" is a specific practice, one that we've never engaged in

Sorry, yes, you're interpreting my use of "paid likes" as being a very specific thing, and I mean it differently. Specifically, I'm talking about accounts that (a) click like and (b) are operated by someone who received money from InIn and (c) wouldn't have done (a) without (b).

Comment by jkaufman on The Valentine’s Day Gift That Saves Lives · 2016-05-18T11:39:32.477Z · score: 0 (0 votes) · LW · GW

They are the ones who most consistently like them. This is one reason we hired them to work for us.

You're saying that first they start liking all of your posts, then you reach out to them, and in many cases decide to hire them? The hiring doesn't come before the mass-liking?

Comment by jkaufman on The Valentine’s Day Gift That Saves Lives · 2016-05-17T16:25:42.567Z · score: 4 (4 votes) · LW · GW

I agree that it's not beneficial for community building, but here's what makes me think you have paid "followers":

Looking back over the past 12 posts on Intentional Insights, I see the following accounts consistently liking your posts:

These all look fake to me, but let's look at the last one because it's the weirdest. The most recent 19 posts are all re-shares of Intentional Insights posts or posts elsewhere by Gleb. Looking at the fb pages they "like" I see:

  • AlterNet (News/Media Website)
  • Nigerian Movies (Local Business)
  • Poise Hair Collection (Health/Beauty)
  • Bold F.aces (Public Figure)
  • Get Auto Loan (Automobiles and Parts)
  • Closeup (Product/Service)
  • Dr. Gleb Tsipursky (Writer)
  • Hero Lager (Food/Beverages)
  • EBook Korner Kafé (Book)
  • Intentional Insights (Non-Profit Organization)

Additionally, looking through the people who like typical Intentional Insights posts, they're from a wide range of third-world countries, with (as far as I can see) no one from richer countries. This also points to paid likes, since poor-country likes are cheaper than rich-country ones, and being popular only in third-world countries doesn't seem likely given your writing.

Is there some other explanation for this pattern? "Paid likes" is the only thing that seems plausible to me.

Comment by jkaufman on The Valentine’s Day Gift That Saves Lives · 2016-05-16T00:35:14.732Z · score: 4 (4 votes) · LW · GW

500 FB likes the first day it was posted

Reading through the Intentional Insights fb page [1] it looks to me like you're using paid likes? The "people" who liked those posts all look like fake accounts. While I can't see the specific accounts that 'liked' your TLYCS post, is that what you did there too? If so, getting 500 fb likes doesn't tell us that it was unusually good.

[1] https://www.facebook.com/intentionalinsights/

Comment by jkaufman on What makes buying insurance rational? · 2016-04-04T12:01:13.414Z · score: 1 (1 votes) · LW · GW

You can put up a bond instead: https://www.dmv.ca.gov/portal/dmv/detail/pubs/brochures/fast_facts/ffvr18

Comment by jkaufman on What makes buying insurance rational? · 2016-04-04T11:59:25.310Z · score: 2 (2 votes) · LW · GW

In the US, some kinds of insurance are really collective bargaining. Dental and vision usually aren't, but this is a reason to get health insurance even if you could afford to self-insure.

Comment by jkaufman on Altruistic Kidney Donation · 2016-03-17T02:19:53.326Z · score: 0 (0 votes) · LW · GW

I don't see how you can count the benefit of all donations in a chain within the donation of the first donor.

If you're trying to compare actions, you should say "how will the world be if I do A instead of B". If you think the chain truly wouldn't have happened if you hadn't decided to donate your kidney then the benefit of all of those people receiving kidneys happens in the world where you donate, and not in the world where you keep your kidney.

Comment by jkaufman on The Valentine’s Day Gift That Saves Lives · 2016-02-02T14:58:14.561Z · score: 0 (0 votes) · LW · GW

Audience matters? The TLYCS blog is very different from LW.

Comment by jkaufman on Altruistic Kidney Donation · 2016-01-11T15:53:40.766Z · score: 0 (0 votes) · LW · GW

Which of the youtube comments are you referring to? There are a bunch of them (and none of them jumped out at me as an incredible analysis, but I was just skimming).

Comment by jkaufman on Machine learning and unintended consequences · 2015-12-25T15:34:26.278Z · score: 0 (0 votes) · LW · GW

Expanded my comments into a post: http://www.jefftk.com/p/detecting-tanks

Comment by jkaufman on Machine learning and unintended consequences · 2015-12-24T15:27:04.606Z · score: 0 (0 votes) · LW · GW

Except "November Fort Carson RSTA Data Collection Final Report" was released in 1994 covering data collection from 1993, but the parable was described in 1992 in the "What Artificial Experts Can and Cannot Do" paper.

Comment by jkaufman on Machine learning and unintended consequences · 2015-12-24T15:22:10.256Z · score: 0 (1 votes) · LW · GW

Here's the full version of "What Artificial Experts Can and Cannot Do" (1992): http://www.jefftk.com/dreyfus92.pdf It has:

... consider the legend of one of connectionism's first applications. In the early days of the perceptron ...

Comment by jkaufman on Machine learning and unintended consequences · 2015-12-24T15:10:50.785Z · score: 0 (0 votes) · LW · GW

There's also https://neil.fraser.name/writing/tank/ from 1998 which says the "story might be apocryphal", so by that point it sounds like it had been passed around a lot.

Comment by jkaufman on Machine learning and unintended consequences · 2015-12-24T15:09:26.310Z · score: 0 (0 votes) · LW · GW

But it also doesn't look like it's a version of this story. That section of the book is just a straight ahead "how to distinguish tanks" bit.

Comment by jkaufman on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-14T20:11:25.277Z · score: 4 (4 votes) · LW · GW

Instead, you select from a population which is as similar as possible to the treatment group

They did this with an earlier batch (I was part of that control group) and they haven't reported that data. I found this disappointing, and it makes me trust this round of data less.

On Sunday, Sep 8, 2013 Dan at CFAR wrote:

Last year, you took part in the first round of the Center for Applied Rationality's study on the benefits of learning rationality skills. As we explained then, there are two stages to the survey process: first an initial set of surveys in summer/fall 2012 (an online Rationality Survey for you to fill out about yourself, and a Friend Survey for your friends to fill out about you), and then a followup set of surveys one year later in 2013 when you (and your friends) would complete the surveys again so that we could see what has changed.

Comment by jkaufman on LessWrong 2.0 · 2015-12-03T20:17:02.756Z · score: 13 (13 votes) · LW · GW

LW's problem is the decline in quality, so the fix should be quality-oriented, not quantity-oriented.

I think it went the other way: demands for quality, rigor, and fully developed ideas made posting here unsatisfying (compared to the alternatives) for a lot of previously good posters.