Comment by Caspian on Making intentions concrete - Trigger-Action Planning · 2018-12-28T05:45:51.989Z · LW · GW

This initially seemed like it would still be very difficult to use.

I didn't find any easier descriptions of TAPs available on lesswrong for a long time after this was written, but I just had another look and found some more recent posts that suggested a practice step after planning the trigger-action pair.

For example, here:
What are Trigger-Action Plans (TAPs)?

You can either practise with the real trigger, or practise with visualising the trigger.

There's lots more about TAPs on lesswrong now that I haven't read yet, but the practice idea stood out as particularly important.

Comment by Caspian on Lesswrong 2016 Survey · 2016-03-29T03:13:44.697Z · LW · GW

I have taken the survey.

Comment by Caspian on Too good to be true · 2014-07-14T23:15:37.603Z · LW · GW

One mistake is treating 95% as the chance of the study indicating two-tailed coins, given that they were two-tailed coins. More likely it was meant as the chance of the study not indicating two-tailed coins, given that they were not two-tailed coins.

Try this:

You want to test if a coin is biased towards heads. You flip it 5 times, and consider 5 heads as a positive result, 4 heads or fewer as negative. You're aiming for 95% confidence, but the best available is 31/32 = 96.875%. Treating 4 heads as a positive result wouldn't work either, as that would get you less than 95% confidence.
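The cutoffs can be checked with a quick binomial calculation. This is just a sketch of the arithmetic above (5 flips of a fair coin, 95% target):

```python
from math import comb

outcomes = 2 ** 5  # 32 equally likely sequences for 5 fair-coin flips

# Strict test: only 5 heads counts as evidence of bias.
p_false_positive_strict = comb(5, 5) / outcomes       # 1/32
confidence_strict = 1 - p_false_positive_strict       # 31/32 = 0.96875

# Lenient test: 4 or 5 heads counts as evidence of bias.
p_false_positive_lenient = (comb(5, 4) + comb(5, 5)) / outcomes  # 6/32
confidence_lenient = 1 - p_false_positive_lenient                # 26/32 = 0.8125

print(confidence_strict, confidence_lenient)  # 0.96875 0.8125
```

So the strict test overshoots 95% and the lenient one falls well short, which is exactly the comment's point: with 5 flips you can't hit 95% on the nose.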

Comment by Caspian on [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality · 2014-06-28T02:09:00.473Z · LW · GW

If we're aggregating cooperation rather than aggregating values, we certainly can create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies, and that uses our own definition of noncooperation rather than what the Nazi values judge as noncooperation.

That's not to say you couldn't still find tricky example societies where the system evaluation isn't doing what we want, I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.

Comment by Caspian on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T03:22:14.604Z · LW · GW

Back up your data, people. It's so easy (if you've got a Mac, anyway).

Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.

Comment by Caspian on Measuring lethality in reduced expected heartbeats · 2014-01-05T01:07:32.003Z · LW · GW

I think there's an error in your calculations.

If someone smoked for 40 years and that reduced their life by 10 years, that 4:1 ratio translates to every 24 hours of being a smoker reducing lifespan by 6 hours (360 minutes). Assuming 40 cigarettes a day, that's 360/40 or 9 minutes per cigarette, pretty close to the 11 given earlier.
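The arithmetic, as a quick sketch (the 40-year, 10-year, and 40-a-day figures are the comment's own assumptions):

```python
years_smoked, years_lost = 40, 10

# 4:1 ratio: each day of smoking costs a quarter-day of lifespan.
minutes_lost_per_day = 24 * 60 * years_lost / years_smoked  # 360.0

cigarettes_per_day = 40
minutes_per_cigarette = minutes_lost_per_day / cigarettes_per_day

print(minutes_per_cigarette)  # 9.0
```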

Comment by Caspian on Open thread for December 9 - 16, 2013 · 2013-12-15T03:32:52.080Z · LW · GW

This story, where they treated and apparently cured someone's cancer, by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.

cancer treatment link

Comment by Caspian on LINK: AI Researcher Yann LeCun on AI function · 2013-12-15T00:37:27.972Z · LW · GW

Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.

Comment by Caspian on LINK: AI Researcher Yann LeCun on AI function · 2013-12-15T00:01:30.394Z · LW · GW

In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature"

But not predicting everything they do and exactly what they'll type.

Comment by Caspian on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-24T03:05:39.207Z · LW · GW

I believe that was part of the mistake, answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.

Comment by Caspian on "Stupid" questions thread · 2013-07-24T23:53:38.562Z · LW · GW

I expect part of it's based on status of course, but part of it could be that it would be much harder for a mugger to escape on a plane. No crowd of people standing up to blend into, and no easy exits.

Also on some trains you have seats facing each other, so people get used to deliberately avoiding each other's gaze (edit: I don't think I'm saying that quite right. They're looking away), which I think makes it feel both awkward and unsafe.

Comment by Caspian on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T23:18:31.652Z · LW · GW

Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.

You need to include inputs other than labour, and I think conventional economics allows for doing that.

Then the people who are less efficient than machines at converting the other inputs into products may become unemployed, if the machines are cheap enough.
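The Anti-FAQ's labour-only equilibrium can be reproduced directly. A sketch using only the figures quoted above (the function name is mine):

```python
def equilibrium_pairs(labour_units, cost_hot_dog, cost_bun):
    """Matched hot dog/bun pairs producible when all labour goes into pairs."""
    return labour_units / (cost_hot_dog + cost_bun)

print(equilibrium_pairs(30, 2, 1))  # 10.0 hot dogs in buns before automation
print(equilibrium_pairs(30, 1, 1))  # 15.0 once a hot dog takes 1 unit of labour
```

The commenter's objection is that this model has labour as the only input; once machines compete for the non-labour inputs too, the reallocation story stops being automatic.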

Comment by Caspian on Instrumental rationality/self help resources · 2013-07-20T23:07:01.606Z · LW · GW

That part of the wiki page was written in this edit

Comment by Caspian on Gains from trade: Slug versus Galaxy - how much would I give up to control you? · 2013-07-20T22:15:21.392Z · LW · GW

Nonlinear utility functions (as a function of resources) do not accurately model human risk aversion. That could imply that we should either change our (or their) risk aversion or not be maximising expected utility.

Comment by Caspian on Rationality Quotes July 2013 · 2013-07-19T11:32:29.569Z · LW · GW

That's not intended for people who could work but chose not to. They require you to regularly apply for employment. The applications themselves can be stressful and difficult work if you don't like self-promotion.

Comment by Caspian on The Power of Reinforcement · 2013-07-18T13:35:29.676Z · LW · GW

I think I even have work-like play where a game stops being fun. And yes, play-like work is what I want to achieve.

Comment by Caspian on The Power of Reinforcement · 2013-07-18T13:18:12.982Z · LW · GW

Reinforcing effort only in combination with poor performance wasn't the intent. Pick a better criterion that you can reinforce with honest self-praise. You do need to start off with low enough standards so you can reward improvement from your initial level though.

Comment by Caspian on Applying Behavioral Psychology on Myself · 2013-07-18T13:08:11.876Z · LW · GW

I'm interested in what you rewarded for going to bed earlier (or given the 0% success rate, what you planned to reward if it ever happened) and how/when you rewarded it. Maybe rewarding subtasks would have helped.

Comment by Caspian on The Power of Reinforcement · 2013-07-18T13:04:50.146Z · LW · GW

I just read Don't Shoot The Dog, and one of the interesting bits was that it seemed like getting trained the way it described was fun for the animals, like a good game. Also as the skill was learnt the task difficulty level was raised so it wasn't too easy. And the rewards seemed somewhat symbolic - a clicker, and being fed with food that wasn't officially restricted outside the training sessions.

Thinking about applying it to myself, having the reward not be too important outside the game/practice means I'm not likely to want to bypass the game to get the reward directly. Having the system be fun means it's improving my quality of life in that way in addition to any behaviour change.

I haven't done much about ramping up the challenge. How does one make doing the dishes more challenging?

But I did make sure to make the rewards quicker/more frequent by rewarding subtasks.

Comment by Caspian on "Stupid" questions thread · 2013-07-17T15:41:22.591Z · LW · GW

Well, it seems we have a conflict of interests. Do you agree?

Yes. We also have interests in common, but yes.

If you do, do you think that it is fair to resolve it unilaterally in one direction?

Better to resolve it after considering inputs from all parties. Beyond that it depends on specifics of the resolution.

If you do not, what should be the compromise?

To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.

Several of the objections to the introduction suggest guidelines I would agree with: keep the introduction brief until the other person has had a chance to respond. Do not signal unwillingness to drop the conversation. Signaling the opposite may be advisable.

Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.

Yeah. Not that I always want to talk to someone, but sometimes I do.

Does this seem like a fair characterization of the situation?


If so, then certain solutions present themselves, some better than others. We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should "just deal with it". I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)

I think people sometimes conflate "it is okay for me to do this" with "this does no harm" and "this does no harm that I am morally responsible for" and "this only does harm that someone else is morally responsible for, e.g. the victim".

The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it's not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place. Your thoughts?

Working out such a policy could be a useful exercise. Some relevant information would be: when are introductions more or less bad, for those who prefer to avoid them.

Comment by Caspian on "Stupid" questions thread · 2013-07-15T15:55:11.789Z · LW · GW

I think sitting really close beside someone I would be less likely to want to face them - it would feel too intimate.

Comment by Caspian on "Stupid" questions thread · 2013-07-15T15:41:03.651Z · LW · GW

I would always find people in aeroplanes less threatening than in trains. I wouldn't imagine the person in the next seat mugging me, for example, whereas I would imagine it on a train.

What do other people think of strangers on a plane versus on a train?

Comment by Caspian on "Stupid" questions thread · 2013-07-15T15:05:28.226Z · LW · GW

Like RolfAndreassen said: please back the fuck off and leave others alone.

Please stop discouraging people from introducing themselves to me in circumstances where it would be welcome.

Comment by Caspian on "Stupid" questions thread · 2013-07-15T14:48:55.624Z · LW · GW

I now plan to split up long boring tasks into short tasks with a little celebration of completion as the reward after each one. I actually decided to try this after reading Don't Shoot the Dog, which I think I saw recommended on Less Wrong. It's got me a somewhat more productive weekend. If it does stop helping, I suspect it would be from the reward stopping being fun.

Comment by Caspian on Rationality Quotes July 2013 · 2013-07-15T14:22:51.677Z · LW · GW

Getting back to post-scarcity for people who choose not to work, and what resources they would miss out on, a big concern would be not having a home. Clearly this is much more of a concern than drinks on flights. The main reason it is not considered a dire concern is that people's ability to choose not to work is not considered that vital.

Comment by Caspian on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-09T12:26:20.823Z · LW · GW

A second, hidden copy of himself could possibly use the time turner as soon as it was announced the ring was to be transfigured, and make sure Hermione was not in the ring, but I think Harry has better uses than that for as much time turning as he can get.

Comment by Caspian on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-09T12:11:09.186Z · LW · GW

My first thought was that she'd been transfigured into the pajamas, but I don't think that's likely. My theory is that when Harry slept in his bed it was the second time he'd been through that time period. The first time, he stayed invisible with transfigured Hermione in his possession, waited until woken-up Harry had finished being searched, gave her to woken-up Harry, then went back in time and went to bed.

Comment by Caspian on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? · 2013-06-26T14:58:31.965Z · LW · GW

You can get microphones much smaller than 7 cm, and they can detect frequencies way lower than 20 kHz. There's no rule saying you need a large detector to pick up a signal with a large wavelength.

Comment by Caspian on On manipulating others · 2013-06-23T03:29:50.714Z · LW · GW

Women famously say "sometimes I just want to be listened to. Don't try to solve my problems, just show me that you care."

I would interpret that as being specific to problems. There may also be women who would like feigned interest in dopey things they're into, or they may prefer to just discuss them with their girlfriends who are actually interested.

When men do this, women say "yes, that's what I'm talking about" and attempt to reinforce that behavior, perhaps unconsciously.

Explicitly saying this can be taken at face value, I think, but unconsciously reinforcing the behaviour may be meant to reinforce actual interested listening. You can't deduce which is the true preference.

Comment by Caspian on On manipulating others · 2013-06-23T03:04:59.430Z · LW · GW

When I buy stuff from people I don't know I'm mostly treating them as a means to an end. Not completely, because there are ways I'd try to be fair to a human that wouldn't apply to a thing, but to a larger extent than I would want in personal / social relationships.

Another rule of thumb I kind of like is: don't get people into interactions with you that they wouldn't want if they knew what you were doing. I feel like that probably encourages erring too far on the side of caution and altruism. But if you know the other person would prefer you to empathise when not interested rather than be silent, leave or criticise, it's allowed.

ETA: I'm interested in better guidelines, especially from people who get the distaste for manipulation.

Comment by Caspian on [LINK] The Selected Papers Network · 2013-06-21T08:52:29.280Z · LW · GW

Not that I know of, but Advogato's trust metric limits the damage by a rogue endorser of many trolls with a calculation using maximum network flow. It doesn't allow for downvotes.

If you allow downvoting and blocking all of someone's nodes, that could be an incentive for the person to partition their publications into three pseudonyms, so that once the first is blocked, the others are still available.
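A toy illustration of the flow-capping idea: trust to an account is computed as max flow from a seed, so however many troll accounts a rogue endorser certifies, the trust that can reach them all is bounded by the capacity flowing into the rogue. This is a plain Edmonds-Karp sketch on a made-up graph, not Advogato's actual metric, and all node names are hypothetical:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    # Build a residual graph, adding reverse edges of capacity 0.
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS from source to sink through positive-capacity edges.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Trace the path back and push the bottleneck capacity along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical graph: a rogue endorser certifies three trolls, but only one
# unit of trust flows into the rogue in the first place.
trust_graph = {
    "seed": {"alice": 3},
    "alice": {"rogue": 1},
    "rogue": {"troll1": 1, "troll2": 1, "troll3": 1},
    "troll1": {"all_trolls": 1},
    "troll2": {"all_trolls": 1},
    "troll3": {"all_trolls": 1},
}
print(max_flow(trust_graph, "seed", "all_trolls"))  # 1
```

The min cut is the single edge into the rogue, so endorsing more trolls gains nothing — which is the damage-limiting property described above.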

Comment by Caspian on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-21T04:30:15.200Z · LW · GW

That's a good question. Here's a definition of "fair" aimed at UDT-type thought experiments:

The agent has to know what thought experiment they are in as background knowledge, so the universe can only predict their counterfactual actions in situations that are in that thought experiment, and where the agent still has the knowledge of being in the thought experiment.

This disallows my anti-oneboxer setup here: (because the predictor is predicting what decision would be made if the agent knew they were in Newcomb's problem, not what decision would be made if the agent knew they were in the anti-oneboxer experiment) but still allows Newcomb's problem, including the transparent box variation, and Parfit's Hitchhiker.

I don't think much argument is required to show Newcomb's problem is fair by this definition; the argument would be about deciding to use this definition of fair, rather than one that favours CDT, or one that favours EDT.

Comment by Caspian on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T23:48:33.853Z · LW · GW

You penalise based on the counterfactual outcome: if they were in Newcomb's problem, this person would choose one box.

Comment by Caspian on Rationality Quotes June 2013 · 2013-06-08T03:01:17.423Z · LW · GW

The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.

From that perspective, dreams are not especially convincing compared to experience while awake, rather dreamers are especially convincable.

Dennett's point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.

Comment by Caspian on Rationality Quotes June 2013 · 2013-06-08T01:19:00.668Z · LW · GW

I want to use one of those phrases in conversation. Either grfgvat n znq ulcbgurfvf be znxvat znq bofreingvbaf (rot13'd to avoid spoilers).

Also I found the creator's page for the comic

Comment by Caspian on Googling is the first step. Consider adding scholarly searches to your arsenal. · 2013-05-26T01:47:49.789Z · LW · GW

I followed the first link and the abstract there had "After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes."

And then later, "These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat." though I'm not sure if the latter was separate because it was specifically about /processed/ meat.

So long as they keep the claim as modest as 'eating red meat "may" increase your risk of type II diabetes.' it seems reasonable. They could still be wrong of course, but the statement allows for that. I should note here that the study was on women over 45, not a general population of an area.

If there's better evidence that the search is not finding, that is a problem.

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-18T01:23:18.232Z · LW · GW

Isn't humour its own reward? What extra reinforcement system could you use to increase it?

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-18T01:19:42.192Z · LW · GW

Yes, I upvoted it as an interesting idea, but wouldn't endorse actually putting it into practice.

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-18T00:46:44.363Z · LW · GW

I don't think it would substitute for optometrist appointments, just for getting new glasses of the same prescription as you already had. For people who have had LASIK, had your glasses prescriptions been changing up until then? And did your vision continue to change afterwards?

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-18T00:33:25.999Z · LW · GW

As munchkinry, it's pretty good, but I'm not taking this seriously enough to actually try it. It's just a fun idea to me.

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-18T00:10:13.053Z · LW · GW

I am mentally connecting this with the comment about tulpas

No need to modify the host's identity, you can both share their brain.

ETA: and now I'm thinking of the movie Being John Malkovich - the host was treated in an abusive manner, but there was a level of cooperation between the other minds sharing his body.

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-11T02:41:54.912Z · LW · GW

Practice getting off the Internet and going to bed:

Starting while not absorbed in browsing the web, find some not-too-compelling website, browse for a few minutes (not enough to get really into it) and then go and lie in bed for a few minutes (which shouldn't feel as difficult as it's not committing to a full night's sleep). While in bed, let your mind wander away from the internet. This practice can lead into practice for getting out of bed.

I tried this a bit - I'm not sure it was worthwhile, as I did sometimes get absorbed in browsing when trying this exercise.

Comment by Caspian on Post ridiculous munchkin ideas! · 2013-05-11T01:56:20.474Z · LW · GW

When I was having a lot of trouble getting out of bed reasonably promptly in the mornings: practice getting out of bed - but not after just having woken up, that's what I was having trouble with in the first place. No, during the day, having been up for a while, go lie in bed for a couple of minutes with the alarm set, then get up when it goes off. Also, make this a pleasant routine with stretching, smiling and deep breathing.

I found this idea on the net here, which may have more details:

I tried it and it seemed to help a lot for a while, and I feel more in control of my weekend mornings.

Comment by Caspian on The Lifespan Dilemma · 2013-01-07T14:18:32.514Z · LW · GW

I don't have an elegant fix for this, but I came up with a kludgy decision procedure that would not have the issue.

Problem: you don't want to give up a decent chance of something good, for something even better that's really unlikely to happen, no matter how much better that thing is.

Solution: when evaluating the utility of a probabilistic combination of outcomes, instead of taking the average of all of them, remove the top 5% (this is a somewhat arbitrary choice) and find the average utility of the remaining outcomes.

For example, assume utility is proportional to lifespan. If offered a choice between a 1% chance of a million years (otherwise death in 20 years) and certainty of a 50 year lifespan, choose the latter, since the former, once the top 5% is removed, has the utility of a 20 year lifespan. If offered a choice between a 10% chance of a 19,000 year lifespan (otherwise immediate death) and a certainty of a 500 year lifespan, choose the former, since once the top 5% is removed, it is equivalent in utility to (5/95)*19,000 years, or 1,000 years.
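A sketch of this "trim the top 5%" rule (the function and its name are mine, not any standard formulation), with utilities as years of lifespan to match the examples above:

```python
def trimmed_expected_utility(outcomes, trim=0.05):
    """Expected utility after discarding the best `trim` probability mass.

    outcomes: list of (probability, utility) pairs summing to 1.
    """
    mass_to_trim = trim
    total = 0.0
    # Walk outcomes from best to worst, discarding mass until `trim` is gone.
    for prob, utility in sorted(outcomes, key=lambda pu: pu[1], reverse=True):
        if mass_to_trim >= prob:
            mass_to_trim -= prob
            continue
        total += (prob - mass_to_trim) * utility
        mass_to_trim = 0.0
    return total / (1.0 - trim)  # renormalise over the kept 95%

# 1% chance of a million years, otherwise 20: the 1% is trimmed entirely,
# so the gamble is worth only 20 years.
print(round(trimmed_expected_utility([(0.01, 1_000_000), (0.99, 20)]), 6))

# 10% chance of 19,000 years, otherwise death: trimming leaves 5% of the
# good outcome, worth (5/95) * 19,000 = 1,000 years.
print(round(trimmed_expected_utility([(0.10, 19_000), (0.90, 0)]), 6))
```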

New problem: two decisions, each correct by the decision rule, can combine into an incorrect decision by the decision rule.

For example, assume utility is proportional to money, and you start with $100. If you're offered a 10% chance of multiplying your money by 100, otherwise losing it, and then as a separate decision offered the same deal based on your new amount of money, you'd take the offer both times, ending up with a 1% chance of having a million dollars, whereas if directly offered the 1% chance of a million dollars, otherwise losing your $100, you wouldn't take it.

Solution: you choose your entire combination of decisions to obey the rule, not individual decisions. As with updateless decision theory, the decisions for different circumstances are chosen to maximise a function weighted over all possible circumstances (maybe starting when you adopt the decision procedure) not just over the circumstance you find yourself in.

In the example above, you could decide to take the first offer but not the second, or if you had a random number generator, take the first offer and then maybe take the second with a certain probability.

A similar procedure, choosing a cutoff probability for low utility events, can solve Pascal's Wager. Ignoring the worst 5% of events seems too much, it may be better to pick a smaller number, though there's no objective justification for what to pick.

Comment by Caspian on Talking Snakes: A Cautionary Tale · 2013-01-07T00:58:09.334Z · LW · GW

Making up absurd explanations for the talking snake goes against the direction of your post, but I wanted to share this one: a remote control snake the owner can talk through is the sort of thing that could be a children's toy. Santa Claus gave one to Satan, who used it for mischief.

Comment by Caspian on New censorship: against hypothetical violence against identifiable people · 2012-12-24T00:34:50.561Z · LW · GW

Suicide in particular is often illegal.

ETA: possibly this statement of mine was outdated.

Comment by Caspian on That Thing That Happened · 2012-12-24T00:32:16.360Z · LW · GW

BlazeOrangeDeer would be talking about this parody subreddit. Sometimes the parodies are in a similar "meta" style to Konkvistador's post.

Comment by Caspian on Miracle Mineral Supplement · 2012-11-24T00:59:45.577Z · LW · GW

Others have covered your knee jerk poison-is-bad reaction so I'll let that pass, but the thing that stuck out for me as bad epistemic standards from MMS proponents was seeing some "explanation" for why it would give you an upset stomach despite the other claim that it would only harm "bad" bacteria. Something about how it's your body flushing out poisons and it's a good sign. It struck me as an untested rationalisation someone just made up.

Comment by Caspian on How To Have Things Correctly · 2012-10-17T07:12:29.575Z · LW · GW

I'm assuming that if you bought your cloak for the same price as a typical sweater, you would preferably use sweaters rather than the cloak.

Instead, just assume that if she had not found excuses to wear the cloak, she would use sweaters rather than the cloak. This could be chosen by habit rather than considered preference.

Comment by Caspian on Causal Diagrams and Causal Models · 2012-10-15T12:35:52.469Z · LW · GW

I had meant to suggest some sort of unintelligent feedback system. Not coincidence, but also not an intelligent optimisation, so still not an exact parallel to his thermostat.