Open thread, June 20 - June 26, 2016

post by Elo · 2016-06-21T02:45:44.402Z · LW · GW · Legacy · 89 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

89 comments

Comments sorted by top scores.

comment by SquirrelInHell · 2016-06-21T08:38:22.891Z · LW(p) · GW(p)

I once had a system in which I was writing checkboxes on paper for tasks I wanted to do regularly.

Stuff like eating vitamins, or doing backups of my server.

It started with the typical daily/weekly/monthly todos, but it gradually evolved into something much less rigid, calculated in an (increasingly complex) spreadsheet.

For a long time, I've been working out the balance between this system being forgiving...

(as in, allowing for soft recovery, rather than being hit by "do 12 hours of jogging" after a week of vacation)

and also giving you accountability over a longer period

(as in, avoiding the "I'll skip it this week, and instead definitely do it next week" effect).

I've also recently had the idea to publish some Android apps, and one of the first ideas was to code a cleaner, leaner and meaner version of my old spreadsheet.

As far as productivity apps go, this is very basic stuff, but I haven't actually found anything out there that could replace my system.

So lo and behold.

It's still kinda maybe not feature complete, but I already use it myself (and I've finally retired the spreadsheet :D):

If you like this sorta stuff, give it a try and let me know what you'd like to see improved.

Replies from: Elo, niceguyanon, ChristianKl
comment by Elo · 2016-06-22T01:51:56.271Z · LW(p) · GW(p)

Saying all this without actually seeing the app

I have been trying out systems for a while now. So has Regex and various others.

The introspective thing that I have noticed, and that you mentioned here without clearly identifying it, is the iterative development of systems. Which is to say: you started on paper, moved to a spreadsheet, and afterwards moved to an app (as well as probably several versions of each).

What makes the final version work in the face of the potential complexity of starting a new system (and taking a leap) is partly the fact that you lived through the various versions, and know why/how/what-for different factors have changed to improve the system (such is the pure nature of iterative system development).

HOWEVER, by publishing only your final version, you publish only the (probably very good) system that you are used to, and not all the intermediate steps that made it possible and necessary to get here. While I imagine that every latest system so far developed by many various people (Productivity Ninja, GTD, FVP, to name a few) will have good features and functionality that are neat in themselves, without the iterative stages you don't really give people the same final system that you have come to be accustomed to.

What I am saying is: I'd like to see the whole process of how you got here in the hopes of making sense of your successes/failures of systems to do what you want them to do, and, following that, be better able to apply it to my own systems.

On top of that: a dream app would be one that starts as a simple list (like you did), and gradually offers you the option to add complexity to your system (like you ended up making), but in such a way as to let people progress to the final version when they need/want it.

I will look at the app and get back to you.

Replies from: SquirrelInHell, Regex
comment by SquirrelInHell · 2016-06-22T02:56:53.977Z · LW(p) · GW(p)

a dream app would be one that starts as a simple list (like you did), and gradually offers you the option to add complexity to your system

I like your analysis of this issue, though I think in this particular case the app actually remains very simple.

If you only use the "do it every N days" type of tracking, you get pretty much just a list like the ones I used to have on paper.

One thing I'm definitely seeing more clearly after reading your comment is that if I ever want to add more complexity to this app, I'll instead make a new app that will be the "next step in evolution".

(this doesn't apply to UI improvements of course, which the app still needs a lot of)

I'd like to see the whole process of how you got here in the hopes of making sense of your successes/failures of systems

Haha, this calls for a long evening in front of a fireplace :)

comment by Regex · 2016-06-27T15:14:34.863Z · LW(p) · GW(p)

I think you're coming on a little strong, in ways you don't intend, in requesting his process and previous system iterations. This reads as if one should never share any system without also sharing the process of how to get there, and most of the time that process is filled with stuff no one really needs to see.

Replies from: Elo
comment by Elo · 2016-06-27T22:44:59.239Z · LW(p) · GW(p)

Yes, okay. What I mean to say is that there is a whole lot of value in the rest of the system-generation process that is missing here. Value that might help one understand better how/why it works the way it does, and consequently how to make it work for one's self.

comment by niceguyanon · 2016-06-21T17:08:49.288Z · LW(p) · GW(p)

I think your app is great! I am also the kind of person to get really excited about new productivity apps that have that one cool trick that makes it different from other apps, so I might not be a good gauge on how well your app would be received, but yea, I love it.

The only other self-tracking app I have used is Beeminder. My only gripe about Beeminder is that everything is linear: if you do 10 units more you are 10 ahead; if you miss 10 units you are now 10 units behind. I have always wanted some sort of discounting for being ahead, and some sort of sped-up recovery for being behind, and I think your app does this well.
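A minimal sketch of what such non-linear accounting could look like; this is a guess at the general idea, not the actual algorithm used by this app or by Beeminder, and the parameter values are made up:

    # Hypothetical non-linear task accounting: surplus is discounted when
    # you are ahead, and recovery is sped up when you are behind.
    def update_balance(balance, units_done, units_due,
                       surplus_discount=0.5, recovery_boost=1.5):
        """Return the new balance; positive = ahead, negative = behind."""
        delta = units_done - units_due
        if delta > 0 and balance >= 0:
            delta *= surplus_discount   # extra work "banks" at half value
        elif delta > 0 and balance < 0:
            delta *= recovery_boost     # catching up counts for more
        return balance + delta

    balance = 0.0
    for done in [10, 10, 0, 14]:        # units done each week, 10 due per week
        balance = update_balance(balance, done, units_due=10)
        print(balance)                  # 0.0, 0.0, -10.0, -4.0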

Replies from: SquirrelInHell
comment by SquirrelInHell · 2016-06-28T02:18:55.080Z · LW(p) · GW(p)

I think your app does this well.

How is it after a week? Do you still use it?

Replies from: niceguyanon, niceguyanon
comment by niceguyanon · 2016-07-26T14:50:50.281Z · LW(p) · GW(p)

Update:

Was I able to use the app successfully to increase my tasks by 50%? No. But I won't blame it on the app.

I found that manually clicking next day was something I did not like. The temptation to delay clicking it and catch up the next day is strong. If it were automatic I would have to live with the consequences of getting a bad score. Furthermore, if you accidentally click next day before updating other tasks, then too bad: you can't reverse it. So for testing I made a few tasks and advanced them several days, but unless I reinstall the app, the date cannot roll back for when I want to stop testing and use it for real.

There is no way to easily see your progress for the last few days. It would be nice to click on a task and see how you did recently, or, if I missed a few days, to see when the last time I did the task was. Sure, there is an export button, but the data is hard to read if you just want to know quickly how you did recently.

Replies from: SquirrelInHell
comment by SquirrelInHell · 2016-07-28T08:25:18.570Z · LW(p) · GW(p)

Thanks a lot; I'll take this into account, and think about how to improve it in future versions.

Though with the "next day" button, it would be a hard tradeoff - you might not have had this experience, but sometimes you travel and your timezone settings get messed up, or your phone's clock is reset etc. It's possible to design something that would avoid these problems, but it's a pretty big change in the internals of the app.

The temptation to delay clicking it and catch up the next day is strong.

This is surprising to me - the algorithm in the app makes it strictly easier to catch up when you click the button first and then do the tasks, rather than the other way around. Is that not enough incentive to make you want to click the button, rather than "cheat"?

Replies from: tut
comment by tut · 2016-07-28T10:51:00.097Z · LW(p) · GW(p)

I think it is about the don't break the streak thing. Suppose that you decide to run every day, and you do it in the morning every day from Sunday to Thursday, then sleep in and don't have time for it on Friday. Now on Saturday you can either advance the day before your run and have a one day streak, or you can run twice, once before and once after advancing the day and have a seven day streak.

Replies from: niceguyanon
comment by niceguyanon · 2016-08-08T15:46:03.169Z · LW(p) · GW(p)

This perfectly expresses my thoughts

comment by niceguyanon · 2016-07-13T15:52:11.750Z · LW(p) · GW(p)

I have not used it since testing it out. No change to how I feel about the app; I just haven't used any self-tracking apps recently. I use Trello as a general to-do app, which lacks recurring-task tracking. I will move my meditation and gym tasks to Hastewurm and report back in 2 weeks. Both are things I wish I did about 50% more often.

My commitment to report back will probably increase the likelihood of me sticking to this goal, but I will nonetheless try to be mindful of that bias, and provide some feedback on efficacy and/or improvements.

comment by ChristianKl · 2016-06-21T09:42:38.057Z · LW(p) · GW(p)

I wouldn't use an app like that without the app being able to export data.

Replies from: SquirrelInHell, WalterL
comment by SquirrelInHell · 2016-06-21T10:22:05.615Z · LW(p) · GW(p)

I wouldn't use an app like that without the app being able to export data.

But it does in fact have this option.

(Admittedly, the format of the data is not documented, but it's just plaintext K=V.)
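For what it's worth, a format like that is trivial to parse; a sketch (the parser is a guess at the undocumented format, and the key names are invented):

    # Guessed parser for an undocumented plaintext K=V export.
    def parse_export(text):
        data = {}
        for line in text.splitlines():
            line = line.strip()
            if line and "=" in line:
                key, _, value = line.partition("=")
                data[key.strip()] = value.strip()
        return data

    # Hypothetical keys, for illustration only:
    print(parse_export("task.vitamins.period=1\ntask.backup.period=7"))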

comment by WalterL · 2016-06-21T12:43:42.105Z · LW(p) · GW(p)

Export data... why? Like, what other device are you going to load this data on? You've got your task tracker on your phone... and its records go where else?

I mean, more features = more good, but I'm just curious about the use case here.

Replies from: ChristianKl
comment by ChristianKl · 2016-06-21T13:39:10.853Z · LW(p) · GW(p)

I carry my phone around a lot. I might lose my phone or it might get stolen.

I also don't want to be locked into a single application, especially when testing new software. I want to keep my data and not be bound to a single service.

For introspection/QS purposes it's also good to have the data in a form where I can analyse it further. For example, I log all calls and all pomodoros I do to a Google calendar. Otherwise most of my data goes to Evernote.

comment by Sable · 2016-06-23T00:35:23.697Z · LW(p) · GW(p)

Out of curiosity: because rationalists are supposed to win, are we (on average) below our respective national averages for things which are obviously bad (the low-hanging fruit)?

In other words, are there statistics somewhere on rationalist or LessWrong fitness/weight, smoking/drinking, credit card debt, etc.?

I'd be curious to know how well the higher-level training affects these common failure modes.

Replies from: Evan_Gaensbauer, ChristianKl
comment by Evan_Gaensbauer · 2016-06-24T10:12:59.578Z · LW(p) · GW(p)

I've wondered this too. In particular, for several years, at least among people I know, people have constantly questioned the level of rationality in our community, particularly our 'instrumental rationality'. This is summed up by the question: "if you're so smart, why aren't you rich?" That is, if rationalists are so rational, why aren't they leveraging their high IQs and their supposed rationality skills to perform in the top percentiles on all sorts of metrics of coveted success? Even by self-reports, such as the LW survey(s). However, I've thought of an inverse question: "if you're stupid, why aren't you poor?" I.e., while rationalists might not all be peak-happiness millionaires or whatever, we might also ask what the rates of (socially perceived) failure are, and how they compare to other cohorts, communities, reference classes, etc.

You're the first person I've seen to pose this question. There might have been others, though.

Replies from: Vaniver, Crux, Ishaan, Viliam
comment by Vaniver · 2016-06-24T18:57:10.179Z · LW(p) · GW(p)

This is summed up by the question: "if you're so smart, why aren't you rich?"

For many LWers, the answer is "I'm young," but I think there are also a lot of people where the answer is "I am rich."

Replies from: JEB_4_PREZ_2016
comment by JEB_4_PREZ_2016 · 2016-06-25T15:37:18.035Z · LW(p) · GW(p)

Also worth noting: LWers should be extracting more utility from their money than non-LWers.

comment by Crux · 2016-06-25T11:27:51.035Z · LW(p) · GW(p)

The rationalist community has a lot of independent thinkers, and independent thinkers are more likely than the general population to find the game of amassing wealth to be an obstruction to their freedom of thought and an inefficient path to happiness and life satisfaction.

Also many rationalists are quite young, as Vaniver pointed out.

Replies from: Viliam
comment by Viliam · 2016-06-27T09:10:28.542Z · LW(p) · GW(p)

independent thinkers are more likely than the general population to find the game of amassing wealth to be an obstruction to their freedom of thought and an inefficient path to happiness and life satisfaction

Heh. Maybe I am not a sufficiently independent thinker, but for me the greatest obstruction to freedom of thought and happiness and life satisfaction is having a day job, especially one that resembles Dilbert comics.

My problem with the "game of amassing wealth" is that (1) I am not very good at it, and (2) even when you are smart enough to double your wealth in a few years, if you start with a small amount, all you get is double a small amount, and there is a limited number of years in your lifetime. I mean, compared to my wealth 10 or 20 years ago, I am significantly richer, but if I kept the same speed, I would probably be able to retire at 60, which feels a bit late.

comment by Ishaan · 2016-06-30T23:09:51.068Z · LW(p) · GW(p)

We don't want "are you rich, do you smoke", because of the selection effect (we are rich because we were born upper middle class, and we're not powerful because powerful people have better things to do than explore the internet until they land on odd forums).

Otherwise the value of an idea is judged by the types of people who happen to stumble upon it.

What we want is "After being exposed to the ideas, did you get richer?", "did you quit smoking?", etc. Before and after.

why aren't they leveraging their high IQs

IQ is just another selection-effect confound to control for. Priors say there is absolutely no way rationality training will alter your IQ (and besides, the IQ data is mostly from standardized test scores taken in high school anyhow). If high-IQ people end up here, that just means high-IQ people crawl the internet more and stick around more.

Replies from: SquirrelInHell
comment by SquirrelInHell · 2016-07-06T05:21:34.296Z · LW(p) · GW(p)

What we want is "After being exposed to the ideas, did you get richer?", "did you quit smoking?", etc. Before and after.

Thanks for being sane.

comment by Viliam · 2016-06-27T13:31:46.571Z · LW(p) · GW(p)

Some random thoughts about the questions:

"if you're so smart, why aren't you rich?"

Not everyone who participated in the survey is a regular LW reader; it was open to the whole diaspora.

Not everyone who reads LW regularly is also working on their own rationality. Some people are here for the insight porn; some people simply enjoy being in the community of other smart people.

Not everyone who tries to become more rational is doing it correctly. For example, some people may go for the applause lights, or still compartmentalize in something important.

Now, assuming that you are trying to do the rational thing (but of course you are not perfect at it)... Also, assuming you have high intelligence (LW already selects for it), and you are mostly healthy (just a base-rate assumption)...

There are essentially two ways to become more rich: get a high income, or multiply your existing wealth. The second option is not available to those who don't have any significant "existing wealth". For those who do, I guess investing in passively managed index funds is the standard LW advice.

Assuming that you can comfortably live on $2,000 a month (feel free to adjust the numbers if they feel wrong), and you believe that index funds will return at least 4% yearly in the long term... all you need is to get $600,000, once, and then you can play the rest of your life in "easy mode". On the other hand, if you start from zero and you are able to save less than $1,000 a month, the bad news is that you are never going to get there. And if you want to get there in, say, 20 years, you better save about $3,000 a month.

So I guess the answer is that even for smart people, saving $3,000 a month is a difficult task, and 20 years is a long time (LW has not even existed for 20 years yet). In other words, yes, it's true that most LW rationalists are not smart enough to make half a million dollars overnight. But after unpacking "so smart" and "rich", that shouldn't surprise many people. Anasûrimbor Kellhus would probably be able to do it in a few weeks or months, but he is a fictional character.

By the way, from the outside, "a person with lifestyle X" and "a person with lifestyle X, and with a few thousand dollars saved in index funds" will look the same, even though the latter is richer. Converting the former to the latter would mean increasing rationality and wealth, and would still be invisible to outsiders. The difference is that the latter would see some light at the end of the tunnel, but that light could still be a few decades away.

"if you're stupid, why aren't you poor?"

Society seems to have limits on how poor people can get: bankruptcy, unemployment benefits, not being able to borrow insane amounts of money, etc. (Also, there is the natural limit that people who don't have enough resources to survive, die.) Therefore, unlike intelligence, which makes a bell curve, wealth makes an asymmetric curve.

Looking from the "99% vs 1%" perspective, we could say that most people actually are relatively poor. The stupid people aren't much poorer than average, simply because the average person already has very little wealth. Being stupid just means you will waste a little more of your money, until you are out of money, and then you can't waste any more.

You can't become a "negative Bill Gates"; at worst you can become homeless (and then die). Actually, if you are just smart enough to pay your mortgage first, and only do the stupid things with the remaining money, you will probably even avoid homelessness. The average and below-average people have a script to follow, which will more or less keep them in a fixed position, as long as they can hold a job.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2016-06-27T15:07:21.170Z · LW(p) · GW(p)

Not everyone who participated in the survey is a regular LW reader; it was open to the whole diaspora.

The LW surveys contain questions about whether people are regular LW readers and allow us to see how people who are regular readers differ.

comment by Lumifer · 2016-06-27T14:48:58.967Z · LW(p) · GW(p)

And if you want to get there in, say, 20 years, you better save about $3,000 a month.

Your math is a bit off -- you're forgetting that your savings also grow at 4%/year while you're accumulating them. So if you save $2,000 / month and can get stable 4% return (after taxes), in 20 years you will have $612K.

The whole calculation, though, is based on guaranteed returns and if your returns are actually volatile (say, the mean is 4% with noticeable standard deviation), the situation changes.
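The compounding arithmetic, as a minimal sketch; the exact figure depends on compounding frequency and contribution timing, which is why quoted numbers for the same inputs vary:

    # Future value of a stream of monthly contributions at a fixed return.
    def future_value(monthly, annual_rate, years):
        r = annual_rate / 12                # assumes monthly compounding
        n = years * 12
        return monthly * ((1 + r) ** n - 1) / r

    print(round(future_value(2000, 0.04, 20)))  # ~733,000 under these assumptions
    print(2000 * 12 * 20)                       # 480,000 with no returns at all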

Replies from: gjm
comment by gjm · 2016-06-27T15:40:39.517Z · LW(p) · GW(p)

And of course all these calculations are ignoring inflation.

If inflation is, say, 2%, then

  • to get out $2k/month with 4% nominal returns you need $1.2M rather than $600k; or
  • to get out $2k/month with $600k, you need 4% real returns or about 6% nominal. And
  • the equivalent of $2k/month now is about $3k/month in 20 years. On the other hand,
  • your savings can reasonably be expected to increase in line with inflation too.

Replies from: Lumifer
comment by Lumifer · 2016-06-27T16:00:37.725Z · LW(p) · GW(p)

Yep, so far we've been talking about nominal sums without considering their real purchasing power.

The proper question, namely what sum of money one can live off as a rentier to maintain a certain standard of living, and how much needs to be saved for how long, is... complicated.

Replies from: gjm
comment by gjm · 2016-06-27T17:01:10.876Z · LW(p) · GW(p)

Yup. The most sophisticated approach I've seen, which is clearly not actually sophisticated enough, is to guess at possible trajectories of future investment growth by some process along the lines of random sampling of past stock market returns, and then choose a sum that leads to you not running out of money in, say, at least 99% of those trajectories.
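A toy version of that procedure, with an invented return history standing in for real market data:

    # Resample past yearly returns to simulate many retirement trajectories,
    # then measure the fraction that never run out of money.
    import random

    past_returns = [0.12, -0.07, 0.21, 0.04, -0.15, 0.09, 0.18, -0.02]  # made up

    def survives(savings, spend_per_year, years):
        for _ in range(years):
            savings = savings * (1 + random.choice(past_returns)) - spend_per_year
            if savings <= 0:
                return False
        return True

    trials = 10_000
    ok = sum(survives(600_000, 24_000, 40) for _ in range(trials))
    print(ok / trials)  # fraction of trajectories that stay solvent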

Replies from: Lumifer
comment by Lumifer · 2016-06-27T17:23:05.336Z · LW(p) · GW(p)

It's a better start than simple compounding interest calculations :-)

To approach this from another side, one can buy an annuity (which provides a stream of income for the rest of your life). You need to save as much as is needed to buy such an annuity and then you're good (mostly). However, I understand that these annuities are not... attractively priced, especially if you want one which adjusts your income stream for inflation.

Replies from: gjm
comment by gjm · 2016-06-27T19:31:49.378Z · LW(p) · GW(p)

That is also my understanding, and I doubt the annuity market has the properties required to make its prices reflect any sort of reality.

comment by ChristianKl · 2016-06-23T11:02:14.563Z · LW(p) · GW(p)

Have you looked into the census numbers?

Replies from: Sable
comment by Sable · 2016-06-23T12:31:36.286Z · LW(p) · GW(p)

I've skimmed them, but I don't remember seeing these kinds of statistics. I'll take another look though. Thanks.

comment by halcyon · 2016-06-24T21:42:52.245Z · LW(p) · GW(p)

I don't want to live forever myself, but I want people who want to live forever to live forever. Does that make me a transhumanist?

Replies from: Elo, Ishaan, Dagon
comment by Ishaan · 2016-06-30T23:12:47.942Z · LW(p) · GW(p)

yes.

comment by Dagon · 2016-06-25T13:55:50.393Z · LW(p) · GW(p)

Probably, if you're spending time thinking about the possibilities and consequences.

I challenge your statement though, and suspect you've got a near-far conflict in your wants. Unless you state the conditions under which you'll want to die, and think those conditions are inevitable or desirable, you want to live forever.

I predict you'll always want to live for at least another few years, and only induction failure is making you say you don't want to live forever.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-06-25T14:42:53.344Z · LW(p) · GW(p)

"Unless you state the conditions..." That is not true. You can want to live a finite life without wanting to die at any particular time.

If you were offered the deal, "Choose the number x and you will receive that much utility, but if you do not choose, you will not receive any," then you will want to choose some finite number, despite the fact that you would prefer a greater number to any particular number. Those desires are consistent, not inconsistent. The problematic issue is in the territory, not in your map of it.

Replies from: Dagon
comment by Dagon · 2016-06-25T21:39:48.831Z · LW(p) · GW(p)

Ok, maybe you don't have to state the conditions, but you have to predict that there will be an actual time that you want to die.

I don't follow your utility comparison. I don't think of utility as a number in this way, but even if so, that's not the deal being offered.

In order to not want immortality, you have to want to die. I think this is pretty straightforward. The deal being offered is "you expect some utility amount every moment you experience. Some of these may be negative. You have some influence, but not actual control, over future experiences." If you predict that the sum of future experiences is negative, you would be better off dying now. If you predict positive, you should continue living.

Unless you can predict a point at which you want to die, you should predict that you'll want to live.

comment by OrphanWilde · 2016-06-22T15:12:11.652Z · LW(p) · GW(p)

A thought occurred to me on a divide in ethical views that goes frequently unremarked, so I thought I'd ask about it: How many of you think ethics/morality is strictly Negative (prohibits action, but never requires action), a combination of Both (can both prohibit or require action), or something else entirely?

ETA: First poll I've used here, and I was hoping to view it, then edit the behavior. Please don't mind the "Option" issue in the format.

[pollid:1159]

Replies from: Dagon
comment by Dagon · 2016-06-23T16:24:54.859Z · LW(p) · GW(p)

I answered a slightly different question. I don't think all ethics or moral systems do either or both of these things. My preferred ruleset (consequentialist personal regret-minimization) both prohibits and requires action, and in fact doesn't distinguish between the two.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-06-23T17:27:36.924Z · LW(p) · GW(p)

I'd classify it loosely as Both; nothing requires an ethical system to distinguish between the two cases, but I think it's a substantial divide in the way people tend to think about ethics.

I'm starting to think "ethics" is an incoherent concept. I'm a strict-negative ethicist - yet I do have an internal concept of a preference hierarchy, in terms of what I want the world to look like, which probably looks a lot like what most people would think of as part of their ethics system. It's just... not part of my ethics. Yes, I'd prefer it if poor people in other countries didn't starve to death, but this isn't an ethical problem, and trying to include it in your ethics looks... confused, to me. How can your ethical status be determined by things outside your control? How can we say a selfish person living in utopia is a better person, ethically, than a selfish person living in a dystopia?

Which isn't to say I'm right. More than half the users apparently include positive ethics in their ethical systems.

Replies from: Dagon
comment by Dagon · 2016-06-24T14:50:22.337Z · LW(p) · GW(p)

People in other countries (note: I'm anti-nationalist, and prefer to just say "people", or if I need to distinguish, "people distant from me") starving is not under my control, but I can have a slight influence that makes it a small amount better for a lot of them. To me, this absolutely puts it in bounds for ethical consideration.

Put in decision-making terms as opposed to ethical framing, "my utility function includes terms for the lives of distant strangers". For me, ethics is about analyzing and debating (with myself, mostly) the coefficients of those terms.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-06-24T15:56:40.084Z · LW(p) · GW(p)

Okay. Imagine two versions of you: In one, you were born into a society in which, owing to nuclear war, the country you live in is the only one remaining. It is just as wealthy as our own current society owing to the point this hypothesis is leading to.

The other version of you exists in a society much more like the one we live in, where poor people are starving to death.

I'll observe that, strictly in terms of ethical obligations, the person in the scenario in which the poor people didn't exist is ethically superior, because fewer ethical obligations are being unmet. In spite of their actions being exactly the same.

Outside the hypothetical: I agree wholeheartedly the world in which poor people don't starve is better than the one in which they do. That's the world I'd prefer exist. I simply fail to see it as an ethical issue, as I regard ethics as being the governance of one's own behavior rather than the governance of the world.

Replies from: Dagon
comment by Dagon · 2016-06-24T22:49:19.902Z · LW(p) · GW(p)

Hmm. You're getting close to Repugnant Conclusion territory here, which I tend to resolve by rejecting the redistribution argument rather than the addition argument.

In my view, in terms of world-preference, the smaller world with no poverty is inferior, as there are fewer net-positive lives. If you're claiming that near-starving impoverished people are leading lives of negative value, I understand but do not agree with your position.

Replies from: gjm, OrphanWilde
comment by gjm · 2016-06-28T16:01:05.707Z · LW(p) · GW(p)

What's your reason for not agreeing with that position?

I ask because my own experience is that I feel strongly inclined to disagree with it, but when I look closer I think that's because of a couple of confusions.

Confusion #1. Here are two questions we can ask about a life. (1) "Would it be an improvement to end this life now?" (2) "Would it be an improvement if this life had simply never been?". The question relevant to the Repugnant Conclusion is #2 (almost -- see below), but there's a tendency to conflate it with #1. (Imagine tactlessly telling someone that the answer to #2 in their case is yes. I think they would likely respond indignantly with something like "So you'd prefer me dead, would you?" -- question #1.) And, because people value their own lives a lot and people's preferences matter, a life has to be much much worse to make the answer to #1 positive than to make the answer to #2 positive. So when we try to imagine lives that are just barely worth having (best not to say "worth living" because again this wrongly suggests #1) we tend to think about ones that are borderline for #1. I think most human lives are well above the threshold for saying no to #1, but quite a lot might be below the threshold for #2.

Confusion #2. People's lives matter not only to themselves but to other people around them. Imagine (ridiculously oversimple toy model alert) a community of people, all with lives to which the answer to question 2 above is (all things considered) yes and who care a lot about the people around them; let's have a scale on which the borderline for question 2 is at zero, and suppose that someone with N friends scores -1/(N^2+1). Suppose everyone has 10 friends; then the incremental effect of removing someone with N friends is to improve the score by about 0.01 for their life and reduce it by 10(1/82-1/101) or about 0.023. In other words, this world would be worse off without any individual in the community -- if what you imagine when assessing that is that that individual is gone and no one else takes their place in others' social relationships. But everyone in the community has a life that, all told, is negative, the world would be better off if none of them had ever lived, and it would be better off if any individual one had never lived and their place in others' lives had been taken by someone else*.

(By the way -- do you feel that sense of outrage as if I'm proposing dropping bombs on this hypothetical community? That's the difference between question 1 and question 2, again. For the avoidance of doubt, I feel it too.)

This second effect, like the first one, tends to make us overestimate how bad a life has to be before the world would have been better off without it, because even if we're careful not to confuse question 1 with question 2 we're still liable to think of a "borderline" life as one for which the world would be neither better nor worse off if it were simply deleted, which accounts for social relationships in the wrong way.
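(For anyone who wants to check the toy-model arithmetic above, it comes out as claimed; a quick sketch:)

    # The toy model: everyone has 10 friends, life-score = -1/(N^2+1).
    def life_score(n_friends):
        return -1 / (n_friends ** 2 + 1)

    own_gain = -life_score(10)                            # about +0.0099
    friends_loss = 10 * (life_score(9) - life_score(10))  # about -0.0229
    print(own_gain + friends_loss)                        # about -0.013: net worse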

comment by OrphanWilde · 2016-06-28T13:40:08.253Z · LW(p) · GW(p)

There are two problems.

In the first scenario, in which ethics is an obligation (i.e., your ethical standing decreases for not fulfilling ethical obligations), you're ethically a worse person in a world with poverty, because there are ethical obligations you cannot meet. The idea of ethical standing being independent of your personal activities is, to me, contrary to the nature of ethics.

In the second scenario, in which ethics are additive (you're not a worse person for not doing good, but instead, the good you do adds to some sort of ethical "score"), your ethical standing is limited by how horrible the world you are in is - that is, the most ethical people can only exist in worlds in which suffering is sufficiently frequent that they can constantly act to avert it. The idea of ethical standing being dependent upon other people's suffering is also, to me, contrary to the nature of ethics.

It's not a matter of which world you'd prefer to live in, it's a matter of how the world you live in changes your ethical standing.

ETA: Although the "additive" model of ethics, come to think of it, solves the theodicy problem. Why is there evil? Because otherwise people couldn't be good.

Replies from: Dagon
comment by Dagon · 2016-06-28T20:33:00.843Z · LW(p) · GW(p)

I suspect I'm more confused than even this implies. I don't think there's any numerical ethical standing measurement, and I think that cross-universe comparisons are incoherent. Ethics is solely and simply about decisions - which future state, conditional on current choice, is preferable.

I'm not trying to compare a current world with poverty against a counterfactual current world without - that's completely irrelevant and unhelpful. In a world with experienced pain (including some forms of poverty), an agent is ethically superior if it makes decisions that alleviate such pain, and ethically inferior if it fails to do so.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-06-28T20:47:57.031Z · LW(p) · GW(p)

Ethics is solely and simply about decisions - which future state, conditional on current choice, is preferable.

From my perspective, we have a word for that, and it isn't ethics. It's preference. Ethics are the rules governing how preference conflicts are mediated.

I'm not trying to compare a current world with poverty against a counterfactual current world without - that's completely irrelevant and unhelpful.

Then imagine somebody living an upper-class life who is unaware of suffering. Are they ethically inferior because they haven't made decisions to alleviate pain they don't know about? Does informing them of the pain change their ethical status - does it make them ethically worse-off?

Replies from: Dagon
comment by Dagon · 2016-06-28T23:29:21.578Z · LW(p) · GW(p)

Ethics are the rules governing how preference conflicts are mediated.

Absolutely agreed. But it's about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.

upper-class life who is unaware of suffering.

If they're unaware because there's no reasonable way for them to be aware, it's hard for me to hold them to blame for not acting on that. Ought implies can. If they're unaware because they've made choices to avoid the truth, then they're ethically inferior to the version of themselves which does learn and act.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-06-29T12:50:42.161Z · LW(p) · GW(p)

Absolutely agreed. But it's about conflicts among preferred outcomes of a decision, not about preferences among disconnected world-states.

Less about two outcomes your preferences conflict on, and more about, say, your preferences and mine.

Insofar as your internal preferences conflict, I'm not certain ethics are the correct approach to resolve the issue.

If they're unaware because there's no reasonable way for them to be aware, it's hard for me to hold them to blame for not acting on that. Ought implies can. If they're unaware because they've made choices to avoid the truth, then they're ethically inferior to the version of themselves which does learn and act.

This leads to a curious metaethics problem; I can construct a society of more ethically perfect people just by constructing it so that other people's suffering is an unknown unknown. Granted, that probably makes me something of an ethical monster, but given that I'm making ethically superior people, is it worth the ethical cost to me?

Once you start treating ethics like utility - that is, a comparable, in some sense ordinal, value - you produce meta-ethical issues identical to the ethical issues with utilitarianism.

Replies from: Dagon
comment by Dagon · 2016-06-29T22:05:11.021Z · LW(p) · GW(p)

more ethically perfect people

You're still treating ethical values as external summable properties. You just can't compare the ethical value of people in radically different situations. You can compare the ethical value of two possible decisions of a single situation.

If there's no suffering, that doesn't make people more or less ethical than if there is suffering - that comparison is meaningless. If an entity chooses to avoid knowledge of suffering, that choice is morally objectionable compared to the same entity seeking knowledge of such.

You can get away with this to some extent by generalizing, and treating agents in somewhat similar situations as somewhat comparable - to the degree that you think A and B are facing the same decision points, you can judge the choices they make as comparable. But this is always less than 100%.

In fact, I think the same about utility - it's bizarre and incoherent to treat it as comparable or additive. It's ordinal only within a decision, and has no ordering across entities. This is my primary reason for being consequentialist but not utilitarian - those guys are crazy.

comment by ChristianKl · 2016-06-26T10:44:03.794Z · LW(p) · GW(p)

After reading a Facebook post by Kaj Sotala about MessagEase, I switched to that keyboard because it's a much better one than the default Android keyboard I was using.

It allows faster typing. It allows typing beautiful unicode that's hard to type even on a PC. It has macros that let me save commonly typed strings such as Facebook birthday greetings and my email address. It has easy gestures for going to the top or the bottom of a document. You have a copy-paste history.

I still use the default App launcher. Does somebody have a case why I should use a specific different launcher?

comment by knb · 2016-06-26T03:22:03.108Z · LW(p) · GW(p)

What do you think are good ideas for moonshot projects that have not yet been adequately researched or funded?

Replies from: ChristianKl, someonewrongonthenet, ChristianKl, ChristianKl
comment by ChristianKl · 2016-06-26T10:56:02.225Z · LW(p) · GW(p)

Software for automatically playing hypnosis audio via headphones to people who are being operated on, in addition to standard anesthesia.

comment by someonewrongonthenet · 2016-06-30T23:17:57.966Z · LW(p) · GW(p)

Leaving aside the bloody obvious things (universal basic income or other forms of care, global internet access, etc.)

Prediction markets. They were tried but died due to gambling laws. Someone should give them a second try.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-02T12:13:17.684Z · LW(p) · GW(p)

Leaving aside the bloody obvious things (universal basic income or other forms of care, global internet access, etc.)

How's global internet access not funded? Google and Facebook both have programs for it. On another front, SpaceX plans to launch the next Iridium satellites.

Prediction markets. They were tried but died due to gambling laws. Someone should give them a second try.

How are they dead? There's PredictIt. There's also Augur (currently in beta).

comment by ChristianKl · 2016-06-26T10:49:37.117Z · LW(p) · GW(p)

A friend feed that's like Facebook's friend feed, but audio instead of text or images, to give people a clear alternative to hearing talk radio.

comment by ChristianKl · 2016-06-26T10:39:16.091Z · LW(p) · GW(p)

A pay-for-performance online marketplace for medical services.

comment by [deleted] · 2016-06-22T19:07:57.270Z · LW(p) · GW(p)

Sometimes, things happen that feel subjectively significant in a way, things that seem to throw earlier estimates out of the window and lead to recalculations - at least it feels like that - like an event happened that requires an answer. But it doesn't really condense in words, at least in my case, it seems like a sheet of sure belief in different things than I have actually learned of, in some unspecified ramifications.

How would one uphold rationality in the face of such a, well, learning experience?

Replies from: someonewrongonthenet, ChristianKl
comment by someonewrongonthenet · 2016-06-30T23:21:30.985Z · LW(p) · GW(p)

Wait a few months to a year. It usually goes away.

Replies from: None
comment by [deleted] · 2016-07-01T07:05:40.562Z · LW(p) · GW(p)

Okay, I was wrong to be so vague somewhere where I'm anonymous enough.

My father-in-law is a retired general practitioner (approximately), but people keep coming to him for help now and again. Recently he was asked to resuscitate a child, but his efforts were too late. The parents drove to our house in the evening, when we were putting our kid to bed, and he (the kid) became quite excited at having unfamiliar people bursting in and asking for help.

I told him he had to behave and not interrupt his grandfather's work, and we went to read a book. My mother-in-law was very upset, and recounted details of the work going on in the yard, and I remember thinking that she needed to compartmentalize more. Then my father-in-law came back, washed his face, picked up my kid, and rocked him to sleep, totally composed. I had known he was a professional, but usually his professionalism was accompanied by, uh, loud noises (he has a carrying voice). This time... it was a perfectly normal evening.

And I find that I respect him so much more. My model of doctors' professional behavior had been ruined by fiction (think McCoy from Star Trek, etc.), and now it seems such a simple and hard thing. So... I didn't mean 'learning experience' in a bad way.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2016-07-03T17:18:11.064Z · LW(p) · GW(p)

That sounds like a meaningful experience. Can you be more specific about the paradigm shift it caused and the questions you have about "upholding rationality"?

Replies from: None
comment by [deleted] · 2016-07-03T20:21:58.134Z · LW(p) · GW(p)

I guess it set the concepts of ruthlessness and cruelty further apart in my mind than they used to be. Before, when I had cause to be ruthless, I would always think to myself, "but normal people do not interfere with other people selling rare flowers; I have to exercise kindness as a virtue, otherwise see Crime and Punishment for the logical conclusion." (C&P is my father's favourite book, which he used most often to talk to us about morality.) Time and time again I ran into the problem of "do I have a right to do this?", and gradually decided that yes, I would just have to be cruel. And here my father-in-law made something which did shake him badly look like a trivial occurrence with which other people besides him simply did not have to engage, for all that my mother-in-law clearly saw it as ours to share in. They both belong to the more normal people I know, and I don't really like him, but his brand of ruthlessness is one I had tried to develop and never could. It reset our boundaries, somehow; before, I think I demanded of him to follow the same C&P guidelines, and now I'm trying not to. And I really truly believed them the consistent and rational approach to, er, life, even when I didn't behave accordingly, and now I don't have to. There's something 'normal people' do which doesn't require or invite this kind of moral questioning.

And I wonder what else they can do which I cannot, and what of it I really should be doing.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2016-07-07T21:15:07.249Z · LW(p) · GW(p)

Attempting to resuscitate a child, failing, and then going about one's day is neither ruthless nor cruel, but I think I understand what you mean. It can be jarring for some people when doctors are seemingly unaffected by the high intensity situations they experience.

Doing good does sometimes require overriding instincts designed to prevent evil. For instance, a surgeon must overcome certain natural instincts not to hurt when she cuts into a patient's flesh and blood pours out. The instinct says this is cruelty, the rational mind knows it will save the life of the patient.

There are hazards involved in overriding natural instincts, such as in C&P where the protagonist overrides natural instincts against murder because he is convinced that it is in the greater good, because instincts exist for good reason. There are also hazards involved in following natural instincts. Humans have the capacity for both.

Following instincts vs. overriding instincts, both variants are appropriate at different times. Putting correctly proportioned trust in reasoning vs. instinct is important. You need to consider when instincts mislead, but you also need to consider when reasoning misleads.

It would be a mistake to take a relatively clear-cut case of the doctor's override of natural sympathetic instinct (for which there is a great deal of training and precedent establishing that it is a good idea) and turn it into a generalized principle of "trust reason over moral instinct" under uncertainty. There is no uncertainty in the doctor's case; the correct path is obvious. Just because doctors are allowed to override instincts like "don't cut into flesh" and "grieve when witnessing death" in a case where it has already been predecided that this is a good idea doesn't mean they get free license to override willy-nilly whenever they've convinced themselves it's for a greater good; they still have to undergo the deliberative process of asking whether they've rationalized themselves into something bad.

Replies from: None
comment by [deleted] · 2016-07-08T06:43:39.481Z · LW(p) · GW(p)

I agree, although, given the same training you speak of, I think in their cases it is almost "instinct vs. reasoning", and so is not as hard a choice as it could be. (I also might be less unwilling to cut into flesh than other people, having had surgery myself and retained a mild interest in zootomy since my school years, so there's that.)

And in C&P, as I recall, Svidrigaylov blackmailed Raskol'nikov, quoting Raskol'nikov's own words that the prostitute's younger sister would go the same way... which might have been the first instance in which I learned that people should take care not to leak information, whether it be a statement of facts or a statement of their attitude to facts, however morally good it is. So now I take my observations with a grain of salt; and I want to trust my eyes, but that's about it...

comment by ChristianKl · 2016-06-22T19:12:46.252Z · LW(p) · GW(p)

It's often useful in cases like that to put your thoughts into writing.

Replies from: None
comment by [deleted] · 2016-06-22T20:23:51.202Z · LW(p) · GW(p)

Too confidential.

Replies from: g_pepper, ChristianKl
comment by g_pepper · 2016-06-22T21:28:24.789Z · LW(p) · GW(p)

Even if you do not share your written thoughts with anyone, writing them down can help to organize your thoughts into a form that can be more easily analyzed and evaluated (by you).

comment by ChristianKl · 2016-06-23T11:03:34.065Z · LW(p) · GW(p)

You can always write it down in a well-encrypted file on your computer.

comment by Daniel_Burfoot · 2016-06-23T19:51:11.356Z · LW(p) · GW(p)

File under "we're not as rich as we think we are": this Wiki page shows that economic-basket-case Greece has a higher median net worth than the US. Australia is astoundingly rich, +$60k higher than the US average (which includes the megawealthy) and $175k higher than the US median. Even econo-sluggard Italy has a $100k higher median than the US.

Replies from: Lumifer, knb
comment by Lumifer · 2016-06-23T20:31:18.095Z · LW(p) · GW(p)

Australia is astoundingly rich, +$100k higher than the US average (which includes the megawealthy)

You're reading the data wrong. Australian median = $225K, US average = $244K.

Overall, I have doubts about their methodology. The source publication is here and there are some... non-intuitive numbers in there. For example, page 92 shows changes in household wealth between 2012 and 2013. According to their estimates, the Swedes became richer by 15.5% and the Japanese poorer by over 20% in a single year. That looks fishy to me.

But yeah. Australians made out like bandits (ahem) selling ore to China.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-06-24T02:01:17.603Z · LW(p) · GW(p)

Fixed. I was using the mean wealth instead of net mean wealth. It's still amazing to me that the Aussie average exceeds the US average, given that the averages include megawealthy tech and finance billionaires. And amazing that Greece and Italy have higher median wealth than the US.

Replies from: gjm
comment by gjm · 2016-06-24T10:20:39.928Z · LW(p) · GW(p)

US culture is extraordinarily spendy, which arguably is good for GDP but bad for individual wealth except for those whose individual wealth benefits from corporate gain (who are mostly well above the median and therefore don't affect the median wealth).

comment by knb · 2016-06-24T23:49:12.958Z · LW(p) · GW(p)

I think an apples-to-apples comparison is tricky here. Things like the age structure of the population can matter a lot: a country with an average age of 50 should have a higher level of net worth than one with an average age of 30.

In any case, I'm not sure net worth is a valid way to think about "how rich we are", compared to income or consumption or quality of life or whatever.

comment by halcyon · 2016-06-23T13:58:56.707Z · LW(p) · GW(p)

I collected some social statistics from the internet and computed their correlations: https://drive.google.com/open?id=0B9wG-PC9QbVERHdiTi1uTlFMMlU My sources were: http://pastebin.com/ERk1BaBu

But I'm not sure how to proceed from there: https://drive.google.com/open?id=0B9wG-PC9QbVEWlRZSG9KM0ZFeVk Dotted lines represent positive correlations and arrowed lines negative correlations.

I obtained that confusing chart by following this questionable method: https://drive.google.com/open?id=0B9wG-PC9QbVEVHg1T1lQNE1ZTk0 First, drop the trivial correlations, like the ones among the different measures of national wealth, and the weaker correlations between +.5 and -.5. For each variable, select the correlation furthest from 0 and throw it into the chart. I also tried keeping only one measure of national wealth in the model, in hopes of less confusion: https://drive.google.com/open?id=0B9wG-PC9QbVEZlExWmhoOWRjVk0

I'm looking for help in analyzing this data. Are there any methods you would recommend? Which variables should I drop for better results? I tried keeping only proportions at one point. (Bayesian causal inference assumes the nonexistence of circular causation AIUI, a condition I can't guarantee with this data, to say the least.)

(Fixed the links. Sorry about that.)
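(For reference, the filtering described above is straightforward to script; a sketch with pandas, where the filename and column names are placeholders for whatever social statistics are in the dataset:)

    # Compute all pairwise correlations, then keep, for each variable, its
    # single strongest correlate, dropping anything with |r| < 0.5.
    import pandas as pd

    df = pd.read_csv("social_stats.csv")        # hypothetical file
    corr = df.corr()

    for col in corr.columns:
        others = corr[col].drop(col)            # ignore the self-correlation of 1.0
        best = others.abs().idxmax()
        r = corr.loc[col, best]
        if abs(r) >= 0.5:
            print(f"{col} <-> {best}: r = {r:.2f}")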

Replies from: Lumifer
comment by Lumifer · 2016-06-23T14:34:53.092Z · LW(p) · GW(p)

What is it that you want to do?

Just looking at correlations and nothing else can lead to funny results.

Replies from: halcyon
comment by halcyon · 2016-06-23T14:43:15.611Z · LW(p) · GW(p)

I'm trying to get at least a vague handle on what I can legitimately infer using data that might, and probably does, contain circular causation. I'm looking for statistical tools that might help me do that. Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields? Does the Spurious Correlations book have ideas on that? (No, it just seems to be an awesome set of correlations. Thanks, BTW.)

(Also notice that these are not just any correlations. These are the strongest correlations that hold among a large number of variables relative to each other. I mean, I computed all possible correlations among every combination of 2 variables, in hopes that the strongest I find for each variable might show something interesting.)

Replies from: Lumifer
comment by Lumifer · 2016-06-23T15:19:04.533Z · LW(p) · GW(p)

I'm trying to get at least a vague handle on what I can legitimately infer

That's not a very well-defined goal. You are engaging in what's known as a spaghetti factory analysis: make a lot of spaghetti, throw it on the wall, pick the most interesting shapes. This doesn't tell you anything about the world.

Sure, you can start with correlations. But that's only a start. Let's say you've got a high correlation between A and B. The next questions should be: Does it make sense? Is there a plausible mechanism underlying this correlation? Is it stable in time? Is it meaningful? And that's before diving into causality which correlations won't help you much with.

You still need a better goal of the analysis.

Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields?

Nooooo! You don't understand basic stats, trying to (mis)use complicated tools will just let you confuse yourself more thoroughly.

Replies from: halcyon
comment by halcyon · 2016-06-23T15:31:33.668Z · LW(p) · GW(p)

Sure, I can always offer my own interpretations, but the whole idea was to minimize that as much as possible. I can rationalize anything. Watch: Milk consumption is negatively correlated with income inequality. Drinking less milk leads to stunted intelligence, resulting in a rise in income inequality. Or income inequality leads to a drop in milk consumption among poor families. Or the alien warlord Thon-Gul hates milk and equal incomes.

What conditions must my goal satisfy in order to qualify as a "well-defined goal"? Have I made any actual (meaning technical) mistakes so far? (Anyway, thanks for reminding me to check for temporal stability. I should write a script to scrape the data off pdfs. (Never mind, I found a library.))

Replies from: Lumifer
comment by Lumifer · 2016-06-23T16:27:27.989Z · LW(p) · GW(p)

the whole idea was to minimize that as much as possible

I believe this idea to be misguided. The point of the process is to understand. You can't understand without "interpretation" -- looking for just the biggest numbers inevitably leads you astray.

The issue isn't what you can rationalize -- "don't be stupid" is still the baseline, level zero criterion.

What conditions must my goal satisfy in order to qualify as a "well-defined goal"?

A specification of what kind of answers will be acceptable and what kind will not.

Have I made any actual (meaning technical) mistakes so far?

Are you asking whether your spaghetti factory mixes flour and water in the right ratio?

Replies from: halcyon
comment by halcyon · 2016-06-23T16:48:49.604Z · LW(p) · GW(p)

Not being stupid is an admirable goal, but it's not well-defined. I tried Googling "spaghetti factory analysis" and "spaghetti factory analysis statistics" for more information, but it's not turning up anything. Is there a standard term for the error you are referring to?

Can't I have my common sense, but make all possible comparisons anyway just to inform my common sense as to the general directions in which the winds of evidence are blowing?

I don't see how informing myself of correlations harms my common sense in any way. The only alternative I can think of is to stick to my prejudices, and, whenever some doubt arises as to which of my prejudices has the stronger claim, thoroughly investigate real-world data to settle the dispute between the two, then stop immediately as soon as that process is over, because nothing else matters.

Is that the course of action you recommend?

Replies from: Lumifer
comment by Lumifer · 2016-06-23T17:53:25.339Z · LW(p) · GW(p)

Not being stupid is an admirable goal, but it's not well-defined.

It's not a goal. It is a criterion you should apply to the steps which you intend to take. I admit to it not being well-defined :-)

Is there a standard term for the error you are referring to?

In statistics that used to be called "data mining" and was a bad thing. Data science repurposed the term and it's now a good thing :-/ Andrew Gelman calls a similar phenomenon "garden of the forking paths" (see e.g. here).

Basically the problem is paying attention to noise.

Can't I have my common sense, but make all possible comparisons anyway

You can. It's just that you shouldn't attach undue importance to which comparison came first and which second. You're generating estimates, and at the very minimum you should also be generating what you think are the errors of those estimates -- these should be helpful in establishing how meaningful your ranking of all the pairs is.
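A minimal sketch of one way to get such error estimates, bootstrapping the correlation coefficient (plain numpy; the synthetic x and y stand in for any two columns of the data):

    # Resample the rows with replacement and look at the spread of the
    # recomputed correlation coefficients.
    import numpy as np

    def bootstrap_corr(x, y, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x)
        rs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)         # resample rows with replacement
            rs.append(np.corrcoef(x[idx], y[idx])[0, 1])
        rs = np.array(rs)
        return rs.mean(), rs.std()              # estimate and its spread

    rng = np.random.default_rng(1)
    x = rng.normal(size=50)                     # ~50 rows, like country-level data
    y = x + rng.normal(size=50)
    r, err = bootstrap_corr(x, y)
    print(f"r = {r:.2f} +/- {err:.2f}")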

And you still need to define a goal. For example, a goal of explanation/understanding is different from the goal of forecasting.

I'm not telling you to ignore the data. I'm telling you to be sceptical of what the data is telling you.

Replies from: halcyon
comment by halcyon · 2016-06-23T20:42:21.057Z · LW(p) · GW(p)

Thank you! Those data mining algorithms are exactly what I was looking for.

(Personally, I would describe the situation you are warning me against as reducing it "more than is possible" rather than "as much as possible". I am definitely in favor of using common sense.)

comment by OrphanWilde · 2016-06-22T17:21:40.847Z · LW(p) · GW(p)

Incidentally, do we have anybody about who can answer a very specific question about meditation practice? (And if you don't know exactly why I'm asking this question, instead of asking the question I want to ask, you shouldn't volunteer to try to answer.)

Replies from: ChristianKl
comment by ChristianKl · 2016-06-22T18:25:10.803Z · LW(p) · GW(p)

I have meditated for a long time and I'm learning from qualified people. I think I can answer a wide range of questions, but there might be questions arising from techniques that I don't know, where I can't give good answers.

comment by [deleted] · 2016-06-23T02:17:53.941Z · LW(p) · GW(p)

In lieu of a media thread

Replies from: Viliam
comment by Viliam · 2016-06-24T12:59:26.034Z · LW(p) · GW(p)

How much time would it take you to write a short description of what is in the linked article (I assume you have read it) and why anyone could be interested in reading it?

Compare it with the time spent together by all the people who click the link and then feel confused. (On the other hand, if no one clicks the link, what's the point of posting it?)