Open thread, Nov. 23 - Nov. 29, 2015
post by MrMind · 2015-11-23T07:59:52.370Z · LW · GW · Legacy · 258 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
258 comments, sorted by top scores.
comment by John_Maxwell (John_Maxwell_IV) · 2015-11-24T04:48:59.788Z · LW(p) · GW(p)
MealSquares (the company I'm starting with fellow LW user RomeoStevens) is searching for nutrition experts to join our advisory team. The ideal person has a combination of formally recognized nutrition expertise & also at least a casual interest in things like study methodology and effect sizes (this unfortunately seems to be a rare combination). Advising us will be an opportunity to improve the diets of many people, it should not be much work, you'll get a small stake in our company, and you'll help us earn money for effective giving. Please get in touch with us (ideally using this page) if you or someone you know might be interested!
Replies from: None, None, MarsColony_in10years, passive_fist
↑ comment by [deleted] · 2015-11-24T10:46:54.160Z · LW(p) · GW(p)
I'm not the right person at all, but if you ever want an amateur data enthusiast to help clean and present research results, I'd be willing to donate my time. The project is interesting and I would like to start stretching my skill set. I am pretty good at graphing in R and have a solid understanding of probability theory (undergrad level). I also have a good intuition for cleaning data sets.
All of that evaluation is based on what other math nerds have told me, so I understand if you're not interested!
↑ comment by [deleted] · 2015-11-25T15:32:20.422Z · LW(p) · GW(p)
Do you have any plans for international shipping? (Say, the UK)
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-11-26T08:18:53.721Z · LW(p) · GW(p)
We've experimented with doing international shipping. It gets expensive, and it's also a bit of a hassle. It makes more sense if you're doing a group buy (90+ squares). If you really want MealSquares and you're willing to pay a bunch extra for international shipping, contact us and we can work out details. Long term we would love to set up production facilities in foreign countries like a regular multinational, but that won't be for a while.
↑ comment by MarsColony_in10years · 2015-11-25T16:24:43.630Z · LW(p) · GW(p)
you'll help us earn money for effective giving
I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now. However, 2 questions:
Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?
What sorts of EA charities are you interested in?
I've been using MealSquares regularly, without realizing that you guys were LWers or EAs. As such, I've been using mostly Soylent because of the cost difference. (A 400 Calorie MealSquare is ~$3, a 400 Calorie jug of Soylent 2.0 is ~$2.83, 400 Calories worth of unmixed Soylent powder is ~$1.83, and the ingredients for 400 Calories worth of DIY People Chow are ~$0.70. All these are slightly cheaper with a subscription/large purchase.)
I ask because, if you happen to be interested in similar EA causes to me and expect to eventually donate X% of proceeds, then I should be budgeting my expenses to factor that in. If (100%-X%) * MealSquares_Cost < Soylent_Cost, then I would buy much less Soylent and much (/many?) more MealSquares. I'd be paying a premium over Soylent in order to add a bit more culinary variety. (Also, I realize this X isn't equal to the expected altruistic return on investment, but that would be even harder to estimate.)
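A quick sketch of that inequality with the prices quoted above plugged in; the 10% donation fraction is purely a hypothetical placeholder, not a figure from either company:

```python
# Sketch of the budgeting inequality above, using the quoted per-400-Calorie
# prices. The donation fraction X is a hypothetical placeholder.
mealsquares_cost = 3.00     # $ per 400 Calories of MealSquares
soylent_jug_cost = 2.83     # $ per 400 Calories of Soylent 2.0 (jug)
soylent_powder_cost = 1.83  # $ per 400 Calories of unmixed Soylent powder
donation_fraction = 0.10    # hypothetical X = 10%

effective_cost = (1 - donation_fraction) * mealsquares_cost  # 2.70
print(effective_cost < soylent_jug_cost)     # True: beats jug Soylent
print(effective_cost < soylent_powder_cost)  # False: still pricier than powder
```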
Replies from: John_Maxwell_IV, Lumifer
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-11-26T08:43:46.338Z · LW(p) · GW(p)
I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now.
Yep, that's what we've been doing. (We've been providing free MealSquares to some EA organizations, but we haven't been donating a significant portion of our profits directly.)
Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?
At least 10%, hopefully significantly more.
What sorts of EA charities are you interested in?
We've been trying to focus on growing our business rather than evaluating EA giving opportunities. If we actually do make a lot of money to donate, it will make sense to spend a lot of time thinking about where to give it. And we'll try & focus on identifying opportunities that we have a comparative advantage in (opportunities that are more suited to large donors, like funding a new organization from scratch).
I'm not exactly sure why, but for some reason the idea of people buying our product because we are EAs makes me uncomfortable. I would much rather people buy it because it's good for you, convenient, tasty, etc. As you point out, we are less than 10% more expensive on a per-calorie basis than jug-form Soylent. Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, or something else?
Replies from: MarsColony_in10years
↑ comment by MarsColony_in10years · 2015-11-26T10:16:50.176Z · LW(p) · GW(p)
the idea of people buying our product because we are EAs makes me uncomfortable.
In retrospect, I think that would make me uncomfortable too. In your position, I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion. On the other hand, maybe a deep feeling of obligation to charity isn't a bad thing?
Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, something else?
Based on my (fairly limited) understanding of nutrition, I suspect that any marginal difference between your products is fairly small. I suspect humans get strongly diminishing returns (in the form of increased lifespan) once we have our basic nutritional requirements met in bio-available forms and without huge amounts of anything harmful. After that, I'd expect the noise to overpower the signal. For example, perhaps unmeasured factors like my mood or eating habits change as a function of my Soylent/MealSquares choice, and I wind up getting fast food more often, or getting less work done, or something. Let's say it would take me a month of solid research and reading nutrition textbooks to make a semi-educated decision about which of two good things is better. Would the added health benefit give me an additional month of life? What if I value my healthy life, here and now, far more than one more month spent senile in a nursing home? What if I also apply hyperbolic discounting?
I've probably done more directed health-related reading than most people. (Maybe 24 hours total, over the past year or so?) Enough to minimize the biggest causes of death, and have some vague idea of what "healthy" might look like. Enough to start fooling around with my own DIY soylent, even if I wouldn't want to eat that every day without more research. If someone who sounds knowledgeable sits down and does an independent review, I'd probably read it and scan the comments for critiques of the review.
Replies from: John_Maxwell_IV, Tem42
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-11-26T23:04:35.545Z · LW(p) · GW(p)
Thanks for the explanation. I wrote up some of the details of our approach here. Nutrition is far from being settled, and major discoveries have been made just in the past 50 years. Therefore we take an approach that's fairly conservative, which means (among other things) getting most of our nutrients from whole foods, the way humans have been eating for virtually all of our species' history. We think the burden of proof should be on Soylent to show that their approach is a good one.
↑ comment by Tem42 · 2015-11-26T17:57:41.326Z · LW(p) · GW(p)
the idea of people buying our product because we are EAs makes me uncomfortable.
I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion.
I think many people would run the equation the other way -- buying from a company that gives a portion to charity is a way to pressure competing companies to do the same. In other words, MealSquares gives consumers a way to put pressure on the industry. Of course, there are a lot of ways that model could be flawed, but you're hardly abusing the people who make that choice.
↑ comment by Lumifer · 2015-11-25T16:39:41.374Z · LW(p) · GW(p)
I'd be paying a premium to Soylent in order to add a bit more culinary variety.
/chokes on his foie gras X-D
Replies from: MarsColony_in10years
↑ comment by MarsColony_in10years · 2015-11-26T08:45:09.167Z · LW(p) · GW(p)
Someone gave you a downvote. If it was on my behalf or on the behalf of Soylent, then for the record I thought it was funny. :)
↑ comment by passive_fist · 2015-11-24T23:07:03.178Z · LW(p) · GW(p)
How does your product compare to widely-available meal replacement foods, like, say: http://www.cookietime.co.nz/osm.html ?
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-11-25T03:09:22.512Z · LW(p) · GW(p)
MealSquares are nutritionally complete--5 MealSquares contain all the vitamins & minerals you need to survive for a day, in the amounts you need them. In principle you could eat only MealSquares and do quite well, although we don't officially recommend this. It's more about having an easy "default meal" that you can eat with confidence once or twice a day when you don't have something more interesting to do like get dinner with friends.
MealSquares is made from a variety of whole foods, and almost all of the vitamins and minerals are from whole food sources (as opposed to competing products like Soylent that use dubious vitamin powders). Virtually every nutrition expert in the past century has recommended eating a variety of whole foods, and MealSquares stuffs more than 10 whole food ingredients into a single convenient package, including 3 different fruits and 3 different vegetables.
We've put a lot of research into MealSquares to make it better for you than most or all competing products on the market. For example, the first ingredient in Clif Bar is brown rice syrup (basically a glorified form of sugar), and they get their protein from rice and soy (not as bioavailable as other sources). MealSquares contains only a bit of added sugar (dark chocolate chips) and bioavailable protein sources. I'm having a hard time finding solid nutrition info on the One Square Meal website. But you can see that our 400 calorie bar (120 grams) has only 12 grams of sugar, so 10% sugar by weight, whereas their bar is 17.1% sugar by weight.
Most competing meal bars are similar: non-bioavailable protein sources and lots of sugar, generally added sugar. Clif Bar is basically a candy bar disguised to be healthy: it has 23 grams of sugar in a 230 calorie bar, and a Hershey's Milk Chocolate with Almonds bar has 19 grams of sugar in a 210 calorie bar. Most meal bar makers are doing the nutritional equivalent of taking a Hershey bar, adding in some vitamin powders and soy protein isolate, and telling their customers that it's a healthy snack.
The biggest practical difference between us and One Square Meal is probably that we are available in the US and they are available in New Zealand.
Replies from: passive_fist
↑ comment by passive_fist · 2015-11-25T03:36:23.212Z · LW(p) · GW(p)
Interesting, thanks for the info. Yes, most meal replacement bars seem to be simply soy-augmented candy bars; however, there is of course a practical reason for this: sweet foods sell better.
It might be worth mentioning on your site that your product is healthier and has less sugar than the alternatives. Another problem is soy protein. Some research hints at soy protein having undesirable hormone-imitating effects (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074428/), so this could be a selling point as well, as I presume you do not use soy protein.
comment by [deleted] · 2015-11-24T23:27:42.443Z · LW(p) · GW(p)
More data on Kepler star KIC 8462852.
http://www.nasa.gov/feature/jpl/strange-star-likely-swarmed-by-comets
After going back through Spitzer space telescope infrared images, the star did not show an infrared excess as recently as earlier in 2015, meaning that there wasn't some kind of event that generated huge amounts of persistent dust between the last measurements of spectra and the Kepler dataset showing the dips in brightness. This bolsters the 'comet storm / icy body breakup' theory: such an event would generate dust close to the star that rapidly goes away, and would be positioned such that we are primed to see a large fraction of it as it is generated, rather than a tiny fraction of dust further away.
(This comes after the Allen Telescope Array, failing to detect anything interesting, put an upper limit on radio radiation coming from the system at 'weaker than 400x the strength we could put out with Arecibo in narrow bands, or 5,000,000x in wide bands', for what that's worth.)
comment by [deleted] · 2015-11-24T04:09:49.720Z · LW(p) · GW(p)
Why is my karma so low? Is there something I'm consistently doing wrong that I can do less wrong? I'm sorry.
Replies from: Viliam, Lumifer, ChristianKl, NancyLebovitz, None, entirelyuseless, Richard_Kennaway, polymathwannabe, MrMind, Elo, SanguineEmpiricist
↑ comment by Viliam · 2015-11-24T09:11:30.541Z · LW(p) · GW(p)
The first association I have with your username is "spams Open Threads with not really interesting questions".
Note that there are two parts to that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.
Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:
How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?
First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.
Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then maybe being more specific in the question could help you get a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.
↑ comment by Lumifer · 2015-11-24T15:37:42.111Z · LW(p) · GW(p)
You use LW as a dumping ground for whatever crosses your mind at the moment, and that is usually random and transient noise.
Replies from: None
↑ comment by ChristianKl · 2015-11-24T09:58:29.266Z · LW(p) · GW(p)
As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.
One way to do this might be: whenever you write a post, keep it in a text file and wait a day. The next day, ask yourself whether there is anything you can do to improve it. If you feel you can improve it, do it. Then estimate a confidence interval for the karma you expect your post to get and take a note of it in a spreadsheet. If you think it will be positive, post your comment.
If you train that skill I would expect you to raise your karma and learn a generally valuable skill.
If at the end of writing a post you think "I’m not sure where I was going with this anymore." as in http://lesswrong.com/r/discussion/lw/mzx/some_thoughts_on_decentralised_prediction_markets/ , don't publish the post. If you yourself don't see the point in your writing it's unlikely that others will consider it valuable.
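A minimal sketch of the prediction-and-calibration log described above; the file name and fields are made up for illustration:

```python
import csv

PATH = "karma_predictions.csv"  # hypothetical log file

def log_prediction(comment_id, low, high, path=PATH):
    """Record a predicted karma interval before posting."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([comment_id, low, high])

def calibration(actual_karma, path=PATH):
    """Fraction of logged intervals that contained the actual karma.

    actual_karma: dict mapping comment_id -> observed karma.
    """
    hits = total = 0
    with open(path) as f:
        for comment_id, low, high in csv.reader(f):
            if comment_id in actual_karma:
                total += 1
                hits += int(float(low) <= actual_karma[comment_id] <= float(high))
    return hits / total if total else None
```

If your intervals contain the actual score much less often than you expected, widen them (or post less); that is the calibration habit being recommended.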
Replies from: moridinamael, Tem42
↑ comment by moridinamael · 2015-11-24T16:16:16.952Z · LW(p) · GW(p)
As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.
This is the best advice. The trick to keeping high karma is to cultivate your discernment. Each time you write a post, assess its value, and then delete it if you don't anticipate people appreciating it. View that deletion as a victory equal to the victory of posting a high-karma comment.
Replies from: Elo
↑ comment by NancyLebovitz · 2015-11-24T14:19:39.966Z · LW(p) · GW(p)
Thank you for asking. I've been trying to figure out what to say to you, but couldn't figure out quite what the issue is. One possibility in terms of karma is to bundle a number of comments into a single comment, but this doesn't address how the comments could be better.
A possible angle to work on is being more specific. It might be like the difference between a new computer user and a more sophisticated computer user. The new user says "My computer doesn't work!", and there is no way to help that person from a distance until they say what sort of computer it is, what they were trying to do, and some detail about what happened.
Being specific doesn't come naturally to all people on all subjects, but it's a learnable skill, and highly valued here.
↑ comment by [deleted] · 2015-11-24T04:46:41.754Z · LW(p) · GW(p)
I think it's that you post a lot of questions and not a lot of content. Less Wrong is predisposed to upvoting high-content responses. I haven't had an account for very long, but I have lurked for ages. That's my impression, anyways. I recognize that since I haven't actually pulled comment karma data from the site and analyzed it, I could be totally off-base.
Maybe when you ask questions, use this form:
[This is a general response to the post] and [This is what is confusing me] but [I thought about it and I think I have the answer, is this correct?] or [I thought about it, came up with these conclusions, but rejected them for reasons listed here, I'm still confused]
EDIT: I just looked at your submitted history. You do post content in Main, apparently, but your posts seem to run counter to the popular ideas here. There is bias, and LessWrong has a lot of ideas deemed "settled." Effective Altruism appears to be one, and you have posted arguments against it. I've also seen some of your posts jump to conclusions without explaining your explicit reasons. LWers seem to appreciate having concepts reduced as much as possible to make reasoning more explicit.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-24T10:13:07.320Z · LW(p) · GW(p)
There is bias, and LessWrong has a lot of ideas deemed "settled."
Any group has a lot of ideas that are settled. If you want to convince any scientifically minded group that Aristotle's four elements are real, then you have to hit a high bar to avoid getting rejected. If anything, LW allows an unusually wide array of contrarian positions.
LW's second highest voted post is Holden's post against MIRI, which is contrarian to core ideas of this community in the same sense as a post criticizing EA is. The difference is that the post actually goes deep and makes a substantive argument.
Replies from: None
↑ comment by [deleted] · 2015-11-24T10:35:51.955Z · LW(p) · GW(p)
I want to say that that's what I was trying to imply, but that might be backwards-rationalization. I do have the impression that contrarian ideas are accepted and lauded if and only if they're presented with the reasoning standards of the community. I'll be honest: LW does strike me as far-fetched in some respects BUT I recognize that I haven't done enough reading on those subjects to have an informed opinion. I've lurked but am not an ingrained member of the community and can't give a detailed analysis of the standards. Only my impression.
AND I realize that this sounds defensive, and I know there's no real reason for my ego to be wounded. I appreciate your input! I hope that my advice to Clarity wasn't too far off the mark. I tried to be clear about my advice being based on impressions more than data.
EDIT: removed "biased," replaced with "far-fetched."
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-24T11:15:08.478Z · LW(p) · GW(p)
I do have the impression that contrarian ideas are accepted and lauded if and only if they're presented with the reasoning standards of the community.
Yes, LW does have reasoning standards. That's part of what refining the art of human rationality is about.
LW does strike me as biased in some respects
What do you mean by "biased"? That LW is different from mainstream society in the ideas it values?
Do you think it's a bias to treat a badly reasoned post which might result in people dying differently than a harmless badly reasoned post?
Replies from: None
↑ comment by [deleted] · 2015-11-24T11:39:23.612Z · LW(p) · GW(p)
Obviously it has reasoning standards. They are much higher than the average person might expect, because that's one of the goals of the community.
Bias was a poor word to use, and I retract it. I mean that, as a relatively new participant, there are ideas that seem far-fetched because I have not examined the arguments for them. I admit that this is nothing more than my visceral reaction. Until I examine each issue thoroughly, I won't be able to say anything but "that viscerally strikes me as biased." Cryonics, for instance, is a conclusion that seems far-fetched because I have a very poor understanding of biology and no exposure to the discussion around it. Without a better background in the science and philosophy of cryonics, I have no way of incorporating casual acceptance of the idea into my own conclusions. I recognize that, admit it, and am apparently not being clear about that fact. In trying to express empathy with a visceral reaction of disbelief, I misused the word "bias" and will be more clear in the future.
On the second point: I understand that there's a cost to treating every post with the same rigor. Posts that are poorly reasoned, and come to potentially dangerous conclusions, should be examined more rigorously. Posts that are just as bad, but whose conclusions are less dangerous, can probably be taken less seriously. Even so...someone who makes many such arguments, with a mix of dangerous and less-dangerous conclusions, might see a lack of negative feedback as positive feedback. That's an issue in itself, but newcomers wouldn't be in a position to recognize that.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-24T11:59:44.381Z · LW(p) · GW(p)
Cryonics is not a discussion that's primarily about biology. A lot of outsiders will want to either think that cryonics works or that it doesn't. On LW there is a current that we don't make binary judgements like that but instead reason with probabilities. So thinking that there is a 20% chance that cryonics works is enough for people to go out and buy cryonics insurance, because of the huge value cryonics has if it succeeds. That's radically different from how most people outside of LW think.
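A toy expected-value calculation behind that reasoning; all numbers are placeholders, not claims about actual costs or payoffs:

```python
# Toy expected-value comparison. All numbers are placeholders.
p_works = 0.20              # subjective probability that cryonics works
value_if_works = 1_000_000  # value of success, in arbitrary units
cost = 50_000               # hypothetical lifetime cost of cryonics insurance

expected_value = p_works * value_if_works - cost
print(expected_value)  # 150000.0: positive, so a 20% chance can justify signing up
```

The point is not the specific numbers but that the decision turns on probabilities and stakes, not on a binary works/doesn't-work judgement.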
Replies from: Viliam, None
↑ comment by Viliam · 2015-11-25T07:03:58.601Z · LW(p) · GW(p)
Cryonics is not a discussion that's primarily about biology.
Well, the biological aspect is "where exactly in the body is 'me' located"?
For example, many people on LW seem to assume that the whole 'me' is in the head; so you can just freeze the head and feed the rest to the worms. Maybe that's a wrong idea; maybe the 'me' is much more distributed in the body, and the head is merely a coordinating organ, plus a center of a few things that need to work really fast. Maybe if future science revives the head and connects it to some cloned/artificial average human body, we will see the original personality replaced by a more or less average personality; perhaps keeping the memories of the original, but unable to empathise with the hobbies or values of the original.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-25T09:58:48.304Z · LW(p) · GW(p)
For example, many people on LW seem to assume that the whole 'me' is in the head; so you can just freeze the head, and feed the rest to the worms.
Whether you need to freeze the whole body or whether the head is enough is a meaningful debate, but it has little to do with why a lot of people oppose cryonics.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2015-11-25T13:28:10.721Z · LW(p) · GW(p)
At this stage, I can see an argument for freezing the gut, or at least samples of the gut, so as to get the microbiome. Anyone know about reviving frozen microbes?
Replies from: Lumifer
↑ comment by [deleted] · 2015-11-25T01:20:31.741Z · LW(p) · GW(p)
I understand that; I'm still not comfortable enough with the discussion about cryonics to bet on it working.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-25T10:01:09.935Z · LW(p) · GW(p)
Do you have a probability in your head about cryonics working or not working, or do you feel uncomfortable assigning a probability?
Replies from: None
↑ comment by [deleted] · 2015-11-25T10:30:18.030Z · LW(p) · GW(p)
A little of both, I think.
- There is evidence for and against cryonics that I KNOW exists, but I haven't parsed most of it yet.
- If I come to the conclusion that cryonics insurance is worth betting on, I am not sure I can get my spouse on board. Since he'd ultimately be in charge of what happens to my remains, AND we have an agreement to be open about our financial decisions, him being on board is mandatory.
- If I come to the conclusion that cryonics is worth betting on, I might feel morally obligated to proselytize about it. That has massive social costs for me.
- I'm freaked out by the concept because very intelligent people in my life have dismissed the concept as "idiotic," and apparently cryonics believers make researchers in the field of cryogenics very uncomfortable.
Basically, it's a whole mess of things to come to terms with. The spouse thing is the biggest.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-25T12:04:21.372Z · LW(p) · GW(p)
I think those concerns are understandable but the thing that makes LW special is that discourse here often ignores uncomfortable barriers of thought like this. That can feel weird for outsiders.
↑ comment by entirelyuseless · 2015-11-24T14:38:58.297Z · LW(p) · GW(p)
A large proportion of your comments seem very distracting and sort of off-topic for Less Wrong.
Replies from: None
↑ comment by [deleted] · 2015-11-24T23:45:04.294Z · LW(p) · GW(p)
Thanks. Can I have an example that is either self-evidently distracting and off-topic, or comes with an explanation of why it is?
Replies from: NancyLebovitz, entirelyuseless
↑ comment by NancyLebovitz · 2015-11-25T13:45:06.731Z · LW(p) · GW(p)
I looked at a few pages of your comment history to see if I could find a particularly horrible example to base an explanation on (entirelyuseless's link is appropriate), but I was surprised to find that the vast majority of your comments had no karma rather than downvotes.
I'm not sure what you need to do to upgrade or edit out your typical comment. Possibly you could review your upvoted comments to see how they're different from your usual comments.
↑ comment by entirelyuseless · 2015-11-25T02:21:28.166Z · LW(p) · GW(p)
This is a sufficiently evident example.
↑ comment by Richard_Kennaway · 2015-11-25T11:05:03.358Z · LW(p) · GW(p)
In addition to what everyone else has said, here's a useful article on how to ask smart questions. It's talking about asking technical questions on support forums, but the matter generalises, especially the advice to make your best effort to answer it yourself, before asking it publicly, and when you do, to provide the context and where you have got to already.
while it isn't necessary to already be technically competent to get attention from us, it is necessary to demonstrate the kind of attitude that leads to competence — alert, thoughtful, observant, willing to be an active partner in developing a solution.
Replies from: None
↑ comment by [deleted] · 2015-11-28T00:17:26.877Z · LW(p) · GW(p)
Thanks, that article is incredible. I hope to see one about how to answer questions, and how to understand answers, too! After reading it, some contemplation on the matter, and some chance happenings upon information I feel is relevant, I believe I've changed a lot:
Recently a highly admired friend of mine said something along the lines of 'I've never said anything that wasn't intentional'. Whereas for me, most of what I say is unintentional, just observed. So this got me thinking pretty hard about these things. With it on my mind, I suppose I got the following sliver of personal development when I started looking up some podcasts to comfort myself the following day:
I'm vain. When I listen to things, personal development podcasts or not, I tend to look for what could be about me. I sampled the Danger and Play podcasts and like what I've heard. Inspired by the way he frames self-talk as interpersonal illocution, my mental landscape has changed steeply. One consequence of this has been that I'm no longer held captive to 'believing' the first thought or idea that comes to my head. Rather, it's as if it's just one mental subagent's proposition, to be contested and such. I am now biased towards reserving my thoughts till a more complex stopping rule is met, like concluding that a certain verbalisation would lead to a certain outcome (e.g. the conclusion is positive emotionally, raises my anxiety to an optimal level, and/or is functional by way of interpersonal compliance), rather than voicing something that just spews from my mind.
Perhaps a precursor to this has been a general dampening of how seriously I've been taking my moral intuitions. I've contextualised them in terms of the fact that they are predated by evolutionary forces, context, and such. Approximately an expressivist position regarding moral language, championed sometimes by A. J. Ayer and the logical positivists, if I remember the Wikipedia page correctly... but even, say, an ingrained sense of helplessness then seems no longer to relate to entrenched circumstances, but liable to change depending on the path dependence of my memory - something influenced by the past, but continuously influenced by the ongoing present, even for older memories that are revisited and updated, reframed, etc.
Danger and Play is part of the 'red pill' 'manosphere' of content. Frequently the movement is derided as misogynistic. I can't speak to that, since I reckon it is heterogeneous in people's attitudes towards women, and labelling a broad category critically is misleading (like labelling all Islamists as terrorists, for analogy). Some of my sticking points in gender and sexual relations seem to relate to underdeveloped learned optimism and growth mindset. It seems like some 'red pill' and related 'seduction' movements include elements that are antithetical to developing these:
To illustrate, the prominent RSD company often frames things in ways that don't suggest negative things are situational and temporary, while making global judgements about negative things (eg: 'life is hard...'). That's a recipe for learned helplessness. Which may very well be good for their business model, combined with all the motivation they spew out. In fact, this observation probably holds for a number of motivational video channels that want to keep people coming back. There are certainly exceptions - I remember one which started off with that quote from Albert Einstein that closely approximates a pithy summation of a growth mindset and learned optimism, but the details escape me.
One thing that really compels and reminds me to think in this reflective way is simply that a lot of my intuitions are really quite mean to myself. When that podcast instructed me to stand back and think of myself as another person, it just seemed absurd to treat myself like that. I mean, if I find effective altruism compelling because it's nice to do, isn't the most proximate - and therefore likely one of the easier or more reliable - niceties to be nice to myself? In turn, it looks like that will lead to:
competence — alert, thoughtful, observant, willing to be an active partner in developing a solution.
The kind of attitude that makes for smarter questions...
↑ comment by polymathwannabe · 2015-11-24T16:29:21.940Z · LW(p) · GW(p)
Usually, your questions feel more suited for a general-purpose forum than the narrowly specialized set of interests commonly discussed here. (We do have "Stupid Questions" and "Instrumental Rationality" threads, but even those follow the same standards for comment quality as the rest of LW.)
Also, posting a dozen questions in succession may give users the impression that you're trying to monopolize the discussion. Even if that's not your intention, I would understand it if some users ended up thinking it is.
I would suggest looking for specialized forums on some of the topics that interest you, and using LW only for topics likely to be of interest to rationalists.
Replies from: None
↑ comment by [deleted] · 2015-11-24T23:45:31.330Z · LW(p) · GW(p)
Thanks. Do you have a suggestion for another forum you recommend I move to?
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2015-11-25T01:19:39.650Z · LW(p) · GW(p)
I don't know much about topic-specific forums, but seeing as you like to ask frequent questions, Reddit and Quora come to mind.
↑ comment by MrMind · 2015-11-24T08:02:47.423Z · LW(p) · GW(p)
Many of your comments get downvoted, sometimes heavily. In every open thread you post a lot of questions, some of them completely off topic.
A single good question in the open thread can give you 2-3 karma, but a single bad one can go down to -7 or less. So stop asking so many irrelevant questions and start contributing.
↑ comment by Elo · 2015-11-24T22:32:55.248Z · LW(p) · GW(p)
As a hard rule, when posting in the open thread, the ratio of your posts to posts by others should always be below 1:3 (others might want to comment and suggest 1:4). You should post fewer than 1 in 4 of the posts in the open thread. Your posts often read like a stream of consciousness (I think you know this already), and you might be better off sitting on thoughts for a day or so and re-evaluating them for yourself before posting.
As a side note: presentation of an idea can help the reception. We are still human; and do care for delicate wording on some topics.
Replies from: None
↑ comment by [deleted] · 2015-11-24T23:49:40.463Z · LW(p) · GW(p)
Thanks. I do tend to sit on my ideas, or I like to post and then update those posts or reply with reflections when I revisit those thoughts, so that I and others can see how my thinking changes over time.
My ratio is only that high when there is a new open thread. Since I post in blocks by formulating several posts and then posting them when I next get a chance, it may appear early on that my ratio is high. But by the end of the month, I am certainly nowhere near that ratio.
I am continuously trying to improve my presentation. Unfortunately, to date I have received minimal specific feedback on how to do so. Sometimes I feel the stream of consciousness approach better illustrates the way I'm thinking about a certain thing.
Replies from: gjm, Elo
↑ comment by gjm · 2015-11-25T00:03:11.069Z · LW(p) · GW(p)
It may well do, but illustrating the way you're thinking about something isn't necessarily a good goal here. Why should anyone else care how you happen to be thinking about something?
There may be special cases in which they do. If you are a world-class expert on something it could be very enlightening to see how you think about it. If you are just a world-class thinker generally, it might be fascinating to see how you think about anything. Otherwise, not so much.
↑ comment by Elo · 2015-11-25T09:49:18.399Z · LW(p) · GW(p)
It may be worth releasing the posts gradually over the course of the week so as to not make it look like a clump (and again, paying attention to that ratio). I agree that you seem to post a chunk all at once each week, but it may serve better to spread out your posts.
↑ comment by SanguineEmpiricist · 2015-11-27T08:03:44.792Z · LW(p) · GW(p)
Don't buy these comments too much. I'm glancing through them; they're much too critical. Listen to Nancy if anyone.
comment by ShardPhoenix · 2015-11-23T23:16:36.631Z · LW(p) · GW(p)
What is the optimal amount of attention to pay to political news? I've been trying to cut down to reduce stress over things I can't control, but ignoring it entirely seems a little dangerous. For an extreme example, consider the Jews in Nazi Germany - I'd imagine those who kept an eye on what was going on were more likely to leave the country before the Holocaust. Of course something that bad is unlikely, but it seems like it could still be important to be aware of impactful new laws that are passed - eg anti-privacy laws, much heavier punishments for internet piracy, etc.
So what's the best way to keep up on things that might have an impact on one's life, without getting caught up in the back-and-forth of day-to-day politics?
Replies from: fubarobfusco, NancyLebovitz, VoiceOfRa, Lumifer, ChristianKl, Elo, Tem42
↑ comment by fubarobfusco · 2015-11-24T20:42:17.833Z · LW(p) · GW(p)
Some things to think about:
Are there actual political threats to you in your own polity (nation, state, etc.)? Do you belong to groups that there's a history of official repression or large-scale political violence against? Are there notable political voices or movements explicitly calling for the government to round you up, kill you, take away your citizenship or your children, etc.? (To be clear: An entertainer tweeting "kill all the lawyers" is not what I mean here.)
Are you engaged in fields of business or hobbies that are novel, scary, dangerous, or offensive to a lot of people in your polity, and that therefore might be subject to new regulation? This includes both things that you acknowledge as possibly harmful (say, working with poisonous chemicals that you take precautions against, but which the public might be exposed to) as well as things that you don't think are harmful, but which other people might disagree. (Examples: Internet; fossil fuels; drones; guns; gambling; recreational drugs; pornography)
Internationally — In the past two hundred years, how often has your country been invaded or conquered? How many civil wars, coups d'état, or failed wars of independence have there been; especially ones sponsored by foreign powers? How much of your country's border is disputed with neighboring nations?
Replies from: Lumifer
↑ comment by NancyLebovitz · 2015-11-24T00:14:16.466Z · LW(p) · GW(p)
For the extreme stuff, I think you'll get clues from things like how people like you are treated on the street -- if it's your own country. If you're at risk of being conquered by a government that hates you, the estimate is more complicated.
For the more likely things to keep track of, think about what's likely to affect you (like changes in laws) and use specialist sources.
↑ comment by VoiceOfRa · 2015-11-27T04:22:26.594Z · LW(p) · GW(p)
This is harder than it seems. For example, to find out when you need to withdraw your money ahead of a banking crisis, like what happened in Cyprus and Greece, you need to figure this out ahead of everybody else. Furthermore, the authorities are going to be doing their best to cover up the impending crisis.
↑ comment by Lumifer · 2015-11-24T01:10:56.215Z · LW(p) · GW(p)
What is the optimal amount of attention to pay to political news?
To electioneering, zero would be about right (unless you appreciate the entertainment value). To particular laws and/or regulations which might affect you personally, enough to know the landscape.
↑ comment by ChristianKl · 2015-11-24T10:12:29.959Z · LW(p) · GW(p)
If you live in the US I would guess that if you read LW you will see comments about really important political events.
↑ comment by Elo · 2015-11-24T22:38:05.799Z · LW(p) · GW(p)
how I do it -
Things that I care about: local events (likelihood of terrorism, or safety threats nearby)
Things I don't care about: any politics that is further away than that. (and not likely to affect my life)
global, country-wide, natural disasters that are far away.
↑ comment by Tem42 · 2015-11-24T21:32:58.107Z · LW(p) · GW(p)
Get weekly updates from light, happy sources (The Daily Show, The News Quiz, Mock the Week), and then specific searches for things that sound important.
Replies from: VoiceOfRa, Viliam
↑ comment by VoiceOfRa · 2015-11-27T04:19:53.775Z · LW(p) · GW(p)
Those strike me as worse than useless for the kind of things ShardPhoenix is interested in, e.g., they are the kinds of shows that would mock the "idiots" who believe the "ridiculous conspiracy theory" that the Nazis are actually planning to systematically exterminate the Jews.
comment by Curiouskid · 2015-11-23T16:54:33.723Z · LW(p) · GW(p)
So, it seems like lots of people advise buying index funds, but how do I figure out which specific ones I should choose?
Replies from: Vaniver, Richard_Kennaway, None, Curiouskid, Lumifer
↑ comment by Vaniver · 2015-11-23T18:57:51.497Z · LW(p) · GW(p)
Short version: try something like Vanguard's online recommendation, or check out Wealthfront or Betterment. Probably you'll just end up buying VTSMX.
Long version: The basic argument for index funds over individual stocks is that you expect a broad pool of stocks to outperform any individual stock you would pick, because of general economic growth and reduced risk through pooling. So if you apply the same logic to index funds, what that argues is that you should find the index fund that covers the largest possible pool.
But it also becomes obvious that this logic only stretches so far--one might think that meta-indexing requires having a stock index fund and a bond index fund that are both held in proportion to the total value of stocks and bonds. So let's start looking at the factors that push in the opposite direction.
First, historically stocks have returned more than bonds long-term, with higher variability. It makes sense to balance your holdings based on your time and risk preferences, rather than the total market's time and risk preferences. (If you're young, preferentially own stocks.)
As well, you might live in the US, for example, and find it more legally convenient to own US stocks than international stocks. The corresponding fund is VTSMX, for the total US stock market. If you want the global fund, it's VTWSX.
You might have beliefs about small caps and large caps, or sectors, and so on and so on. One mistake to avoid here is saying "well, I have three options, so clearly I should put a third of my money into each option," especially because many of these options contain each other--the global fund mentioned earlier is also a US fund, because the US is part of the globe.
Replies from: solipsist
↑ comment by solipsist · 2015-11-25T12:50:38.050Z · LW(p) · GW(p)
Asset allocation (what portion of your money is in stocks and bonds) is very important, depends on your age, and will get out of whack unless you rebalance. So use a Vanguard Target Retirement Date fund.
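A sketch of the drift that rebalancing corrects, with hypothetical target weights and returns:

```python
# Allocation drift and rebalancing; all numbers are hypothetical.
target = {"stocks": 0.80, "bonds": 0.20}
holdings = {"stocks": 80_000.0, "bonds": 20_000.0}

# Suppose stocks return 30% and bonds 2% over some period:
holdings["stocks"] *= 1.30
holdings["bonds"] *= 1.02

total = sum(holdings.values())
print({k: round(v / total, 3) for k, v in holdings.items()})
# {'stocks': 0.836, 'bonds': 0.164} -- "out of whack" vs. the 80/20 target

# Trades that restore the target allocation:
trades = {k: target[k] * total - holdings[k] for k in holdings}
print({k: round(v) for k, v in trades.items()})
# {'stocks': -4480, 'bonds': 4480} -- sell stocks, buy bonds
```

A target-date fund does this automatically, and also shifts the target itself toward bonds as you age.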
Replies from: Lumifer
↑ comment by Lumifer · 2015-11-25T15:33:01.182Z · LW(p) · GW(p)
what portion of your money is in stocks and bonds
There are more financial assets than just stocks and bonds.
Replies from: banx
↑ comment by banx · 2015-11-25T21:47:47.869Z · LW(p) · GW(p)
Yes, but those are the important ones. Stocks for high expected returns and bonds for stability. You can generalize "bonds" to include other things that return principal plus interest like cash and CDs.
Replies from: Lumifer
↑ comment by Lumifer · 2015-11-25T23:43:17.080Z · LW(p) · GW(p)
What's the criterion of importance?
...other things that return principal plus interest like cash
Um.... I hate to break it to you...
Replies from: banx
↑ comment by banx · 2015-11-26T00:11:16.730Z · LW(p) · GW(p)
What's the criterion of importance?
Important to the goal of increasing one's wealth while managing the risk of losing it. Certainly there are other possible goals (perhaps maximizing the chance of having a certain amount of money at a certain time, for example) but this is the most common, and the one that I assume people on LW discussing basic investing concepts would be interested in.
Um.... I hate to break it to you...
I'm not sure if you're referring to the fact that popular banks are returning virtually zero interest or if you're interpreting "cash" as "physical currency notes". If the former, I have cash in bank accounts that return .01%, 1%, and 4.09% (each serving different purposes). If the latter, I apologize for the confusion. The word is used to mean different things in different contexts. In the context of investing it is standard to include in its meaning checking and savings accounts, and often also CDs.
Replies from: Lumifer
↑ comment by Lumifer · 2015-11-26T00:33:00.003Z · LW(p) · GW(p)
Important to the goal of increasing one's wealth while managing the risk of losing it.
Given this definition, I don't see why only stocks and bonds qualify.
The word is used to mean different things in different contexts.
True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits. Still, there are more asset classes than equity and fixed income.
Replies from: banx
↑ comment by banx · 2015-11-26T00:50:39.807Z · LW(p) · GW(p)
Given this definition, I don't see why only stocks and bonds qualify.
My claim is that equity and fixed income are the important pieces for reaching that goal. With a total stock index fund and a total bond index fund you can achieve these goals almost as well as any other more complicated portfolio. Additional asset classes can add additional diversification or hedge against specific risks. What other asset classes do you have in mind? Real estate? Commodities? Currencies?
True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits.
Fair enough. I was unclear.
Replies from: Lumifer
↑ comment by Lumifer · 2015-11-30T17:19:56.089Z · LW(p) · GW(p)
My claim is that equity and fixed income are the important pieces for reaching that goal.
They are, of course, important. The question is whether they are the only important pieces.
What other asset classes do you have in mind
Real estate is the most noticeable thing here, given how for a lot of people it is actually their biggest financial investment (and often highly leveraged, too). Commodities and such generally require paying at least some attention to what's happening and the usual context of financial discussions on LW is the "into what can I throw my money so that I can forget about it until I need it?"
↑ comment by Richard_Kennaway · 2015-11-24T12:02:28.627Z · LW(p) · GW(p)
I have a secondary question to that. These things seem to all operate online only, without bricks and mortar. How do I assure myself that a website that I have never seen before is trustworthy enough to invest, say, 6-figure sums of money in? Are there official ratings or registers, for probity rather than performance?
Replies from: Vaniver, None
↑ comment by Vaniver · 2015-11-24T18:25:35.898Z · LW(p) · GW(p)
That's easy to answer for Vanguard, which has been around since 1975 and has $3T under management. It's not going anywhere. Both Wealthfront and Betterment were founded in 2008, in Palo Alto and NYC respectively, and have about $2B and $3B under management. I don't think there are any official ratings of probity out there; I'm not sure there's a good source besides trawling through the business press looking for red flags.
↑ comment by [deleted] · 2015-11-23T22:00:41.122Z · LW(p) · GW(p)
The best argument for getting an index fund is the expense ratio, not broad versus narrow coverage. Managed mutual funds have higher expense ratios because of the broker's salary. Active trading instead of buy-and-hold will similarly cost you more because of the transaction costs. To justify their transactions, a broker doesn't just have to beat the market, but to beat it by a large enough margin to cover those extra costs. Because of the number of brokers out there, even if one has consistently beaten the market, it is impossible to determine whether that is due to skill or luck for any given broker. Large domestic index funds will generally have the lowest expense ratios.
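To see how much the expense ratio matters over time, compare hypothetical end values after 30 years of identical 7% gross returns under a low index-fund fee versus a 1% managed-fund fee (all figures illustrative):

```python
# Compounding effect of expense ratios; return and fee figures are hypothetical.
def final_value(principal, gross_return, expense_ratio, years):
    return principal * (1 + gross_return - expense_ratio) ** years

p = 10_000
print(round(final_value(p, 0.07, 0.0005, 30)))  # ~75064: 0.05%-fee index fund
print(round(final_value(p, 0.07, 0.0100, 30)))  # ~57435: 1%-fee managed fund
```

The managed fund has to beat the market by roughly the fee gap every single year just to break even with the index.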
↑ comment by Curiouskid · 2016-04-20T06:38:48.890Z · LW(p) · GW(p)
So, I think the correct answer to the question "I have a 5-figure sum of money to invest" is to just go with Betterment/Wealthfront rather than Vanguard, so that you get diversification between asset classes (whereas a specific index fund will get you diversification within an asset class). If I'd known this when I'd asked the question, I would have picked a better mix of Vanguard index funds, and not hesitated as much with figuring out where to put the money. To be fair, Vaniver basically said this, I just think the links below explain it better, so I could feel certain enough to make a decision rather than let the money burn away through inflation.
http://www.mrmoneymustache.com/2012/02/17/book-review-the-intelligent-asset-allocator/
http://www.mrmoneymustache.com/2014/11/04/why-i-put-my-last-100000-into-betterment/
Replies from: Vaniver
comment by Panorama · 2015-11-26T16:45:13.777Z · LW(p) · GW(p)
Meta-research: Evaluation and Improvement of Research Methods and Practices by John P. A. Ioannidis, Daniele Fanelli, Debbie Drake Dunne, Steven N. Goodman.
As the scientific enterprise has grown in size and diversity, we need empirical evidence on the research process to test and apply interventions that make it more efficient and its results more reliable. Meta-research is an evolving scientific discipline that aims to evaluate and improve research practices. It includes thematic areas of methods, reporting, reproducibility, evaluation, and incentives (how to do, report, verify, correct, and reward science). Much work is already done in this growing field, but efforts to-date are fragmented. We provide a map of ongoing efforts and discuss plans for connecting the multiple meta-research efforts across science worldwide.
Replies from: None
comment by Nate646 · 2015-11-28T11:57:08.970Z · LW(p) · GW(p)
The prediction market I was using, iPredict is closing. Apparently it represents a money laundering risk and the Government refused to grant an exemption. Does anyone know any good alternatives?
Replies from: Douglas_Knight, Elo
↑ comment by Douglas_Knight · 2015-11-30T17:10:24.662Z · LW(p) · GW(p)
I asked about this recently. I think that the sports bookie Betfair is the best existing option, in terms of liquidity and diversity of topics. The only prediction markets that I know to be open to Americans are the Iowa Electronic Markets and PredictIt, both with smaller limits than iPredict.
comment by Lumifer · 2015-11-25T19:16:41.969Z · LW(p) · GW(p)
Paper in Nature about differences in gene expression correlated with chronological age.
tl;dr -- "We identified 1,497 genes that are differentially expressed with chronological age."
Quickdraw conclusion: this will require A LOT of silver bullets.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-11-25T20:11:24.937Z · LW(p) · GW(p)
I don't think we learn a lot from the number alone. It might be that multiple genes are regulated by the same mechanism, and turning that one mechanism down would take us a long way.
Replies from: None, zslastman
↑ comment by [deleted] · 2015-11-28T07:44:34.165Z · LW(p) · GW(p)
Indeed, not only is this looking only at the very broad end results of what is seen to co-vary with age in a regular way, completely agnostic to mechanism; it is also looking only at gene expression in peripheral blood, one very highly specialized (to the point of being a liquid) tissue type.
↑ comment by zslastman · 2015-11-26T12:33:04.612Z · LW(p) · GW(p)
Yeah, it doesn't say much. For one thing, I'd say just about all genes are differentially expressed with age, if you look hard enough. Regardless, that doesn't tell us how many of them really matter with respect to the things we care about, how many causal factors are at work, or how difficult it will be to fix. It doesn't rule out a single silver-bullet aging cure (though other things probably do).
comment by RicardoFonseca · 2015-11-25T18:07:27.234Z · LW(p) · GW(p)
Are there any studies that highlight which biases become stronger when someone "falls in love"? (Assume the love is reciprocated.) I am mainly interested in biases that affect short- and medium-term decisions, since the state of mind in question usually doesn't last long.
One example is the apparent overblown usage of the affect heuristic when judging the goodness of the new partner's perceived characteristics and actions (the halo effect on steroids).
Replies from: RicardoFonseca, LessRightToo
↑ comment by RicardoFonseca · 2015-11-27T12:46:06.992Z · LW(p) · GW(p)
Here is a study finding that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control": free copy / springer link
Also, while I was searching for studies, I found a news article saying this about a study by Robin Dunbar:
"The research, led by Robin Dunbar, head of the Institute of Cognitive and Evolutionary Anthropology at Oxford University, showed that men and women were equally likely to lose their closest friends when they started a new relationship."
More specifically, the study found the average number of lost friends per new relationship was two.
Except there is no publicly published paper anywhere online, despite what the news article says; there are only quotes by Dunbar at the 2010 British Science Festival. That seems a bit suspicious to me, maybe suggesting that the study was retracted later.
Replies from: None
↑ comment by [deleted] · 2015-11-27T15:31:29.624Z · LW(p) · GW(p)
It's not necessarily that the study was retracted. The news article from the Guardian you linked mentioned that the study was submitted to the journal Personal Relationships; this means it had not yet been accepted for publication. And indeed it looks like that study never got published there despite all the media coverage.
Actually it has finally come out, 5 years later! Burton-Chellew, M.N and Dunbar, Robin I. M. (2015). Romance and reproduction are socially costly. Evolutionary Behavioral Sciences, 9(4), 229-241. http://dx.doi.org/10.1037/ebs0000046
From the abstract:
We used an Internet sample of 540 respondents to test and show that the average size of support networks is reduced for individuals in a romantic relationship. We also found approximately 9% of our sample reported having an “extra” romantic partner they could call on for help, however these respondents did not have an even smaller network than those in just 1 relationship. The support network is also further reduced for those who have offspring, however these effects are contingent on age, primarily affecting those under the age of 36 years. Taking into account the acquisition of a new member to the network when entering a relationship, the cost of romance is the loss of nearly 2 members. On average, these social costs are spread equally among related and nonrelated members of the network.
Replies from: RicardoFonseca
↑ comment by RicardoFonseca · 2015-11-28T19:07:49.868Z · LW(p) · GW(p)
Nice! Good to know the information is (more) reliable after all :)
↑ comment by LessRightToo · 2015-11-25T20:41:30.243Z · LW(p) · GW(p)
A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love." Based on my own experience and experiences reported by others, I wouldn't reject the notion that such a sorting is possible in principle, although it may be beyond our current technological capability. The pain associated with being suddenly separated from someone that you have 'fallen in love with' can rival physical pain in intensity. What type of instrumentation would we need to detect when a person is primed for such a response? I have no idea.
Replies from: ChristianKl, RicardoFonseca↑ comment by ChristianKl · 2015-11-25T23:48:05.042Z · LW(p) · GW(p)
A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love."
No, not automatically. An objective measurement can be either worse or better than a self-reported measurement. There's no reason to believe that one is inherently better.
Replies from: LessRightToo↑ comment by LessRightToo · 2015-11-28T14:01:33.457Z · LW(p) · GW(p)
New material added to this thread uses the phrase "being in a relationship" rather than "being in love". I found the latter phrase problematic because it involves a poorly defined mental state that has changed meaning over time. The former phrase is objectively verifiable by external observers.
I have read a book or two on the Design of Experiments over the years purely for intellectual curiosity; I've never actually defined and run a scientific experiment. So I don't have anything worthwhile to say on the general topic of the relative value of objective vs. subjective measurements in scientific studies.
↑ comment by RicardoFonseca · 2015-11-27T12:35:02.830Z · LW(p) · GW(p)
Why do you think "a person being primed for feeling pain when being separated from their new partner" matters here?
Are you thinking about studies that, at the very least, suggest the possibility of such a separation being an option that the subject will experience based on the outcome of some action/decision being studied? :( that's horrible ):
Replies from: LessRightToo↑ comment by LessRightToo · 2015-11-28T14:10:04.347Z · LW(p) · GW(p)
One objectively verifiable indication that an animal has pair-bonded would be visible distress when forcibly separated from its mate. I'm not suggesting that this is the best way to determine whether an animal has pair-bonded. For example, an elevated level of some hormone in the bloodstream (a "being in love" hormone) that reliably indicates pair-bonding would be a superior objectively verifiable indication (in my opinion), because it doesn't involve causing distress in an animal.
I'm not a biologist - just an occasional recreational reader of popular works in biology. So, my opinion isn't worth much.
Replies from: RicardoFonseca↑ comment by RicardoFonseca · 2015-11-28T19:06:02.330Z · LW(p) · GW(p)
Right now, it seems that "passionate love" is measured on a discrete scale based on answers to a questionnaire. The "Passionate Love Scale" (PLS) is mentioned in this blog post and was introduced by this article in 1986.
In my other reply to my original comment I linked a study ("Reduced cognitive control in passionate lovers", PDF) that finds that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control", in which they use the PLS.
comment by Lumifer · 2015-11-23T21:35:51.112Z · LW(p) · GW(p)
Post-human mathematics at arXiv.
Replies from: passive_fist↑ comment by passive_fist · 2015-11-23T22:14:00.671Z · LW(p) · GW(p)
Present day mathematics is a human construct, where computers are used more and more but do not play a creative role.
It always seemed very strange to me how, despite the obvious similarities and overlaps between mathematics and computer science, the use of computers for mathematics has largely been a fringe movement, and mathematicians mostly still do mathematics the way it was done in the 19th century. This even though precision and accuracy are highly valued in mathematics, and decades of experience in computer science have shown us just how prone humans are to making mistakes in programs, proofs, etc., and just how stubbornly those mistakes can evade the eyes of proof-checkers.
Replies from: Sarunas, Richard_Kennaway, IlyaShpitser, MrMind, bogus↑ comment by Sarunas · 2015-11-24T11:57:48.999Z · LW(p) · GW(p)
Correctness is essential, but another highly desirable property of a mathematical proof is its insightfulness, that is, whether it contains interesting and novel ideas that can later be reused in others' work (these ideas are often regarded as more important than the theorem itself). Those others are humans, and they desire, let's call them, "human-style" insights. Perhaps if we had AIs that "desired" "computer-style" insights, some people (and AIs) would write their papers to provide them and would investigate the problems most likely to lead to them. Proofs that involve computers are often criticized for being uninsightful.
Proofs that involve steps requiring computers (as opposed to formal proofs that employ proof assistants) are sometimes also criticized for not being human-verifiable: while humans make mistakes and computer software contains bugs, mathematicians can sometimes use intuition and sanity checks to catch the former, but not necessarily the latter.
Mathematical intuition is developed by working in an area for a long time and being exposed to the various insights, heuristics, and ideas mentioned in the first paragraph. Thus not only are computer-based proofs harder to verify, but if an area relies on many non-human-verifiable proofs, it may become significantly harder to develop an intuition for that area, which in turn makes it harder for humans to create new mathematical ideas. It is probably easier to understand a landscape of ideas that were created to be human-understandable.
That is neither to say that computers have little place in mathematics (they do have a place: formal proofs, generating conjectures, or gathering evidence about which approach to use on a problem), nor is it to say that computers will never make human mathematicians obsolete (perhaps they will become so good that humans will no longer be able to compete).
However, it should be noted that some people have different opinions.
Replies from: passive_fist↑ comment by passive_fist · 2015-11-24T20:17:34.443Z · LW(p) · GW(p)
Automated theorem proving is a different problem entirely and it's obviously not ready yet to take the place of human mathematicians. I'm not in disagreement with you here.
However, there's no conflict between being 'insightful' and 'intuitive' and being computer-verifiable. In the ideal case you would have a language for expressing mathematics that mapped well to human intuition. I can't think of any reason this couldn't be done. But that's not even necessary -- you could simply write human-understandable versions of your proofs along with machine-verifiable versions, both proving the same statements.
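As a toy illustration of the two-versions idea, here is a minimal sketch in Lean (just one proof assistant among several; nothing in the thread prescribes it). The comments are the human-readable proof, the code is the machine-checkable one:

-- Human version: "2 + 2 reduces to 4 by computation."
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Human version: "addition of natural numbers is commutative";
-- the machine-checkable proof appeals to the library lemma
-- Nat.add_comm, which is itself proved by induction.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b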
↑ comment by Richard_Kennaway · 2015-11-24T11:53:09.785Z · LW(p) · GW(p)
Substantial work has been done on this. The two major systems I know of are Automath (defunct but historically important) and Mizar (still alive). Looking at those articles just now also turns up Metamath. Also of historical interest is QED, which never really got started, but is apparently still inspiring enough that a 20-year anniversary workshop was held last year.
Creating a medium for formally verified proofs is a frequently occurring idea, but no-one has yet brought such a project to completion. These systems are still used only to demonstrate that it can be done, but they are not used to write up new theorems.
Replies from: Vaniver↑ comment by Vaniver · 2015-11-24T14:44:36.826Z · LW(p) · GW(p)
I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem, but that they're sort of in their own universe because they rely on checking thousands of cases, and so not only could a person not really be sure that they had verified the proof (because the odds of making a mistake would be so high), but they also couldn't get much in the way of intuition or shared technique from it.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2015-11-24T15:12:41.365Z · LW(p) · GW(p)
I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem
Yes, although as far as I know things like that, and the FCT in particular, have only been proved by custom software written for the problem.
There's also a distinction between using a computer to find a proof, and using it to formalise a proof found by other means.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2015-11-25T18:15:56.502Z · LW(p) · GW(p)
Indeed, the computer-generated proofs of 4CT were not only not formal proofs, they were not correct. Once a decade, someone would point out an error in the previous version and code up his own. But now there is a version for an off-the-shelf verifier.
↑ comment by IlyaShpitser · 2015-11-25T21:55:37.587Z · LW(p) · GW(p)
People are working on changing that (at CMU for example).
↑ comment by MrMind · 2015-11-24T08:13:20.208Z · LW(p) · GW(p)
I think the difficulty is in part due to the fact that mathematicians use classical metalogic (e.g. proof by contradiction), which is not easily implemented in a computer system. The most famous proof assistant, Coq, is based on a constructive type theory. Even the univalent foundations program, which is ambitious in its goal of formalizing all mathematics, is based on a variant of intuitionistic metalogic.
↑ comment by bogus · 2015-11-24T00:07:49.667Z · LW(p) · GW(p)
Converting most of existing math into formal developments suitable for computer use would be a huge undertaking, possibly requiring several hundred man-years of work. Most people aren't going to work on such a goal with any seriousness until it's clear to them that the results will in fact be widely used. This in turn requires further work to come up with lightweight, broadly applicable logical foundations/frameworks, as well as more work on the usability of proof environments. Progress on these things has been quite slow, although we have seen some encouraging news lately, such as the recent 'formal proof' of the Kepler conjecture. And even that was actually a bunch of formal proofs developed under quite different systems, which can be argued to solve the conjecture only when they're somehow combined. I think this example makes it abundantly clear that current approaches to this field - even at their most successful - do have non-trivial drawbacks.
Replies from: passive_fist↑ comment by passive_fist · 2015-11-24T01:42:09.459Z · LW(p) · GW(p)
You're speaking of unifying all of math under the same system. I don't think that's strictly necessary, or even desirable. The computer science equivalent of that would be a development environment where every algorithm in the literature is implemented as a function.
I'm wondering more about why problem-specific computer-verifiable proofs aren't used.
Replies from: bogus↑ comment by bogus · 2015-11-24T03:11:06.326Z · LW(p) · GW(p)
The problem is, no matter how 'problem-specific' your proofs are, they aren't going to be 'verifiable' unless you specify them all the way down to some reasonable foundation. That's the really big undertaking, so you'll want to unify things as much as possible, if only to share whatever you can and avoid any duplication of effort.
Replies from: passive_fist↑ comment by passive_fist · 2015-11-24T07:27:15.072Z · LW(p) · GW(p)
The problem is, no matter how 'problem-specific' your proofs are, they aren't going to be 'verifiable' unless you specify them all the way down to some reasonable foundation.
If that's true, then it logically follows that most existing mathematics literature is unverifiable - a statement that I think mathematicians would take issue with. After all, that's not how most mathematics literature is presented.
Replies from: Viliam↑ comment by Viliam · 2015-11-24T09:27:01.282Z · LW(p) · GW(p)
I agree with that.
In the future, it would be best to derive everything from the axioms (using libraries where frequently used theorems are already proved). The problem is, the simplest theorems that we can quickly derive from the axioms are not important enough to pay for the development and use of the software.
So a better approach would be for the system to accept a few theorems as (temporary) axioms. Essentially, if it would be okay to use the Pythagorean theorem in a scientific paper without proving it, then in the first version of the program it would be okay to use the Pythagorean theorem as an axiom -- displaying a warning "I have used Pythagorean theorem without having a proof of it".
This first version would already be helpful for verifying current papers. And there is the option to provide the proof of the Pythagorean theorem from first principles later. If you add it later, you can re-run the papers and get the results with fewer warnings. If the Pythagorean theorem happens to be wrong, as long as you have provided the warnings for all papers, you know which ones to retract.
Actually, I believe such systems would be super helpful e.g. in set theory, when you want to verify whether the proof you used relies on the axiom of choice. Because even if you didn't use it directly, maybe one of the theorems you used was based on it. Generally, using different sets of axioms could become easier.
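A toy sketch of what such a warning-emitting checker might look like, in Python; all theorem names are made up purely for illustration:

# Toy proof checker that allows temporary axioms and reports which
# unproven statements a result transitively depends on.
deps = {
    "pythagorean": None,                   # None = accepted without proof
    "law_of_cosines": ["pythagorean"],     # proved, using the line above
    "my_new_theorem": ["law_of_cosines"],  # proved, using the line above
}

def unproven_assumptions(theorem, deps):
    """Return the set of temporary axioms a theorem transitively relies on."""
    used = deps.get(theorem)
    if used is None:
        return {theorem}        # this statement itself is an unproven axiom
    result = set()
    for d in used:
        result |= unproven_assumptions(d, deps)
    return result

for axiom in sorted(unproven_assumptions("my_new_theorem", deps)):
    print("WARNING: '%s' was used without a proof of it" % axiom)
# -> WARNING: 'pythagorean' was used without a proof of it

The same bookkeeping would answer the axiom-of-choice question: mark AC as a temporary axiom and check whether it shows up in a given proof's warning set.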
Replies from: passive_fist↑ comment by passive_fist · 2015-11-24T20:04:37.470Z · LW(p) · GW(p)
Yes, that's an insightful way of looking at how computer verification could assist real mathematics research.
Going back to the CS analogy, programmers started out by writing everything in machine language; then gradually people began to write commonly-used functions as libraries that you could just install and forget about (they didn't even have to be in the same language), and they wrote higher-level languages that could automatically compile to machine code. Higher and higher levels of abstraction were recognized and implemented over the years (for implementing things like parsers, data structures, databases, etc.) until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. There was very little universal coordination in all of this; it just grew out of the needs of various people. No one in 1960 sat down and said, "Ok, let's write Python."
Replies from: Lumifer↑ comment by Lumifer · 2015-11-24T20:10:11.411Z · LW(p) · GW(p)
until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. ... No one in 1960 sat down and said, "Ok, let's write Python."
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
As to "writing out your thoughts", people did design such a language in 1959...
P.S. Oh, and do your thoughts flow like this..?
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
Replies from: passive_fist, bogus↑ comment by passive_fist · 2015-11-24T20:24:41.631Z · LW(p) · GW(p)
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
That the implementation of Python is fairly slow is a different matter; high-level languages need not be any slower than, say, C or Fortran, as modern JIT-compiled languages demonstrate. It just takes a lot of work to make them fast.
As to "writing out your thoughts", people did design such a language in 1959...
Lisp was also designed during that same period and probably proves your point even better. But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
Replies from: bogus, Lumifer↑ comment by bogus · 2015-11-24T20:32:09.906Z · LW(p) · GW(p)
But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
Computerized math is the same today. No one wants to write everything they need from scratch, unless they're working in a genuinely self-contained (i.e. 'synthetic') subfield where the prereqs are inherently manageable. See programming languages (with their POPLmark challenge) and homotopy type theory as examples where computerization is indeed making quick progress.
↑ comment by Lumifer · 2015-11-24T21:04:28.618Z · LW(p) · GW(p)
as bare-bones as it was high-level
Umm... LISP is elegant and expressive -- you can (and people routinely do) construct complicated environments including DSLs on top of it. But that doesn't make it high-level -- it only makes it a good base for high-level things.
But if you use "high-level" to mean "abstracted away from the hardware" then yes, it was, but that doesn't have much to do with "writing out your thoughts".
↑ comment by bogus · 2015-11-24T20:22:35.128Z · LW(p) · GW(p)
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
LISP was definitely a thing in the 1960s, and Python is not that different. For a long time, the former was pretty much 'the one' very-high-level, application-oriented language. Much like Python or Ruby today.
Replies from: Lumifer↑ comment by Lumifer · 2015-11-24T21:00:36.093Z · LW(p) · GW(p)
... and python is not that different.
8-0 Allow me to disagree.
pretty much 'the one' ... application-oriented language
Allow me to disagree again. LISP was lambda calculus made flesh and was very popular in academia. Outside of the ivory towers, the suits used COBOL, and the numbers people used Fortran (followed by a whole lot of Algol-family languages) to write their applications.
comment by JoshuaZ · 2015-11-27T17:59:59.815Z · LW(p) · GW(p)
Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization to have ever existed in the observable universe; a similar result holds for the Milky Way with around 10^-10 as the relevant probability. An article about the paper is here and the paper itself is here.
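To see where thresholds like 10^-24 come from, here is a back-of-the-envelope sketch in Python; the planet counts are my own order-of-magnitude guesses, not the paper's figures:

from math import exp

# Rough assumptions: ~1e24 habitable planets in the observable universe,
# ~1e10 in the Milky Way (order-of-magnitude guesses only).
for n_planets, p in [(1e24, 1e-24), (1e10, 1e-10)]:
    # Chance that at least one civilization arises somewhere, assuming
    # an independent per-planet probability p:
    p_any = 1 - exp(-n_planets * p)   # good approximation to 1 - (1-p)^N
    print("N = %.0e, p = %.0e -> P(at least one) = %.2f" % (n_planets, p, p_any))

Both cases sit exactly at N*p = 1, giving P(at least one) of about 0.63; any p above the threshold pushes the probability rapidly toward 1.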
comment by Fluttershy · 2015-11-24T05:41:34.113Z · LW(p) · GW(p)
Do transhumanist types tend to value years of life lived past however long they'd expect to live anyway linearly (i.e., if they'd pay a maximum of exactly n to live an extra year, would they also be willing to pay a maximum of exactly 100n to live 100 extra years)?
If so, the cost-effectiveness of cryonics (in terms of added life-years) could be compared with the cost-effectiveness of other implementable health interventions that would-be cryonicists are on the fence about. What's the marginal disutility a given transhumanist would get from forcing themselves to eat a bit more healthily, and how much would that extend their life expectancy? What about exercise? Or going to the doctor over that odd itch in their throat that they'd like to ignore just one more day?
The point I'm coming to is that if I want my friends to live longer lives (or have more QALYs, or whatever) in expectation, it's probably better for me to pester them about certain lifestyle choices and preventative interventions than it is to pester them to sign up for cryonics. (By the same token, I seem to recall that Hanson or Yudkowsky once pointed out that cryonics would be expected to add more years to one's life than an open heart surgery (?) relative to the cost, or something like that.)
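Under the linear-valuation assumption, the comparison reduces to cost per expected life-year. A minimal sketch in Python, where every number is invented purely for illustration (none come from the studies above or from real cost-effectiveness research):

# Cost per expected life-year for competing interventions.
# All figures below are hypothetical placeholders.
interventions = {
    # name: (total cost in dollars, expected added life-years)
    "cryonics":     (80000.0, 0.05 * 1000),  # e.g. 5% success chance * 1000 years
    "exercise":     (5000.0, 3.0),           # lifetime gym costs, say
    "quit smoking": (0.0, 7.0),              # for a smoker
}

for name, (cost, years) in sorted(interventions.items()):
    print("%-12s $%8.0f per expected life-year" % (name, cost / years))

The point being only that, under linearity, everything collapses onto one axis, and the pestering question becomes an ordinary cost-effectiveness ranking.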
Replies from: HungryHobo, Soothsilver↑ comment by HungryHobo · 2015-11-24T16:29:40.334Z · LW(p) · GW(p)
The levels of uncertainty make this really hard to work with.
On the one hand, perhaps it works and the person gets to live for billions of deeply fulfilling years, until the heat death of the universe, experiencing 10x subjective time, giving trillions of QALYs.
Or perhaps they get awoken into a world where life extension is possible but legally limited to a couple hundred years.
Or perhaps they get awoken into a world where they're considered on the same moral level as lab rats and millions of copies of their mind get to suffer in countless interesting ways.
So you end up with a very, very wide range of values, from negative to trillions of QALYs, with no way to assign reasonable probabilities to anything in the range, which makes cost-effectiveness calculations a little less convincing.
↑ comment by Soothsilver · 2015-11-24T21:14:37.240Z · LW(p) · GW(p)
I also ask myself these questions and I'm unable to answer them. In the end, I exercise and modify my diet as much as my will allows without causing me too much stress.
As for valuing years of life: if I considered the very best outcome of cryonics (as HungryHobo described) to be certain, then, well, even for very small per-year values, cryonics would give me far more utility than exercise. I don't value the later years of my life that low.
Yudkowsky believes that cryonics has a greater than 50% chance of working, and that we will be able to have fun for any amount of time, so for him, the expected value of cryonics is ginormous.
I get quite a bit of disutility from forcing myself to eat a bit more healthily. My food diversity is very poor; if I try to ingest one of the many foods I don't like, I will throw up. Attempting to eat those foods anyway causes me great discomfort. So that's not a great way for me to increase overall utility.
On the last paragraph, it appears to me that the two basics - avoiding obesity and not smoking - are the best things you can pester them about. The other lifestyle choices have an expected benefit of only a few years total, if you don't expect any new medical technology to be developed.
Replies from: MarsColony_in10years↑ comment by MarsColony_in10years · 2015-11-25T18:41:54.463Z · LW(p) · GW(p)
avoiding obesity
Not to be pedantic, but I thought this might be of interest: As I understand it, amount of exercise is a better predictor of lifespan than weight. That is, I would expect someone overweight but who exercises regularly to outlive someone skinny who never exercises.
For example, this life expectancy calculator outputs 70 years for a 5'6", 25-year-old male who weighs 300 lbs but exercises vigorously daily. Changing the weight to 150 lbs and putting in no exercise raised the life expectancy by only 1 year. (A bit less than I was expecting, actually. I was about to significantly update, but then it occurred to me that 300 lbs isn't the definition of obesity. I knew this previously, but apparently hadn't fully internalized it.) EDIT: This calculator may not work well for weights over ~250 lbs. See comment below.
So, my top two recommendations to friends would be to quit smoking and exercise regularly. I'd recommend Less Wrongers either do high-intensity workouts once a week, to minimize the amount of time spent on non-productive activities, or pick a more frequent but lower-intensity activity they can read or watch Khan Academy or listen to the Sequences audiobook while doing. I'm not an expert or anything. That's just the impression I've gotten from my own research.
Replies from: Soothsilver, Lumifer, Viliam↑ comment by Soothsilver · 2015-11-25T20:45:48.142Z · LW(p) · GW(p)
I'm not sure I would trust that calculator. I'm not used to US units, so I put in 84 kg (my weight) and it said "with that BMI you can't be alive", so I put in 840, thinking maybe it wanted the first decimal as well. Now I realize it wanted pounds. And for this, 840 lbs, it also output 70 years.
I'm not sure where the calculator gets its data from.
Replies from: MarsColony_in10years↑ comment by MarsColony_in10years · 2015-11-26T08:31:04.026Z · LW(p) · GW(p)
Hmmm, that's worrying. I played with some numbers for a 5'6" male, and got this:
99 lbs yields "Your BMI is way too low to be living"
100lbs yields 74 years
150lbs yields 76 years
200lbs yields 73 years
250lbs yields 69 years
300lbs yields 69 years
500lbs yields 69 years
999lbs yields 69 years
It looks to me like they are pulling data from a table, and the table maxes out somewhere under 250 lbs?
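One way to probe where the table ends is to convert those weights to BMI with the standard formula (703 × weight in lbs / height in inches squared); a quick Python sketch:

# BMI for a 5'6" (66 in) male at the weights tried above.
for lbs in [99, 100, 150, 200, 250, 300, 500, 999]:
    print("%4d lbs -> BMI %5.1f" % (lbs, 703.0 * lbs / 66**2))
# 99 lbs -> 16.0, 250 lbs -> 40.3, 999 lbs -> 161.2

The outputs stop changing right around BMI 40, consistent with a lookup table that tops out near there.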
↑ comment by Lumifer · 2015-11-25T19:08:43.073Z · LW(p) · GW(p)
amount of exercise is a better predictor of lifespan than weight
First, there is no reason for you to care about the ranking ("better"); you should only care whether something is a good predictor of lifespan. Predictors are not exclusive.
Second, weight's effect on lifespan is nonlinear. As far as I remember, it's basically a U-shaped curve.
Replies from: gjm
comment by username2 · 2015-11-23T19:17:06.775Z · LW(p) · GW(p)
Why are there many LWers from, say, Europe, but not China?
Replies from: Vaniver, iarwain1↑ comment by Vaniver · 2015-11-23T20:15:39.821Z · LW(p) · GW(p)
I'm going to guess that English language proficiency is far higher in Europe than it is in China. But Asian Americans seem underrepresented on LW relative to the fields that LW draws heavily from, so that seems unlikely to be a complete explanation.
Replies from: username2↑ comment by iarwain1 · 2015-11-23T20:06:56.096Z · LW(p) · GW(p)
I'm going to guess it's based on some of the East-West thinking differences outlined by Richard Nisbett in The Geography of Thought (I very highly recommend that book, BTW). I don't remember everything in the book, but I remember he had some stuff in there about why easterners are often less interested in, and have a harder time with, the sort of logical/scientific thinking that LW advocates.
Replies from: MrMind, g_pepper↑ comment by MrMind · 2015-11-24T08:17:32.790Z · LW(p) · GW(p)
Which is weird because, if you take the ethnicity-IQ correlation seriously (which I don't), Asians show a higher average IQ than Westerners.
Replies from: iarwain1↑ comment by iarwain1 · 2015-11-24T14:59:18.726Z · LW(p) · GW(p)
Nothing to do with IQ, but with modes of thinking. According to Nisbett, Eastern thinking is more holistic and concrete vs. the Western formal and abstract approach. He says that Easterners often make fewer thinking mistakes when dealing with other people, where a more holistic approach is needed (for example, Easterners are much less prone to the Fundamental Attribution Error). But at the same time they tend to make more thinking mistakes when it comes to thinking about scientific questions, as that often requires formal, abstract thinking. Nisbett also speculates that this is why science developed only in the west even though China was way ahead of the west in (concrete-thinking-based) technological progress.
In general there's very little if any correlation between IQ and rationality. A lot of Keith Stanovich's work is on this.
comment by Viliam · 2015-11-26T19:21:06.706Z · LW(p) · GW(p)
Facebook question:
I have different types of 'friends' on Facebook, such as "Family", "Rationalists", "English-speaking", etc. Different materials I post are interesting for different groups. There is an option to select visibility of my posts, but that seems not exactly what I want.
What I'd like is to make my posts available to everyone, including people I don't know (e.g. if anyone clicks on my name, they will see everything I ever posted), but without all my posts automatically appearing on the home pages of all the 'friends' who follow me. In other words, I don't want to spam my 'friends'' pages with stuff they are unlikely to read, yet I want anyone to be able to read each of my posts if they wish.
Is there an option "don't push this automatically to all people, but let them see it if they click on a permalink"?
Replies from: ChristianKl, polymathwannabe↑ comment by ChristianKl · 2015-11-26T22:57:42.086Z · LW(p) · GW(p)
I don't understand why Facebook messes up the language issue so badly. It seems like the Americans at Facebook headquarters just don't care about bilinguals.
Replies from: solipsist↑ comment by solipsist · 2015-11-29T06:57:00.228Z · LW(p) · GW(p)
Yeah, your explanation sounds absolutely correct. But before you think "silly monoglot Americans", remember that London is closer to Istanbul than New York is to Mexico. Countries where people don't mostly speak English are thousands of kilometers away from most Americans.
Replies from: polymathwannabe, username2↑ comment by polymathwannabe · 2015-11-29T15:59:23.234Z · LW(p) · GW(p)
Those are suspiciously convenient examples. A more relevant comparison would be: Los Angeles is closer to Tijuana than London is to Paris.
Replies from: tut, solipsist↑ comment by tut · 2015-11-30T15:24:19.599Z · LW(p) · GW(p)
Here is a map with London and Istanbul on it. In between them are many countries with at least six majority languages (and that's a low count, where some people would lynch me for saying that their language is the same as the one their neighbor speaks). Los Angeles and Tijuana, on the other hand, are two cities right by a border, and the only languages commonly spoken between them are English, the language of the USA, and Spanish, the language of Mexico.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-11-30T15:42:17.423Z · LW(p) · GW(p)
I understood solipsist's argument to mean that Americans can be excused for being ignorant of other languages because most of them live too far from other linguistic communities, and pointed at the mutual closeness of European countries for contrast, implying that it's likelier to find a Turkish-speaking Brit than a Spanish-speaking American.
What I tried to say was that there was no need to artificially inflate the comparison distance by choosing Istanbul. Londoners can find speakers of a completely different language by merely driving to Cardiff. But the U.S. is not a monolingual bloc of homogeneity either: ironically, solipsist chose New York for his example, a multilingual smorgasbord if ever there was one.
↑ comment by solipsist · 2015-11-29T18:03:53.585Z · LW(p) · GW(p)
Well, I don't know. Some of the US is near Mexico, but most of it isn't. In Europe, the farthest you can get from a border with a country that speaks a foreign language is perhaps southern Italy. The four US states which border Mexico are each bigger than Italy. Germany is a biggish country in Europe area-wise, but it's less than 3.7% the size of the US. The Mercator projection creates an optical illusion -- the US is huge.
↑ comment by username2 · 2015-11-29T21:23:05.738Z · LW(p) · GW(p)
Just because they have an excuse that geography made them silly monoglots doesn't mean they aren't silly monoglots :p
Replies from: gjm↑ comment by gjm · 2015-11-29T23:14:35.191Z · LW(p) · GW(p)
I think solipsist's point isn't that they have an excuse but that they have a reason -- being monoglot hurts them less than it would if they were e.g. on the European continent, so monoglossy (or whatever the right word is) isn't necessarily silly for them.
[EDITED to add:] Disappointingly, OED suggests that the right word is just "monoglottism".
↑ comment by polymathwannabe · 2015-11-26T20:35:33.430Z · LW(p) · GW(p)
The way Facebook works, you decide what's available, but each of your friends has to individually decide how much they want to see of you.
Replies from: Viliam↑ comment by Viliam · 2015-11-27T06:45:42.128Z · LW(p) · GW(p)
The problem is exactly the "how much they want to see of you" part, namely that there is only the one undifferentiated "you" instead of "your rationality posts", "your family photos", "your posts with kitten videos". I don't want to bother my family with rationality posts, and don't want to bother my LW friends with Slovak posts, but as long as I don't want to limit it all to 'friends of my friends' I don't have a choice.
Technically, the solution would be to create multiple accounts for multiple aspects of my life, and have different sets of 'friends' for each. But this is against Facebook TOS, and it is also technically inconvenient.
Actually, maybe I could use the "Pages" feature for this... That allows people to post under multiple identities, so each of them can have different followers. But officially, "Pages are for businesses, brands and organizations". Not sure if "Viliam's comments on politics in Slovakia" qualifies as any of that.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-11-27T15:06:43.283Z · LW(p) · GW(p)
What you seem to be already doing, which is to manually select what group will see each post, seems to be good enough for your purposes. Anyone who actively wants to see more of you can simply go to your profile and see everything.
comment by Silver_Swift · 2015-11-25T16:12:47.313Z · LW(p) · GW(p)
I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.
Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?
For people that haven't read it, I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some cosmere novels and would like a story that touches on some slightly more complex (and more LWish) themes than usual (and don't mind it being a bit darker than usual).
comment by AstraSequi · 2015-11-25T02:24:34.553Z · LW(p) · GW(p)
I just found out about the “hot hand fallacy fallacy” (Dan Kahan, Andrew Gelman, Miller & Sanjuro paper) as a type of bias to which more numerate people are likely more susceptible, and which they find highly counterintuitive. It's described as a specific failure mode of the intuition used to get rid of the gambler's fallacy.
I understand the correct statement like this. Suppose we’re flipping a fair coin.
* If you're predicting future flips of the coin, the next flip is unaffected by the results of your previous flips, because the flips are independent. So far, so good.
* However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.
The discussion is mostly about whether a streak of a given length will end or continue; the case above is a streak of length 1 with probability 0.5. Another example is:
...we can offer the following lottery at a $5 ticket price: a fair coin will be flipped 4 times. if the relative frequency of heads on flips that immediately follow a heads is greater than 0.5 then the ticket pays $10; if the relative frequency is less than 0.5 then the ticket pays $0; if the relative frequency is exactly equal to 0.5, or if no flip is immediately preceded by a heads, then a new sequence of 4 flips is generated. While, intuitively, it seems like the expected payout of this ticket is $0, it is actually $-0.71 (see Table 1). Curiously, this betting game may be more attractive to someone who believes in the independence of coin flips, rather than someone who holds the Gambler’s fallacy.
Replies from: gjm, Viliam
↑ comment by gjm · 2015-11-25T14:03:18.691Z · LW(p) · GW(p)
I think this is not quite right, and it's not-quite-right in an important way. It really isn't true in any sense that "it's more likely that you'll alternate between heads and tails". This is a Simpson's-paradox-y thing where "the average of the averages doesn't equal the average".
Suppose you flip a coin four times, and you do this 16 times, and happen to get each possible outcome once: TTTT TTTH TTHT TTHH THTT THTH THHT THHH HTTT HTTH HTHT HTHH HHTT HHTH HHHT HHHH.
- Question 1: in this whole sequence of events, what fraction of the time was the flip after a head another head? Answer: there were 24 flips after heads, and of these 12 were heads. So: exactly half the time, as it should be. (Clarification: we don't count the first flip of a group of 4 as "after a head" even if the previous group ended with a head.)
- Question 2: if you answer that same question for each group of four, and ignore cases where the answer is indeterminate because it involves dividing by zero, what's the average of the results? Answer: it goes 0/0 0/0 0/1 1/1 0/1 0/1 1/2 2/2 0/1 0/1 0/2 1/2 1/2 1/2 2/3 3/3. We have to ignore the first two. The average of the rest is 17/42, or just over 0.4.
What's going on here isn't any kind of tendency for heads and tails to alternate. It's that an individual head or tail "counts for more" when the denominator is smaller, i.e., when there are fewer heads in the sample.
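For anyone who wants to check, the whole calculation fits in a few lines of Python (this just enumerates the 16 sequences above):

from itertools import product

seqs = list(product("HT", repeat=4))   # all 16 length-4 sequences

# Question 1: pool every flip-after-a-head across all sequences.
pooled = [s[i+1] for s in seqs for i in range(3) if s[i] == "H"]
print(pooled.count("H") / len(pooled))          # exactly 0.5

# Question 2: per-sequence frequency, then average (0/0 cases skipped).
fracs = []
for s in seqs:
    follows = [s[i+1] for i in range(3) if s[i] == "H"]
    if follows:
        fracs.append(follows.count("H") / len(follows))
print(sum(fracs) / len(fracs))                  # 17/42 = 0.404...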
Replies from: AstraSequi↑ comment by AstraSequi · 2015-11-26T01:53:56.817Z · LW(p) · GW(p)
My intuition is from the six points in Kahan's post. If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails. If we have an equal number of heads and tails left, P(HT) > P(HH) for the next two flips. After the first heads, the probability for the next two might not give P(TH) > P(TT), but relative to independence it will be biased in that direction because the first T gets used up.
Is there a mistake? I haven't done any probability in a while.
Replies from: gjm↑ comment by gjm · 2015-11-26T02:23:18.678Z · LW(p) · GW(p)
If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails.
No, that is not correct. Have a look at my list of 16 length-4 sequences. Exactly half of all flips-after-heads are heads, and the other half tails. Exactly half of all flips-after-tails are heads, and the other half tails.
The result of Miller and Sanjuro is very specifically about "averages of averages". Here's a key quotation:
We demonstrate that in a finite sequence generated by i.i.d. Bernoulli trials with probability of success p, the relative frequency of success on those trials that immediately follow a streak of one, or more, consecutive successes is expected to be strictly less than p
"The relative frequency [average #1] is expected [average #2] to be ...". M&S are not saying that in finite sequences of trials successes are actually rarer after streaks of success. They're saying that if you compute their frequency separately for each of your finite sequences then the average frequency you'll get will be lower. These are not the same thing. If, e.g., you run a large number of those finite sequences and aggregate the counts of streaks and successes-after-streaks, the effect disappears.
↑ comment by Viliam · 2015-11-25T09:33:42.556Z · LW(p) · GW(p)
However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.
...because heads occurring separately are on average balanced by heads occurring in long sequences; but limiting the length of the series puts a limit on the long sequences.
In other words, in an infinite sequence, "heads preceded by heads" and "heads preceded by tails" would be in balance; but if you cut out a finite subsequence, and its first flip was a "head preceded by a head", the act of cutting has reclassified it.
Am I correct, or is there more?
Replies from: gjm
comment by Elo · 2015-11-24T22:13:01.221Z · LW(p) · GW(p)
This week on the slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/
- AI - language/words as a storage-place for meaning.
- art and media - MGS V, Leviathan, SOMA, Undertale, advertising methods,
Business and startups - CACE ("Changing Anything Changes Everything") with respect to startups and machine learning. prediction.io. Meetings and how they cost businesses money: each person speaks, so the length of the meeting is O(n), and there are n people, so the total meeting cost is O(n^2); on the margin, adding one person to the standup means they listen to n people speak, and n people listen to them speak. Machine speech ability. Data wrangling is tedious. Data processing resources: data sources, computing power and blindness. "The whole world is simpler if greed is the primary motivator for everything". "People talk a lot about market failure but government failure is a thing too." VCs and extortionary practices. What is the intention of implementing UBI? (unanswered). "If the game-plan (the economy) changes - i.e. by automation, or basic income - the people with more resources will be able to adapt to it faster..." Wealth distribution.
Debating and rhetoric - we break apart discussions and arguments from other places... We analysed where an argument elsewhere shifted from discussion to disagreement (surprisingly early). A two-pronged approach to offence, in regards to:
- a statement could be taken offensively
- it was taken offensively by someone.
1: clean up the statement so that it is harder to take offensively (steelman)
2: encourage less personal offence from the original statement
both sides are needed to make discussions more productive.
Grice's Maxims of communication - https://en.wikipedia.org/wiki/Cooperative_principle
this is also interesting: http://www.smart-words.org/linking-words/transition-words.html
Effective altruism - EA Global have started hosting videos from this year's conference on their site. Duplicates of what is already up. Nothing at all from the Oxford conference yet. http://eaglobal.org/videos
goals of lesswrong - raising the sanity waterline before we drive the planet's humans extinct. How could the sanity waterline be raised:
- Changing the education system
- Getting enough influential writers
- Getting enough famous people to be rationalists so that people want to emulate them
- Creating a movie or TV series about rationalists
- Get enough rationalists within the population that everyone gains some understanding of rationalist ideas. Also: asking a few teachers about how you might go about teaching the LW ideas to the average person...
human relationships - living in different places, and the different cultures of doing so. Driving vs public transport and safety concerns. "Youthful optimism" and its contrasting "aging pessimism" as an exploration-exploitation problem: if we make the rough assumption that both exist and that at some point a youthful optimist transitions into an aging pessimist, what can we learn from that, and how can we benefit from knowing it is a natural process?
linguistics - the phrase "If I understand you correctly, you were saying...", followed by what you are saying next. It slows down a conversation, but keeps it clear.
Open - so many things! IQ / the sports gene (re: parable of talents), accountability groups, a big disagreement about this thing: http://lo-tho.blogspot.com/2014/12/epistemic-trust.html , http://www.informationisbeautiful.net/visualizations/rhetological-fallacies/ , QS data, case law and its influence on the law, and an analogy to edge testing in programming. Some discussion of the state of our Facebook feeds post-Paris-events. Some online courses, fighting death, advice about how to think about motivated cognition (the clever arguer) vs intellectual honesty (by which I just mean the lack of motivated cognition) in the case where one person has a really high probability for X and honestly believes that the argument is very one-sided.
The quotation you’re looking for is from Chesterton’s 1929 book, The Thing, in the chapter entitled, “The Drift from Domesticity”:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Parenting - (uncharacteristically quiet) some talk about video games that we let kids play
philosophy - is there a fundamental difference between peer relationships among men and peer relationships among women? I've often heard that men by default are indifferent to each other, while women by default are adversaries.
Response: sounds like armchair philosophy - which evolutionary characteristics or behaviours did we or didn't we pick up? Even if you found a population where that held true, I doubt it would hold true everywhere; it may have temporarily been true for some people at some point. But evolution is all about gaming the rules: as soon as anything becomes a "rule" in the sense of being a regularly repeated behaviour, some individual who was not winning under the rule would try to generate a different win-condition so that they could continue to win.
In summary: how could we know? And even if it were true for some time and place, I doubt it would last more than a handful of generations. (By "generate" I mean: randomly evolve a different pattern of behaviour.)
"how should we feel, emotionally, about the real world when the real world kind of sucks, and is there anything we should do about it?" [various ideas; not completely answered]
political talk - article: does gifted education exacerbate social inequality? Feminism/anti-feminism, SJWs and the memes associated with them, libertarianism.
programming - Codecademy!
Projects - Vlog plans, Nanowrimo, VR + presence and BDD, virtual assistant project, OKC method,
real life - Joylent/Soylent, food prep efficiency, vat-grown chicken meat, making meat consumption healthier, applying to universities, NASA and how they code, feeling safe generally in the world...
rss feed - we have an RSS feed that notifies the channel of any new post on LW or SSC.
resources and links - http://betterexplained.com/ , http://www.mruniversity.com/ , https://www.kickstarter.com/projects/969324769/the-cold-shoulder-pro-calorie-burning-vest?ref=popular , https://class.coursera.org/modelthinking/lecture , https://www.duolingo.com/ , http://www.trutv.com/shows/adam-ruins-everything/index.html , http://diyhpl.us/wiki/ , https://medium.com/the-exofiles/why-do-we-need-friendly-artificial-intelligence-ce20112f532b
Science and technology - the capital costs of a transition to renewables and new energy forms in general are huge. Legal issues of cryonics, and owning something while you are dead/not living (waiting for revival): our current legal system is set up so that dead people cannot own anything. DIYbio, autonomous vehicles and their failures (also failures of non-autonomous vehicles), space manufacturing...
welcome - everyone answers the questions: "Would you like to introduce yourself? Where are you from? What do you do with your time? What are you working on? What problems are you trying to solve?"
Feel free to join us. Active meetup time: A time to try to get lots of people online to talk about things is going to be chosen soon, probably a 12 hour window or so.
We have over 130 people who have signed up. Not nearly that many people are active, but each day something interesting happens...
last month on slack: http://lesswrong.com/r/discussion/lw/mwt/open_thread_oct_26_nov_01_2015/cuq5
comment by NancyLebovitz · 2015-11-25T13:54:16.486Z · LW(p) · GW(p)
Introverts, Extroverts, and Cooperation
As usual, a small hypothetical social science study, but I'm willing to play with the conclusion, which is that extroverts are more likely to cheat unless they're likely to get caught. It wouldn't surprise the hell out of me if introverts are more likely to internalize social rules (or are people on the autism spectrum getting classified as introverts?).
Could "publicize your charity" be better advice for extroverts and/or majority extrovert subcultures than for introverts?
Replies from: Lumifer↑ comment by Lumifer · 2015-11-25T15:29:20.603Z · LW(p) · GW(p)
extroverts are more likely to cheat unless they're likely to get caught
That's not what your link says. First, there is no cheating involved; we are talking about degrees of cooperation without any deceit. And second, it's not about "getting caught", it's about being exposed to the light of public opinion, which, of course, extroverts are more sensitive to.
comment by Bound_up · 2015-11-23T11:45:50.542Z · LW(p) · GW(p)
I've heard the Beatles have some recorded songs they never released because they were of too low quality. I think it would be worthwhile to study their material in its full breadth, mediocrity included, to get a sense for the true nature of the minds behind some greatness.
I've saved writings and poetry and raw, potentially embarrassing past creations for the sake of a similar understanding. I wish I had recordings of my initial fumblings with the instruments I now play rather better.
So it is in this general context of seeking fuller understanding that I ask if anyone knows where to find those legendary old writings from Eliezer Yudkowsky, reputed to be embarrassing in their hubris, etc.
Replies from: Viliam, None, NancyLebovitz↑ comment by Viliam · 2015-11-23T12:06:05.004Z · LW(p) · GW(p)
The "legendary old writings from Eliezer Yudkowsky" are probably easy to find, but I am not going to help you.
I do not like the idea of people (generally, not just EY) being judged for what they wrote many years ago. (The "sense for the true nature" phrasing sounds like a judgement is being prepared.)
Okay, I would make an exception in some situations; the rule of thumb being "more extreme things take longer to forget". For example, if someone had advocated genocide, or organized the murder of a specific person, then I would be suspicious of them even ten years later. But "embarrassing in their hubris"? Come on.
Replies from: IlyaShpitser, None↑ comment by IlyaShpitser · 2015-11-23T21:39:12.501Z · LW(p) · GW(p)
I don't think EY's ego got any smaller with time.
Replies from: polymathwannabe, None, Viliam↑ comment by polymathwannabe · 2015-11-25T22:27:40.973Z · LW(p) · GW(p)
Is it at all meaningful to you that EY writes this on his homepage?
You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.
It is true that EY has a big ego, but he also has the ability to renounce past opinions and admit his mistakes.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-26T17:54:22.873Z · LW(p) · GW(p)
Absolutely, it is meaningful.
↑ comment by Viliam · 2015-11-24T08:47:03.417Z · LW(p) · GW(p)
In the meantime he wrote the Sequences and HPMoR, and founded MIRI and CFAR. So maybe the distance between his ego and his real output got smaller.
Also, as Eliezer mentions in the Sequences, he used to have an "affective death spiral" about "intelligence", which is probably visible in his old writings, and contributes to the reader's perception of "big ego".
I don't really mind big egos as long as they drive people to produce something useful. (Yeah, we could have a separate debate about how much MIRI or HPMoR are really useful. But the old writings would be irrelevant for that debate.)
Replies from: IlyaShpitser, None↑ comment by IlyaShpitser · 2015-11-24T15:17:39.037Z · LW(p) · GW(p)
Here is what you sound like:
"But look at all this awesome fan fiction, and furthermore this 'big ego' is all your perception anyways, and furthermore I don't even mind it."
Why so defensive about EY's very common character flaws (which don't really require any exotic explanation, btw, e.g. think horses not zebras)? They don't reflect poorly on you.
EY's past stuff is evidence.
Replies from: Viliam↑ comment by Viliam · 2015-11-25T09:10:36.075Z · LW(p) · GW(p)
I'm defensive about digging into people's pasts only to laugh that, as teenagers, they had the usual teenage hubris (and maybe, as highly intelligent people, kept it for a few more years)... and then using it to hint that even today, 'deep inside', they are 'essentially the same', i.e. not worth taking seriously.
What exactly are we punishing here; what exactly are we rewarding?
Ten or more years ago I also had a few weird ideas. My advantage is that I didn't publish them in visible places in English, and that I didn't become famous enough for people to now spend their time digging into my past. Also, I kept most of my ideas to myself, because I didn't try to organize people into anything. I didn't keep a regular diary, and when I find some old notes, I usually just cringe and quickly destroy them.
(So no, I don't care about any of Eliezer's flaws reflecting on me, or anything like that. Instead I imagine myself in a parallel universe, where I was more agenty and perhaps less introverted, so I started to spread my ideas sooner and wider, had the courage to try changing the world, and now people are digging up similar kinds of my writings. Generally, this is a mechanism for ruining sincere people's reputations: find something they wrote when they were just as sincere as now only less smart, and make people focus on that instead of what they are saying today.)
I guess I am oversensitive about this, because "pointing out that I failed at something a few years ago, therefore I shouldn't be trusted to do it, ever" was something my mother often did to me while I was a teenager. People grow up, damn it! It's not like once a baby, always a baby.
Everyone was a baby once. The difference is that for some people you have the records, and for other people you don't; so you can imagine that the former are still 'deep inside' baby-like and the latter are not. But that's confusing the map with the territory. As the saying goes, "an expert is a person who came from another city" (so you have never seen their younger self). As the fictional evidence proves, you could have literally godlike powers, and people would still diss you if they knew you as a kid. But today, on the internet, everything is one big city, and anything you say can get documented forever. (Knowing this, I will forbid my children to use their real names online. Which probably will not help enough, because twenty years later there will be other methods for easily digging into people's pasts.)
Ah, whatever. It's already linked here anyway. So if it makes you feel better about yourself (returning the courtesy of online psychoanalysis) to read stupid stuff Eliezer wrote in the past, go ahead!
EDIT: I also see this as a part of a larger trend of intelligent people focusing too much on attacking each other instead of doing something meaningful. I understand the game-theoretical reasons for that (often it is easier to get status by attacking other people's work than presenting your own), but I don't want to support that trend.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-25T18:14:35.113Z · LW(p) · GW(p)
EY is not a baby, and was not a baby in the time period under discussion. He is in his mid thirties today.
I have zero interest in gaining status in the LW/rationalist community. I already won the status tournament I care about. I have no interest in "crabbing" for that reason. I have no interest in being a "guru" to anyone. I am not EY's competitor, I am involved in a different game.
Whether my being free of the confounding influence of status in this context makes me a more reliable narrator, I will let you decide.
What I am very interested in is decomposing cult behavior into constituent pieces to try to understand why it happens. This is what makes LW/rationalists so fascinating to me -- not quite a cult in the standard Scientology sense, but there is definitely something there.
Replies from: Viliam, Lumifer, OrphanWilde↑ comment by Lumifer · 2015-11-25T19:02:46.224Z · LW(p) · GW(p)
This is what makes LW/rationalists so fascinating to me
Welcome to the zoo! Please do not poke the animals with sticks or throw things at them to attract their attention. Do not push fingers or other objects through the fences. We would also ask you not to feed the animals, as it might lead to digestive problems.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-11-25T19:15:26.769Z · LW(p) · GW(p)
It's an interesting zoo, where all the exhibits think they're the ones visiting and observing...
Replies from: Viliam, Lumifer↑ comment by OrphanWilde · 2015-11-25T18:47:47.781Z · LW(p) · GW(p)
Downvote explanation:
Using claim of immunity to status and authority games as evidence to assert a claim.
Which is to say, you are using a claim of immunity to status and authority games to assert status and authority.
Yes, that's right out of my own playbook, too. I welcome anybody who catches me at it to downvote me and let me know I've done it, as it is an insidious logical mistake I find impossible to catch myself making.
Replies from: philh, IlyaShpitser↑ comment by philh · 2015-11-26T14:43:10.228Z · LW(p) · GW(p)
I don't understand your objection.
Using claim of immunity to status and authority games as evidence to assert a claim.
Which is to say, you are using a claim of immunity to status and authority games to assert status and authority.
Asserting a claim is not the same thing as asserting status and authority.
I'm not sure what you want from Ilya here. He seems to be describing his motivations in good faith. Do you think he's lying to gain status? Do you think he's telling the truth, but gaining status as a side effect, and he shouldn't do that?
Quick edit: Oh, I should probably have read the rest of the thread. I think I understand your objection now, but I disagree with it.
↑ comment by IlyaShpitser · 2015-11-25T19:32:43.949Z · LW(p) · GW(p)
I am not claiming status and authority (I don't want it), I am saying EY has a big ego. I don't think I need status and authority for that, right?
Say I did gain status and authority on LW. What would I do with it? I don't go to meetups, I hardly interact with the rationalist community in real life. What is this supposed status going to buy me, in practice? I am not trying to get laid. I am not looking to lead anybody, or live in a 'rationalist house,' or write long posts read by the community. Forget status, I don't even claim to be a community member, really.
I care about status in the context relevant to me (my academic community, for example, or my workplace).
Or, to put it simply, you guys are not my tribe. I just don't care enough about status here.
Replies from: OrphanWilde, Lumifer↑ comment by OrphanWilde · 2015-11-25T20:17:29.667Z · LW(p) · GW(p)
You're claiming to have status and authority to make a particular claim about reality - "Outsider" status, a status which gains you, with respect to adjudication of insider status and authority games... status and authority.
Now, your argument could stand or fall on its own merits, but you've chosen not to permit this, and instead have argued that you should be taken seriously on the merits of your personal relationship to the group (read: taken to have status and authority relative to the group, at least with respect to this claim).
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-25T21:10:08.447Z · LW(p) · GW(p)
[edit: I did not downvote anyone in this thread.]
You're claiming to have status and authority to make a particular claim about reality
I am? Is that how we are evaluating claims now?
Here is how this conversation played out (roughly paraphrased):
me: EY has a big ego.
Viliam: I wish you would stop digging up people's youthful indiscretions like that. Why not go do impressive things instead, why be a hater?
me: EY wasn't young in the time period involved. Also, I have my own stuff going on, thanks! Also, I think this EY dynamic isn't healthy.
you: Argument from status!
me: Don't really want status here, have my own already.
you: You are claiming status by signaling you don't want/need status here! And then using that to make claims!
(At this point if I claim status I lose, and if I don't claim status I also lose.)
Well, look. Grandiose dimensions of EY's ego are not a secret to anyone who actually knows him, I don't think. I think slatestar even wrote something about that.
If you don't think I am being straight with you, and I am playing some unstated game, that's ok. If you have time and inclination, you can dig around my post history and try to figure that out if you care. I would be curious what you find.
I think it is fair to call myself an outsider. I don't self-identify as rationalist, and I don't get any sort of emotional reaction when people attack rationalists (which is how you know what your tribe is). I don't think rationalists are evil mutants, but I think unhealthy things are going on in this community. You can listen to people like me, or not. I think you should, but ultimately your beliefs are your own business. I am not going to spend a ton of energy convincing you.
Replies from: OrphanWilde, polymathwannabe↑ comment by OrphanWilde · 2015-11-25T21:16:42.034Z · LW(p) · GW(p)
If you don't think I am being straight with you, and I am playing some unstated game, that's ok.
I think you're being as completely straight and honest as you are humanly capable of being. I think you also overestimate the degree to which you're capable of being straight and honest. What's your straightest and most honest answer to the question of what probability you assign to the possibility that your actions can be influenced by subconscious status concerns?
Which is to say: Status games are a bias. You're claiming to be above bias. I believe you believe that, but I don't believe that.
↑ comment by polymathwannabe · 2015-11-25T22:20:11.352Z · LW(p) · GW(p)
I think unhealthy things are going on in this community.
Please elaborate.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-26T17:38:14.399Z · LW(p) · GW(p)
As I said, I don't think rationalists are actually a cult in the way that Scientology is a cult. But I think there are some cult-like characteristics to the rationalist movement (and a big part of this is EY's position in the movement).
And I think it would be a good idea for the movement to become more like colleagues, and less like what they are now. What I find somewhat disappointing is that both EY and a fair bit of the rank and file like things as they are.
Replies from: Tem42↑ comment by Tem42 · 2015-11-26T20:55:03.723Z · LW(p) · GW(p)
I don't know if this matters. I don't particularly care for the Sequences, but that hasn't caused me any problems at all. LessWrong has been an easy site to get into and to learn from, and would be even if I never read anything by EY. (This seems to be true for most aspects of the site; LessWrong is useful even if you don't care about AIs, transhumanism, cybernetics, effective altruism.... there's enough here that you can find plenty to learn.)
You may be seeing the problem as bigger than it is because of the lens that you are looking through, although I agree that charisma is an interesting thing to study, and was central to the development of the site.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-28T17:39:31.239Z · LW(p) · GW(p)
It's not just LW, it's the invisible social organization around it.
"Culty" dynamics matter. It's dangerous stuff to be playing with.
↑ comment by Lumifer · 2015-11-25T19:42:27.907Z · LW(p) · GW(p)
What would I do with it?
Bask in the glory? :-)
You might be an exception, but empirically speaking people tend to value their status in online communities, including communities members of which they will never meet in meatspace and which have no effect on their work/personal/etc. life.
Biologically hardwired instincts are hard to transcend :-/
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-11-25T19:44:38.290Z · LW(p) · GW(p)
I think one difference is, I am a bit older than a typical LW member, and have someplace to "hang my hat" already. As one gets older and more successful, one gets less status-anxious.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2015-11-25T20:20:35.247Z · LW(p) · GW(p)
As one gets older and more successful, one gets less status-anxious.
Which is why you're spending time assuring us that you're high-status?
Replies from: gjm↑ comment by gjm · 2015-11-26T00:05:17.706Z · LW(p) · GW(p)
Ilya's comments about status could indeed be explained by the hypothesis that he's attempting some kind of sneaky second-order status manoeuvre. They could also be explained by his meaning what he says and genuinely not caring much (consciously or otherwise) about status here on LW. To me, the second looks at least as plausible as the first.
More precisely: I doubt anyone is ever completely 100% unaffected by status considerations; the question is how much; Ilya's claim is that in this context the answer is "negligibly"; and I suggest that that could well be correct.
You may be correct to say it isn't. But if so, it isn't enough just to observe that someone motivated by status might say the things Ilya has, because so might someone who in this context is only negligibly motivated by status. You need either to show us something Ilya's doing that's substantially better explained in status-seeking terms, or else give a reason why we should think him much more likely to be substantially status-seeking than not a priori.
[EDITED to add: I have no very strong opinion on whether and to what degree Ilya's comments here are status manoeuvres.]
↑ comment by [deleted] · 2015-11-24T17:40:21.632Z · LW(p) · GW(p)
He literally wrote plans for what he would do with the billions of dollars the Singularity Institute would be bringing in by 2005, using the words 'silicon crusade' to describe its actions to bring about the singularity and an interstellar supercivilization by 2010, so as to avoid the apocalyptic nanotech war that would have started by then without their guidance. He also went on and on and on about his SAT scores in middle school (which are lower than those of one of my friends, taken via the same program at the same age) and how they proved he is a mutant supergenius who is the only possible person who can save the world.
I am distinctly unimpressed.
↑ comment by [deleted] · 2015-11-23T22:14:25.039Z · LW(p) · GW(p)
For many types of problems, analyzing how a system changed over time is a more effective way to understand it than comparing one system's present state with another system's present state.
Replies from: MrMind↑ comment by [deleted] · 2015-11-24T17:40:35.684Z · LW(p) · GW(p)
These are so much fun to read!
(snapshot times chosen more or less at random, and specific pages are what I consider the highlights)
https://web.archive.org/web/20010204095400/http://sysopmind.com/beyond.html
(contains links to everything below and much more)
https://web.archive.org/web/20010213215810/http://sysopmind.com/sing/plan.html (his original founding plans for the singularity institute, extremely amusing)
https://web.archive.org/web/20010606183250/http://sysopmind.com/singularity.html
http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer (some... exceptional quotes in here and you can follow links)
https://web.archive.org/web/20010309014808/http://sysopmind.com/eliezer.html
https://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html
More can be found poking around on web archive and youtube and vimeo. Even more via PM.
↑ comment by NancyLebovitz · 2015-11-24T00:02:07.553Z · LW(p) · GW(p)
I don't think Eliezer's changes in hubris level are what's interesting-- he's had some influence, and no one seems to think his earliest work is his best. It might make sense to find out how his writing has changed over time.
comment by ike · 2015-11-29T00:39:33.808Z · LW(p) · GW(p)
The Guardian had an interesting article on biases. It makes a similar point to http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/
comment by [deleted] · 2015-11-28T02:24:18.536Z · LW(p) · GW(p)
I recall a tool, by Wei Dai if I'm not mistaken, which will display all of a user's posts and comments ever on one page. I was wondering if anyone had the link. Perhaps we could get a wiki page with all of the LessWrong widgets like this for reference? I am not authorised to make wiki pages myself.
Replies from: gjm
comment by [deleted] · 2015-11-27T05:00:48.157Z · LW(p) · GW(p)
What are you working on?
Do you need help?
Replies from: None↑ comment by [deleted] · 2015-11-27T12:11:54.735Z · LW(p) · GW(p)
Are you offering help to people, or just curious about support networks? I'm mainly trying to motivate myself to write up a paper on relatively old data: dealing with my usual problem that I am more excited about newer projects, even though the older ones are not completed. Help would be nice but it's essentially my sole responsibility to prepare a first draft, after which my coauthors will contribute.
What are you working on, and do you need help?
Replies from: None, Elo↑ comment by [deleted] · 2015-11-28T00:20:44.714Z · LW(p) · GW(p)
I'm prompting discussion of these things in case any parties would like to help and/or be helped. Sometimes people who want to help don't feel like starting the discussion, and the same goes for those who want help. But if we're all just mentioning what we're doing, perhaps people can help in ways we hadn't even thought of.
I'd be happy to help if my skills and interest set matches your hopes for a coauthor. I highly doubt that since I'm just a lowly grad student.
I'm working on a social enterprise, my rationality, working out some procedural things with two collaborators on two separate projects, and getting my notes and records better organised. Don't really need any help from online for those things except rationality, and I make pleas for help about that here all the time anyway. Thanks for asking.
Replies from: None↑ comment by [deleted] · 2015-11-30T10:18:21.713Z · LW(p) · GW(p)
I see... but buried deep in the open thread it's not likely to be seen by many, and it wasn't very clear what you were trying to get out of such a brief, open-ended comment when it was originally posted.
For example, I misunderstood your intent, and thought you were talking more generally about problem solving and social support, vs. requesting help from LW's users.
↑ comment by Elo · 2015-11-28T10:03:35.625Z · LW(p) · GW(p)
I am interested in the paper on the topic; if you drop what you have into a google doc and PM me the link I will add my thoughts. (I have similar troubles with old/new projects)
Replies from: None↑ comment by [deleted] · 2015-11-30T10:10:52.954Z · LW(p) · GW(p)
Sorry, my comment was ambiguous - I am not writing a paper on this subject but am struggling with finishing old projects on other topics, while being seduced by novelty. Writing up my thoughts on old/new projects would make the problem worse as this is well outside the field I need to make progress in to keep a desk over my head.
Replies from: Elo↑ comment by Elo · 2015-11-30T19:41:13.988Z · LW(p) · GW(p)
a suggestion: If you consider the salience of completion more strongly, you might be able to motivate yourself to complete a half-done project sooner than a zero-done project.
Obviously the draw of the new-shiny project is significant and likely to be more interesting because it is novel. The finishing reward is further away though.
Consider: Making a list of what is left to do on this existing project. You might be suffering from a difficulty in knowing what to do next (which masks itself in akrasia and new-shiny-project feelings). At some point, after doing all the obviously easy parts of the project, we are left with the not-obviously easy parts (if all the parts were obvious and easy we would be done with the task).
comment by polymathwannabe · 2015-11-26T20:33:12.693Z · LW(p) · GW(p)
In the news:
Nassim Taleb is an inverse stopped clock.
Replies from: username2, ChristianKl↑ comment by username2 · 2015-11-29T21:42:29.077Z · LW(p) · GW(p)
When Nassim Taleb's predictions fail and someone points that out, he calls that person a fucking idiot.
↑ comment by ChristianKl · 2015-11-26T22:37:56.138Z · LW(p) · GW(p)
The main complaint seems to be that Taleb violates an orthodoxy, not that he's factually wrong. On the issue of costs, the cited paper says:
This cost-saving potential has been supported by several studies that compared homeopathy with conventional medicine. However, our own health economic evaluations did not show a consistent picture. We observed no differences in costs [15] or additional costs [16,17] in the homeopathic group compared to conventional care depending on the setting or diagnosis. [...] A recent systematic review by Viksveen on the cost-effectiveness of homeopathy showed that in eight out of fourteen studies, the homeopathic treatment was less cost-intensive than the conventional treatment; in four studies, the treatment costs were similar; and in two studies, the homeopathic treatment was more costly than conventional treatment
There are observed cases where homeopathy did lead to cost savings as Taleb suggests.
Interestingly the cited PLoS paper puts people who don't take homeopathy into the homeopathy group based on the fact that they could get it for free:
For this analysis, patients belonged to the homeopathy group if they subscribed to the integrated care contract in 2011 and if they were continuously insured through the TK for the observational period (12 months before and 18 months after subscription to the integrated care contract), regardless of whether they used homeopathy during the study period."
comment by [deleted] · 2015-11-24T05:22:50.883Z · LW(p) · GW(p)
If anybody is interested in a Moscow postrationality meetup, please comment here or PM me. Thanks!
comment by Gunslinger (LessWrong1) · 2015-11-23T15:40:32.946Z · LW(p) · GW(p)
If molecular interactions are deterministic, are all universes identical?
Replies from: Viliam, polymathwannabe, MrMind↑ comment by Viliam · 2015-11-23T21:07:17.316Z · LW(p) · GW(p)
Depends on what you mean by "deterministic" (and "universe").
1) Do you assume each interaction has only one outcome, or are multiple outcomes (in different Everett branches) possible?
2) Do you assume all universes started in the same state? Molecular interactions in an existing universe are a different topic than the "creation of the universe".
↑ comment by polymathwannabe · 2015-11-23T18:45:11.817Z · LW(p) · GW(p)
In a universe where molecular interactions are deterministic, I don't see any additional universes emerging.
↑ comment by MrMind · 2015-11-24T08:22:41.887Z · LW(p) · GW(p)
If by deterministic you mean informationally deterministic, that is, that with complete information we could predict any future state (barring complexity), then we most definitely know that molecular interactions are not deterministic.
However, even hypothesizing a deterministic universe, you could have different starting conditions that would evolve into different universes, and while you are at it, why not postulate different deterministic laws?
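To make the starting-conditions point concrete, here is a minimal sketch (the logistic map is just a stand-in for "deterministic laws"; the particular numbers are arbitrary) showing that one fixed deterministic rule still produces different histories from different initial states:

    # A deterministic update rule: x -> r * x * (1 - x) (the logistic map).
    # Same law, different starting conditions, different "universes".
    def evolve(x0, steps=20, r=3.9):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = evolve(0.200)
    b = evolve(0.201)  # a nearly identical starting condition
    print(f"after 20 steps: {a[-1]:.4f} vs {b[-1]:.4f}")  # trajectories have diverged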
comment by Soothsilver · 2015-11-23T11:25:31.501Z · LW(p) · GW(p)
Do you know of any remedy or prevention for hiccups? I can't get anything trustworthy out of the internet or out of friends and family. All just anecdotes.
Replies from: None, Manfred, Elo, moridinamael↑ comment by [deleted] · 2015-11-23T11:42:57.493Z · LW(p) · GW(p)
There's a very extensive medical literature - although mostly focusing upon persistent (>48 hours) or intractable (>1 month) hiccups. One possible remedy jumped out at me from the Google Scholar results: the title alone gives the game away (albeit N=1):
Odeh, M., Bassan, H., & Oliven, A. (1990). Termination of intractable hiccups with digital rectal massage. Journal of internal medicine, 227(2), 145-146.
A very recent review by Steger et al (2015) gives good coverage of "state of the art" in acute hiccups:
In acute hiccups, physical manoeuvres are often effective (Table 2). Many of these ‘remedies’ have not been tested and some appear to have been invented ‘purely for the amusement of the patient's friends’.[23] The principle that links these manoeuvres is the attempt to interrupt or suppress the reflex arc (Figure 1) thought to maintain repetitive diaphragmatic contractions.[8, 12] This is most often attempted by breath holding, the Valsalva manoeuvre or rebreathing into a paper bag. Physiological studies have demonstrated a mechanism by which these manoeuvres improve hiccups, with the frequency of hiccups decreasing as arterial pCO2 rises.[9] This experimental evidence, backed up by personal experience of the senior author, suggests that an effective method to interrupt hiccups is to hold ones breath in expiration (diaphragm relaxed, pCO2 high). Other techniques that can lead to cessation of hiccups involve stimulation of the nose, ear or throat (e.g. ice cold drinks), eyeball pressure, carotid massage or self-induced vomiting. Techniques that ‘push against’ the diaphragm by drawing up the legs to the chest (i.e. ‘rolling into a ball’) may also be helpful. Rectal massage and sexual stimulation have also been reported to help[24, 25]; however, we recommend that this kind of recommendation is reserved for carefully selected patients!
before concluding in case of persistent/intractable hiccups:
This systematic review revealed no high quality data on which to base treatment recommendations.
Steger, M., Schneemann, M., & Fox, M. (2015). Systemic review: the pathogenesis and pharmacological treatment of hiccups. Alimentary pharmacology & therapeutics, 42(9), 1037-1050.
Further note: reference [23] above in Steger et al (2015) is "Watterson B. The Complete Calvin and Hobbes. Kansas City, MO: Andrews McMeel Publishing, 2005."
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2015-11-23T14:30:08.840Z · LW(p) · GW(p)
I've got a method that's reliable for me. I pay attention to how I feel between hiccups, observe what seems like a hiccuppy feeling (in the neighborhood of my diaphragm), and make myself stop feeling it.
↑ comment by Manfred · 2015-11-23T19:16:28.268Z · LW(p) · GW(p)
Well, I know of some remedies, but they're also anecdotal :)
All the good ones I know are essentially breathing exercises, where you have to pay close attention to your breathing for a while (i.e. take control of your diaphragm). Like the classic "drink a glass of water from the far side of the glass" is actually a breathing exercise, which works just as well if you just do the breathing without the glass of water.
↑ comment by Elo · 2015-11-24T22:51:19.769Z · LW(p) · GW(p)
Agree with others: the diaphragm is the muscle underneath the lungs that controls your breathing, and hiccups are caused by irritation of the diaphragm. Knowing this, you are looking for methods of relaxing the diaphragm. That includes generally trying to work out how to control the automatic muscle, and figuring out how to calm it down.
As for trustworthy, or better-than-anecdotes: you can get surgery if it's a long-term (over several months) problem. How do you relax the diaphragm, for your particular human hardware? Likely differently from other humans' hardware - so not much luck finding non-anecdote solutions.
↑ comment by moridinamael · 2015-11-23T22:08:37.421Z · LW(p) · GW(p)
This works for me: Pour yourself a glass of water and hold it in one hand. Lift your arms up, reaching for the ceiling - this movement has the consequence of lifting your ribcage. Drink a few swallows from the glass of water without dropping your ribcage from its elevated orientation. Do this a few times.
comment by [deleted] · 2015-11-28T04:47:06.282Z · LW(p) · GW(p)
The other day I met a woman named [common first name redacted out of respect for the commentator's recommendation] near the train station. I was just sitting and eating lunch, and she came over to chat. She had recently been ill with lithium toxicity, in hospital. She attends the same (mental) health complex as me. She was lovely, lonely, dated younger guys. She mentioned that her money is controlled by a State Trust to an extent, and that her last boyfriend continues to abuse her financially, and occasionally physically. She mentioned the police have recommended she break up with him, but she says that she loves him. We swapped numbers. Anything I can do for her?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-11-28T18:57:31.120Z · LW(p) · GW(p)
As a first protective measure, don't publish her name on the internet.
She has already contacted the police, and they have already given her the best advice available.
The rest is up to her.
comment by [deleted] · 2015-11-25T09:37:21.981Z · LW(p) · GW(p)
'Noisy text analytics': has anyone trialled applying those algorithms in their minds to human conversations or text messaging (say, through Facebook) to filter information in real life? Was it more efficient than your default or non-volitional approach?
comment by [deleted] · 2015-11-23T13:15:43.933Z · LW(p) · GW(p)
How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?
Replies from: Stingray
comment by [deleted] · 2015-11-28T01:43:48.545Z · LW(p) · GW(p)
I have a student email account that forwards messages to my personal gmail account. Sometimes I have to send messages from my student gmail account. Can these get automatically moved to my personal gmail sent folder so that I can find them with one search?
Replies from: ike
comment by [deleted] · 2015-11-28T00:28:10.184Z · LW(p) · GW(p)
What will Google's new semantic search mean for search strategy?
comment by [deleted] · 2015-11-25T08:07:11.052Z · LW(p) · GW(p)
What, other than an interest in the commercial success of the car lot business, normative social influence and scrupulosity (all tenuous), stops someone from taking a second ticket (on foot) from a gated car park and then immediately paying that one off when leaving, rather than paying the original entry ticket?
Replies from: Elo, Richard_Kennaway, ChristianKl↑ comment by Richard_Kennaway · 2015-11-25T11:06:38.909Z · LW(p) · GW(p)
What, other than an interest in the commercial success of the car lot business, normative social influence and scrupulosity (all tenuous)
These are what holds society together. These are what society is -- including the bit about commercial success.
But have you tried? The entry barriers only issue a ticket when there's a car in front of them. That's how it works at the car parks I'm familiar with that use that system.
And, to continue the discussion of why your karma is so persistently low, this is something you might have thought of before posting. See also.
↑ comment by ChristianKl · 2015-11-25T10:15:15.422Z · LW(p) · GW(p)
Why don't people steal from other people if nobody is looking? General ethics.
comment by WhyAsk · 2015-11-24T23:44:55.091Z · LW(p) · GW(p)
Any US lawyers here?
A woman who once worked in a law office told me that clients come and go (she used the word 'ephemeral'), so the real allegiance for a lawyer is to other lawyers. Because they will see them again and again.
And Game Theory has something to say about how to treat a person that you are not likely to see again.
Please, folks, do not ask me to justify this "hearsay". I found her credible, so please take this woman's word as gospel, as an axiom, and go from there.
Please confirm, deny, explain or comment on her statement.
TIA.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-11-25T01:11:42.769Z · LW(p) · GW(p)
A "person that you are not likely to see again" is not a complete description of a lawyer's client; it's missing the part where "this person pays me for my services so I need many of this person in order to make a living."
Replies from: WhyAsk↑ comment by WhyAsk · 2015-11-27T00:35:50.918Z · LW(p) · GW(p)
Your post reminds me of something.
If there is a huge disparity of power between the lawyer and you, Game Theory kind of "goes out the window".
Right?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2015-11-27T00:56:29.504Z · LW(p) · GW(p)
The fact that I have never hired a lawyer may be a factor in my difficulty imagining a scenario where your lawyer turns into your opponent in a power struggle; I see it as more likely to happen between you and your opponent's lawyer.
High-profile lawyers with a lot of power don't tend to be hired by ordinary people with little power. In any case, it is in your lawyer's interests that your interests get served. Besides, what you could lose in the worst scenario is that one lawsuit (and possibly money and/or jail time); what your lawyer has to lose in the worst scenario is reputation, future clients, and the legal ability to practice law.
Replies from: Viliam↑ comment by Viliam · 2015-11-27T07:00:49.023Z · LW(p) · GW(p)
Imagine the following situation: we are having a lawsuit against each other. Let's say it is already obvious for both of our lawyers which side is going to win, but it is not so obvious for us.
The lawyers have an option to do it quickly and relatively cheaply. But they also have an option to charge each of us for extra hours of work, if they tell us it is necessary. Neither option will change the outcome of the lawsuit. But it will change how much money the lawyers get from us.
In such a case, it would be rational for the lawyers to cooperate with each other, against our interests.
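As a toy illustration of that incentive (a minimal sketch; the fees are invented numbers, not anything from a real fee schedule):

    # Each lawyer chooses to settle "quick" or to "pad" extra hours.
    # Padding pays only if the opposing lawyer plays along; the clients,
    # who each play this game only once, simply pay whatever it costs.
    QUICK_FEE = 10   # fee when the case is resolved quickly and cheaply
    PADDED_FEE = 25  # fee after billing "necessary" extra hours of work

    def lawyer_payoff(mine, theirs):
        """Payoff to one lawyer, given both lawyers' choices."""
        if mine == "pad" and theirs == "pad":
            return PADDED_FEE
        return QUICK_FEE  # if either side pushes for speed, the case ends early

    for mine in ("quick", "pad"):
        for theirs in ("quick", "pad"):
            print(f"I play {mine}, they play {theirs}: I earn {lawyer_payoff(mine, theirs)}")

Since the two lawyers expect to meet each other again in future cases, mutual padding is self-reinforcing, while each one-shot client has no way to retaliate.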
Replies from: WhyAsk, polymathwannabe↑ comment by WhyAsk · 2015-11-27T22:08:45.958Z · LW(p) · GW(p)
That's been my experience, and any questions about "How much more is this going to cost me?" are not received well.
Almost every lawyer I've hired or dealt with gave me almost nothing for my money. And good luck trying to get a bad lawyer disbarred.
What I should probably do is solicit bids for a particular legal problem.
↑ comment by polymathwannabe · 2015-11-27T15:10:03.416Z · LW(p) · GW(p)
In this example the obvious culprit is the practice of charging by the hour, which I've always found a terrible idea.
comment by [deleted] · 2015-11-23T10:15:35.231Z · LW(p) · GW(p)
What would happen if an altcoin was developed where users had to precommit to not forking that coin?
Replies from: Viliam, ChristianKl↑ comment by Viliam · 2015-11-23T11:59:06.108Z · LW(p) · GW(p)
How exactly could users of something anonymous precommit to not do something?
Replies from: None↑ comment by [deleted] · 2015-11-24T03:22:11.548Z · LW(p) · GW(p)
volition
Replies from: Viliam, NancyLebovitz↑ comment by NancyLebovitz · 2015-11-24T14:05:56.045Z · LW(p) · GW(p)
They could do it, but there wouldn't be any strong reason for anyone to trust that they would.
↑ comment by ChristianKl · 2015-11-23T12:34:29.506Z · LW(p) · GW(p)
It wouldn't be much different from the status quo. None of the direct forks of bitcoin currently compete with bitcoin for the core purpose of being a currency rather than just speculation.
comment by [deleted] · 2015-11-28T00:31:46.693Z · LW(p) · GW(p)
Could someone make a text analytics widget for LessWrong?
Replies from: None
comment by [deleted] · 2015-11-29T11:38:40.266Z · LW(p) · GW(p)
Charity Science is running an internship program for their charity entrepreneurship program. Great concept, cheesy name. The people going after internships are probably young, the same people 80K tends to advise to pursue career capital - advice I'd commend. I reckon Charity Science is gonna be out of luck. Get back in shape, Charity Science! You're an important, less risk-averse player in the EA organisational space.
comment by [deleted] · 2015-11-29T11:23:43.873Z · LW(p) · GW(p)
What do you think of when you think (off the coast) of West Africa? Jamestown, St. Helena: negligible crime, a tidy urban plan, the awesomest climate in the English-speaking world, low unemployment... yet the local youth complain about economic productivity, wages and the cost of living. The majority of employment on the island is in the government. Some parts of the U.K. are more socialist than others!
Having just reviewed some job advertisements for the island:
Evaluation & assessment of expression of interest
The evaluation will be in accordance with the following criteria
Evaluation Criteria / Weighting
Capability and experience in type and scale of work / 40
References regarding previous performance in relation to time, cost, and quality / 30
Resources and availability / 30
They quantitatively weight elements of their selection criteria? Damn. Rationalist paradise up in this biyatch.
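For what it's worth, a minimal sketch of how such a weighted selection might be computed; the weights come from the advertisement, but the 0-10 rating scale and the example candidate are invented assumptions:

    # Weighted scoring per the advertised criteria.
    # Weights are from the job ad; the candidate's 0-10 ratings are made up.
    WEIGHTS = {
        "capability and experience in type and scale of work": 40,
        "references (time, cost and quality)": 30,
        "resources and availability": 30,
    }

    def weighted_score(ratings):
        """Combine 0-10 ratings per criterion into a 0-100 weighted score."""
        total_weight = sum(WEIGHTS.values())  # 100 here, but don't assume it
        return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS) / total_weight * 10

    candidate = {
        "capability and experience in type and scale of work": 8,
        "references (time, cost and quality)": 6,
        "resources and availability": 9,
    }
    print(f"weighted score: {weighted_score(candidate):.1f} / 100")  # -> 77.0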