Open thread, Oct. 20 - Oct. 26, 2014
post by MrMind · 2014-10-20T08:12:13.056Z · LW · GW · Legacy · 270 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by Baisius · 2014-10-21T08:04:14.520Z · LW(p) · GW(p)
I have a question about the Effective Altruism community that's always bothered me. Why does the movement not seem to have much overlap with the frugality/early retirement movement? Is it just that I haven't seen it? I read a number of sites (my favorite being Mr Money Mustache, who retired at age 30 on a modest engineer's salary) that focus on early retirement through (many would say extreme) frugality. I wouldn't expect that this, or something close to it, would be hard for most people in the demographic of this site. It seems to me that the two movements have a lot in common, mainly attempting to get people to be more responsible with their money. If you take as an axiom that, for members of the EA movement, marginal income/savings equals increased donations, it seems as though there is tremendous opportunity for synergies between the two.
Replies from: RowanE, Evan_Gaensbauer, jpl68, drethelin, None
↑ comment by RowanE · 2014-10-21T08:17:37.068Z · LW(p) · GW(p)
Possibly donating money is easier when it's funging against luxuries than when it's funging against early retirement, and it's hard for people who don't plan on retiring early to read and follow frugality advice that's framed in terms of how much better financial independence is than whichever luxury?
Replies from: beoShaffer
↑ comment by beoShaffer · 2014-10-24T02:17:05.606Z · LW(p) · GW(p)
I have found this to be the case. I still find the advice useful, but find myself thinking about how I'm going to retire early before remembering there was another reason I was saving that money.
Replies from: cameroncowan
↑ comment by cameroncowan · 2014-10-24T04:27:37.837Z · LW(p) · GW(p)
It takes roughly 2.5 million dollars invested prudently with a return of 7% per annum in order to live off savings. You would have to be earning a great deal and live extremely frugally in order to accomplish that. However, there are people who have done it and retired from daily work at 35. That said, given student debt and the like, I think it is harder now than ever before. I have an issue with the extreme altruism movement and the early retirement crowd because I think there is a loss of meaning in both.
Replies from: army1987, Lumifer, RowanE, CAE_Jones
↑ comment by A1987dM (army1987) · 2014-10-24T11:40:31.767Z · LW(p) · GW(p)
It takes roughly 2.5 million dollars invested prudently with a return of 7% per annum in order to live off savings.
Why, can you not live on less than $175k/year?
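(For concreteness, a minimal sketch of the arithmetic both figures rest on; the simple fixed-return model and the $20k example income are assumptions for illustration, not advice.)

```python
# Income implied by a principal at a fixed annual return, and the
# principal required to fund a target income -- the thread's arithmetic.

def annual_income(principal, rate):
    """Income thrown off each year by `principal` at return `rate`."""
    return principal * rate

def required_principal(target_income, rate):
    """Principal needed to fund `target_income` at return `rate`."""
    return target_income / rate

print(annual_income(2_500_000, 0.07))    # 175000.0 -- army1987's figure
print(required_principal(20_000, 0.07))  # ~285,714 -- far less than $2.5M
```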
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-10-27T17:52:01.064Z · LW(p) · GW(p)
No, I would have to lay off most of my servants. I can't imagine living like that.
↑ comment by Lumifer · 2014-10-27T18:04:14.890Z · LW(p) · GW(p)
It takes roughly 2.5 million dollars invested prudently with a return of 7% per annum in order to live off savings.
That estimate needs at least two more vital numbers: the expected volatility of your returns and the expected inflation.
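(A minimal Monte Carlo sketch of that point; the 15% volatility and 3% inflation figures here are assumptions picked for illustration, not estimates.)

```python
import random

def survives(principal, spend, years, mean_ret, vol, inflation):
    """One simulated retirement: does the portfolio last `years` years?"""
    for _ in range(years):
        principal *= 1 + random.gauss(mean_ret, vol)  # volatile nominal return
        principal -= spend
        spend *= 1 + inflation                        # spending keeps pace with inflation
        if principal <= 0:
            return False
    return True

random.seed(0)
trials = 10_000
failures = sum(not survives(2_500_000, 175_000, 50, 0.07, 0.15, 0.03)
               for _ in range(trials))
print(f"failure rate over 50 years: {failures / trials:.0%}")
# Spending the full expected return leaves no buffer for bad years, so a
# large fraction of paths go broke despite the 7% mean return.
```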
↑ comment by RowanE · 2014-10-24T14:50:59.680Z · LW(p) · GW(p)
At the current amount I live off, $2.5 million would last me for 200 years, and that's if it returned 0% post-inflation. I might have lower expenses than "real adults", being a student, but unless you're assuming a family of 12, those numbers sound insane.
↑ comment by CAE_Jones · 2014-10-27T18:36:10.372Z · LW(p) · GW(p)
I could live on $20k/year easily, provided I stay in the same place. An ROI of ~3%/year on an investment of $1,000,000 would sustain me for life, provided the return at worst stays constant.
(Expenses: ~$900 in student loan payments, ~$400 food/utilities/internet/transit, = ~$1,300/month = ~$15,600/year. I'll also note that I am not drawing even half that in SSI at the moment, but if not for the student debt, SSI would be livable. This relies on not paying rent/mortgage/whatever you pay for housing. If housing is an issue, location obviously matters--$30k/year in Silicon Valley isn't worth much, but it might get you further in, say, St Louis. I specifically picked St Louis because it is both an excellent city for cheapskates and, at least according to some journalists there, becoming a tech town. I do not live there.)
Of course, if I had $1,000,000 to invest, I'd probably just spend the first $100k to wipe out most of the loans, and invest the rest. The interest drops a little, but the reduction in expenses more than makes up for it (expected gains are ~8k/year). In reality, the most likely reason that I wouldn't win forever if someone handed me a million dollars is that I have no experience with financial shenanigans and probably would fail completely at making these payments/investments happen. That, and the no moving thing (but that's a whole other can of worms).
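(A quick check of that comment's arithmetic, as a sketch; the only inputs are the ~3% return and the loan-payment figures given above.)

```python
# Pay off the loans or invest the full million? Checking the parent
# comment's "~8k/year" expected gain.
rate = 0.03

keep_loans   = 1_000_000 * rate           # 30,000/yr of investment income
payoff_first = 900_000 * rate + 900 * 12  # 27,000/yr plus 10,800/yr in saved payments

print(keep_loans, payoff_first, payoff_first - keep_loans)
# 30000.0 37800.0 7800.0 -- matching the "~8k/year" in the comment
```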
↑ comment by Evan_Gaensbauer · 2014-10-22T02:09:42.746Z · LW(p) · GW(p)
This is a question that's been bothering me for some months as well, ever since I encountered Early Retirement Extreme a few months ago.
We here in Vancouver have substantial overlap between the meetups for Mr. Money Mustache, effective altruism, rationality, and life extension. It's weird, because there are about a dozen people who are all friends, so we go to each other's meetups a lot.
Anyway, much of what the effective altruism community is came from what was popular in its precursor communities, and Less Wrong, academia, and the non-profit world never focused much on early retirement. If frugality isn't a value in effective altruism lifestyles yet, then let's see if we can't make that happen.
Replies from: Baisius
↑ comment by Baisius · 2014-10-22T08:15:16.393Z · LW(p) · GW(p)
What are some strategies for pursuing this? I considered trying to write something, but it seems that the central message of "people are kind of bad at spending money efficiently and you are a people and you are probably bad at it too" is hard to convey without being rude, and unlikely to succeed. Particularly when you're, in effect, going to be asking them to give their money away instead of saving it for retirement.
Replies from: Evan_Gaensbauer
↑ comment by Evan_Gaensbauer · 2014-10-22T10:46:47.787Z · LW(p) · GW(p)
Oh, well, I've actually received requests to write something up, just not for Less Wrong. For the record, I'm unsure why there isn't more about personal finance on Less Wrong, let alone within effective altruism. I figure readers of Less Wrong will be more amenable to being told they're bad at thinking about stuff. On top of that, if they're already intending to give away their money, it wouldn't be that much of a problem. Alternatively, if people do save enough for retirement, then they could spend several extra decades volunteering for effective charities for free.
Anyway, I figured we'd spread the ideas among the community as it already exists, and then the dozens, hundreds, or even thousands of people who integrate them into their own effective altruism lifestyles could brainstorm how to make them amenable to the general public.
Replies from: None, Lumifer, tog
↑ comment by [deleted] · 2014-10-23T02:25:43.716Z · LW(p) · GW(p)
I'm also unsure why there isn't more about personal finance / money management here. It seems like an excellent use-case for rationality: it's so trivially quantifiable that it comes pre-quantified, and it's something that a lot of people are bad at, so there ought to be room for improvement.
Even though LW's in-practice target audience is a demographic unusually susceptible to the meme that it's completely impossible to beat the market, investment is only one part of managing money. (And I wonder how many people with enough income to do so even invest in index funds.) Optimizing expenditures is another part; have there been any posts on how to, say, optimize diet for low cost with an at-least-superior-to-the-average-American level of nutrition? Or higher-level skills: how to cultivate the virtue of frugality and so on.
Replies from: Evan_Gaensbauer, Jackercrack
↑ comment by Evan_Gaensbauer · 2014-10-24T09:25:14.502Z · LW(p) · GW(p)
I like the way you think. Less Wrong has a culture, and a jargon (i.e., technical or artificial language specific to its culture). I don't mean that as an insult; I use it, so I'll work with it in producing content regarding this frugality side of personal finance. That is, I can term it 'optimizing expenditures', or 'cultivating a (set of) habits'. That may quicken in the minds of Less Wrong users the relevance of this information to rationality.
Of course, what we may find in the course of exploring money management is that Less Wrong itself can improve upon the advice of these websites for ourselves. That would be interesting.
↑ comment by Jackercrack · 2014-10-27T17:54:22.441Z · LW(p) · GW(p)
optimize diet for low cost with an at-least-superior-to-the-average-American level of nutrition
Well there's the Soylent idea, though I don't think it was from LW. Soylent is a powder containing 100% of all required daily nutrients, used to make shakes. In theory, after a number of iterations, it should be the healthiest food possible for humans to consume, as well as being fairly cheap.
↑ comment by Lumifer · 2014-10-22T15:29:47.555Z · LW(p) · GW(p)
I'm unsure why there isn't more about personal finance on Less Wrong,
For some reason a noticeable part of LW has decided that the answer to all personal finance questions is two words -- "index funds" -- and tends to be hostile to suggestions that finance is a bit more complex than that.
Note that "frugal living" and "personal finance" are quite different topics. EAs, for example, are interested in the former but not in the latter as they donate their free cash flow and so don't have to manage it.
I don't really see the early retirement movement being compatible with EA...
Replies from: Baisius, ChristianKl, army1987, ESRogs, Evan_Gaensbauer, RowanE
↑ comment by Baisius · 2014-10-22T20:00:31.040Z · LW(p) · GW(p)
To me, it's more about financial independence than early retirement. Financial independence gives you options to do a lot of different things: "retire" and volunteer for an effective charity, continue working and donate 100% of your income to charity, continue working and balloon your nest egg to establish a trust to be donated to an effective charity upon your death, etc. The knowledge that you are 100% financially independent gives tremendous security that (as well as its other benefits, such as decreasing stress) allows someone to comfortably and without consideration give large amounts of money.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-22T20:43:00.816Z · LW(p) · GW(p)
it's more about financial independence than early retirement
In this context I treat them as synonyms.
allows someone to comfortably and without consideration give large amounts of money.
Ahem. That is an excellent way to stop being financially independent in short order.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-10-27T17:57:01.482Z · LW(p) · GW(p)
I believe that "giving large amounts of money without consideration" in this context does not include the part that you need for the financial independence.
In other words, if you need X money to be financially independent, and you have X+Y, you are free to spend up to Y in whatever way you wish, including e.g. donating the whole Y to a charity or investing it in a new project, even if for an average person spending Y this way would seem insane.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-27T18:11:46.380Z · LW(p) · GW(p)
if you need X money to be financially independent, and you have X+Y
If you're making money with the goal of being financially independent you're done when you have X so you can and should stop. Where does Y come from?
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-10-28T00:28:00.803Z · LW(p) · GW(p)
I don't agree with the "should stop" part.
Until you reach X, you work because you have to. To some degree you are motivated by fear. You probably take jobs you wouldn't take if you had been born into a billionaire family.
After you reach X, the fear motive is gone. But you can still do things for other reasons, for example because they are fun, or because you feel competitive. Some of those things may bring you more money.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-28T15:05:09.865Z · LW(p) · GW(p)
OK, so maybe you shouldn't stop, but if you're not primarily motivated by making money any more, the likelihood that whatever you do will incidentally bring you noticeably large amounts of money Y is not very high.
Replies from: Viliam_Bur, Baisius
↑ comment by Viliam_Bur · 2014-10-29T16:38:17.956Z · LW(p) · GW(p)
There are different kinds of "motivation by money". Some people are in a situation where if they don't make enough money, their children will starve. Some people already have all they need, and more money is just some kind of "score" to measure how successful they are in their projects; to compete against other people in similar situation.
Some activities bring average money reliably. Some activities have a small chance of huge success, and a big chance of nothing. Not having to make money frees your hands to do the latter kind of activities, without putting your family in danger of starvation. For example, you can spend all your day writing a book, with the goal of becoming famous. If you fail, no problem. If you succeed, you can make a lot of money.
Yes, the probability of such an outcome is small, because it is P(doing something like this if you already have enough money) × P(succeeding).
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-29T16:51:24.553Z · LW(p) · GW(p)
Yes, the probability of such an outcome is small, because it is P(doing something like this if you already have enough money) × P(succeeding).
So, we agree that the probability is small.
And, actually, it's P(doing something like this if you already have enough money) × P(succeeding) × P(what you like to do has high-variance outcomes and could generate a lot of money). Maybe what you really like is just long walks on the beach :-)
↑ comment by ChristianKl · 2014-10-22T18:15:17.454Z · LW(p) · GW(p)
EAs, for example, are interested in the former but not in the latter as they donate their free cash flow and so don't have to manage it.
I don't think the majority of people within EA donate all their free cash flow and save nothing.
Replies from: tog
↑ comment by A1987dM (army1987) · 2014-10-25T19:06:36.307Z · LW(p) · GW(p)
For some reason a noticeable part of LW has decided that the answer to all personal finance questions is two words -- "index funds" -- and tends to be hostile to suggestions that finance is a bit more complex than that.
Isn't the fact that finance is complex the very reason why unless you're an expert you probably had better play it safe than try to outsmart the market and risk getting burned?
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-27T01:12:44.372Z · LW(p) · GW(p)
you probably had better play it safe
What makes you think that investing in what is typically large-cap US equity is "playing it safe"?
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-10-27T12:41:46.955Z · LW(p) · GW(p)
There are index funds that also include smaller-cap equity, non-US equity, and bonds. And even a large-cap US equity index fund is probably better than gambling except for the small minority of people who know what they're doing.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-27T15:02:26.679Z · LW(p) · GW(p)
There are index funds that also include smaller-cap equity, non-US equity, and bonds.
Of course, but LW rarely gets into specifics of which index funds, other than preferring low-cost ones.
is probably better than gambling
Heh. Do you think there might be a fallacy involved in this argument?
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-10-29T12:39:35.508Z · LW(p) · GW(p)
is probably better than gambling
Heh. Do you think there might be a fallacy involved in this argument?
Sure, it's not like those are the only two options. Then again, hiding cash under your mattress probably isn't better than index funds either.
My point is not that investments better than index funds can't exist, it's that it's hard for most people to know ahead of time which ones those will be.
Replies from: Lumifer
↑ comment by Lumifer · 2014-10-29T15:56:50.790Z · LW(p) · GW(p)
My point is not that investments better than index funds can't exist
An "index fund" is not an investment. It's a large class of very diverse investments with different characteristics.
Reading charitably, the advice to invest in an index fund really says "your investment portfolio should be diversified". That is generally true, but woefully inadequate as a sole guideline for figuring out where to put your money.
↑ comment by ESRogs · 2014-10-24T18:03:31.804Z · LW(p) · GW(p)
EAs, for example, are interested in the former but not in the latter as they donate their free cash flow and so don't have to manage it.
I think this is a mischaracterization, as 1) I don't think giving everything above a certain threshold is a majority behavior (note that GWWC's pledge only requires you to give 10%), and 2) EAs discuss investing for the purposes of giving more later.
↑ comment by Evan_Gaensbauer · 2014-10-24T09:33:57.976Z · LW(p) · GW(p)
What I meant was that effective altruism might benefit from those who don't retire, per se, but become financially independent early in life, and can remain so for the remainder of their lives, so that they can spend the rest of their careers volunteering for effective causes and organizations. Though I can't find the particular blog post right now, I recall Peter Hurford pondering that if he concluded doing direct work in effective altruism was the path for him, instead of earning to give, he might keep working a high-paying job for some time regardless. That way, he could gain valuable experience, and use the money he earns to eventually become financially independent, i.e., 'retire early'. Then, when he is age forty or something, he can do valuable work as a non-profit manager or researcher or personal assistant for free.
I can't recall if he's the only person who has considered this career model, but maybe some should take a closer look at it. This is how early retirement beyond frugal living habits might benefit effective altruism.
Replies from: Lumifer, Baisius
↑ comment by Lumifer · 2014-10-24T15:51:12.302Z · LW(p) · GW(p)
become financially independent early in life, and can remain so for the remainder of their lives, so that they can spend the rest of their careers volunteering for effective causes and organizations.
The problem is that you have to show this is better than just giving all your "excess" money to the effective causes right away and continuing to work in the normal manner.
Replies from: Evan_Gaensbauer, Baisius
↑ comment by Evan_Gaensbauer · 2014-10-25T05:43:20.038Z · LW(p) · GW(p)
Well, nobody from within effective altruism has written much up about this yet. It's not something I'm considering doing soon. Until someone does, I doubt others will think about it, so it's a non-issue. If some take this consideration for their careers seriously, then that's a problem they'll need to assess, hopefully publicly so feedback can be given. At any rate, you make a good point, so I won't go around encouraging people to do this willy-nilly, or something.
↑ comment by Baisius · 2014-10-26T20:58:52.232Z · LW(p) · GW(p)
keep working a high-paying job for some time regardless. That way, he could gain valuable experience, and use the money he earns to eventually become financially independent, i.e., 'retire early'. Then, when he is age forty or something, he can do valuable work as a non-profit manager or researcher or personal assistant for free.
This is a career path I am very seriously considering. At the very least, I will continue to invest/save my money, if for no other reason than that it doesn't seem intuitively obvious to me that I should prefer saving 100 lives this year to 104 lives next year. Add to this that I expect the EA movement to more accurately determine which charities are the most effective in future years (MIRI is highly uncertain to be the most effective, but could potentially be much more effective), and subtract the fact that donations to current effective charities will potentially eliminate some low-hanging fruit. After all of that, I suspect it is probably a little more optimal to save money and donate later than to donate now. However, I still can't shake the feeling that I'm just writing reasons for my bottom line of not giving my money away. This is a difficult question that there have been a number of threads on, and I don't claim to have a good answer to it, only my answer.
↑ comment by tog · 2014-10-24T06:52:08.765Z · LW(p) · GW(p)
Oh, well, I've actually received requests to write something up, just not for Less Wrong.
I think that'd be great, Evan. In the UK I make extensive use of the excellent http://www.moneysavingexpert.com/ - I couldn't find anything similar for Canada, alas. And there are a bunch more topics to cover. One option would be for you to put what you write on a wiki (e.g. the nascent EA one) so that others could help build it up.
Replies from: Evan_Gaensbauer
↑ comment by Evan_Gaensbauer · 2014-10-24T09:37:08.162Z · LW(p) · GW(p)
I haven't read too much of these websites myself, but I intend to, as basically all my friends (whom you know anyway) are eager to have me write this up. If I do so, I'll make a separate version for the effective altruism forum. I invite you to collaborate or review, either way. I'll let you know when I get started on this.
↑ comment by jpl68 · 2014-10-22T09:23:57.670Z · LW(p) · GW(p)
Are you asking why EAs aren't more concerned with frugality?
Replies from: Baisius
↑ comment by Baisius · 2014-10-22T19:54:10.657Z · LW(p) · GW(p)
Yes.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-10-25T19:05:51.104Z · LW(p) · GW(p)
Well, Julia Wise and Jeff Kaufman are.
↑ comment by drethelin · 2014-10-28T05:31:57.794Z · LW(p) · GW(p)
Probably because it's largely composed of or at least represented by the kind of people who REALLY like living in places like NYC and the Bay Area, which are the opposite of frugal.
As for early retirement, there's something of an obsession with maximizing productivity as well as earning to give, both of which run counter to retirement.
Replies from: Baisius
↑ comment by Baisius · 2014-10-28T07:08:52.190Z · LW(p) · GW(p)
Probably because it's largely composed of or at least represented by the kind of people who REALLY like living in places like NYC and the Bay Area, which are the opposite of frugal.
This is actually a point I have made to myself about the movement.
comment by Evan_Gaensbauer · 2014-10-22T04:15:42.923Z · LW(p) · GW(p)
The Rationalist Community: Catching Up to Speed
Note: the below comment is intended for my friend(s) who is/are not on Less Wrong yet, or presently, as an explanation of how the rationality community has changed in the intervening years between when Eliezer Yudkowsky finished writing his original sequences and 2014. This is an attempt to bridge procedural knowledge gaps. Long-time users, feel free to comment below with suggestions for changes, or additions.
Off of Less Wrong, the perspective of the rationality community has changed in light of the research, and expansion of horizons, by the Center For Applied Rationality. A good introduction to these changes is the essay Three Ways CFAR Has Changed My View of Rationality, written by Julia Galef, the president of CFAR.
On Less Wrong itself, Scott Alexander has written what this community of users has learned together in an essay aptly titled Five Years and One Week of Less Wrong.
The Decline of Less Wrong was a discussion this year about why Less Wrong has declined, where the rationalist community has moved, and what should, or shouldn't be done about it. If that interests you, the initial post is great, and there is some worthy insight in the comments as well.
However, if you want to catch up to speed right now, then check out the epic Map of the Rationalist Community from Slate Star Codex.
For a narrower focus, you can search the list of blogs on the Less Wrong wiki, which are sorted alphabetically by author name, and have a short list of topics each blog typically covers.
Finally, if you're (thinking of getting) on Tumblr, check out the Rationalist Masterlist, which is a collated list of Tumblrs from (formerly) regular contributors to Less Wrong, and others who occupy the same memespace.
Replies from: Alicorn, ruelian
↑ comment by Alicorn · 2014-10-22T07:12:41.854Z · LW(p) · GW(p)
It's new, but it seems worth mentioning Rationalist Tutor specifically out of the tumblrs for newbies.
comment by Sean_o_h · 2014-10-25T10:32:31.224Z · LW(p) · GW(p)
A question I've been curious about: to those of you who have taken modafinil regularly/semi-regularly (as opposed to as a one-off) but have since stopped: why did you stop? Did it stop being effective? Was it no longer useful for your lifestyle? Any other reasons? Thanks!
Replies from: drethelin
↑ comment by drethelin · 2014-10-28T05:24:09.645Z · LW(p) · GW(p)
I got more side effects when I took it regularly as opposed to taking it every now and then. Headaches and so on.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2014-10-28T21:58:03.716Z · LW(p) · GW(p)
Do you have an opinion on whether the side effects should be thought of as sleep deprivation?
Replies from: drethelin
↑ comment by drethelin · 2014-10-29T18:03:08.376Z · LW(p) · GW(p)
Not really. I wasn't taking it at night to reduce sleep but when I got up in the morning to try and increase cognitive powers and whatnot.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2014-10-29T19:22:06.112Z · LW(p) · GW(p)
I probably should have added more detail to my question. A lot of people who take it in the morning report reduced sleep, maybe an hour less. It has a half-life of 16 hours, so that's not too surprising.
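(The half-life figure translates directly into how much of a morning dose is still circulating at bedtime; a minimal sketch using only the 16-hour number quoted above.)

```python
def fraction_remaining(hours, half_life=16):
    """Fraction of a dose still present after `hours`, given its half-life."""
    return 0.5 ** (hours / half_life)

# Dose at 8am, bedtime at 11pm -> 15 hours later:
print(f"{fraction_remaining(15):.0%}")  # ~52% of the dose is still active
```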
comment by pan · 2014-10-21T19:25:42.637Z · LW(p) · GW(p)
In an old article by Eliezer we're asked what we would tell Archimedes through a chronophone. I've found this idea to actually be pretty instructive if I instead ask what I would tell myself through a chronophone if I could call back only a few years.
The reason the chronophone idea is useful is that it forces you to speak in terms of 'cognitive policies', since anything relevant to your own time period will be translated into something relevant to the time period you're calling. In this way, if I think about what I would tell my former self, I ask: 1) what mistakes did I make when I was younger? 2) what sort of cognitive policies or strategies would have allowed me to avoid those mistakes? and finally 3) am I applying the analogue of those strategies in my life today?
Replies from: Evan_Gaensbauer
↑ comment by Evan_Gaensbauer · 2014-10-28T08:18:29.213Z · LW(p) · GW(p)
If you did this as a case study or thought experiment, and published it as a discussion post, that would be swell. Similar articles are written by other users of Less Wrong, and they're usually well-appreciated efforts, as far as I can tell. Your three questions are a good starting point, so I might write this as a post myself. Alternatively, if it's not worthy of its own post, anyone doing this exercise on/for themselves should definitely share it in the (group) rationality diary.
comment by [deleted] · 2014-10-22T02:26:31.110Z · LW(p) · GW(p)
I am now in the Bay Area until the 6th of November, when I fly back to Europe.
Searching for new cool people to meet, especially from the rationalist community. Open to couch surfing and parties.
comment by the-citizen · 2014-10-20T10:38:15.270Z · LW(p) · GW(p)
So we have lots of guides on how to be rational... but do we have any materials that consider what makes a person decide to pursue rationality and consciously decide to adopt rationality as an approach to life?
Recently I was talking to someone and realised they didn't accept that a rational approach was always the best one, and it was harder than I expected to come up with an argument that would be compelling for someone who didn't think rationality was all that worthwhile... not necessarily irrational, but just not a conscious follower/advocate of it. I think a lot of the arguments for it are actually quite philosophical or, in some people's case, mathematical. Got me thinking, what actually turns someone into a rationality fan? A rational argument? Oh wait....
I've got some ideas, but nothing I'd consider worth writing down at this stage... is there anything to prevent wheel reinvention?
Replies from: closeness, Emile, RowanE, closeness, ruelian, Metus, Jackercrack, ChristianKl, Richard_Kennaway
↑ comment by closeness · 2014-10-20T12:36:46.345Z · LW(p) · GW(p)
People who look for ways to become more rational are probably far more rational than average already.
Replies from: None, None, the-citizen
↑ comment by [deleted] · 2014-10-22T19:55:42.224Z · LW(p) · GW(p)
I would disagree and say that people who look for ways to "become rational" in the LessWrong sense are just exposed to a class of internet-based advice systems (like lifehacker and similar) that promote the idea that you can "hack" things to make them better. Rationality is the ultimate lifehack; it's One Weird Trick to Avoid Scope Insensitivity.
Outside of this subculture, people look for ways to improve all the time; people even look for ways to improve globally all the time. The way they do this isn't always "rational," or even effective, but if rationality is winning, it's clear that people look for ways to win all the time. They might do this by improving their communication skills, or their listening skills, or trying to become "centered" or "balanced" in some way that will propagate out to everything they do.
↑ comment by the-citizen · 2014-10-20T13:22:51.738Z · LW(p) · GW(p)
Agreed. So basically, what made them look?
Replies from: hyporational
↑ comment by hyporational · 2014-10-21T04:27:50.192Z · LW(p) · GW(p)
Since they were more rational already they could observe the rational approach had better outcomes. Irrational people presumably can't do that. You'd have to appeal to their irrationality to make a case for rationality and I'm not sure how that'd work out.
↑ comment by RowanE · 2014-10-20T12:25:18.518Z · LW(p) · GW(p)
I expect this is mainly a disagreement about definitions? Many people think of "rationality" as referring to system-2 type thinking specifically, which isn't universally applicable and wouldn't actually be the best approach in many situations. Whereas the LessWrong definition is that Rationality is Systematized Winning, which may call for intuition and system-1 at times, depending on what the best approach actually is. With that definition given, I don't think selling "rationality" to people is something that needs to be done - they might then start dismissing the particular rationality technique you're trying to get them to use as "irrational", but presumably you're ready for that argument.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-20T13:22:00.627Z · LW(p) · GW(p)
So you mean the person I was talking to had a different definition of rationality? I wonder whether most people feel the definition is quite subjective? That would actually be quite troubling when you think about it.
I actually intensely dislike that way of expressing it, mainly because argumentative competitiveness is a massive enemy of rationality. For me rationality probably comes down to instrumental truthiness :-)
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-10-20T13:39:28.538Z · LW(p) · GW(p)
I wonder whether most people feel the definition is quite subjective?
"subjective" comes with a bunch of connotations that aren't applicable.
If you look at the paper that defined evidence-based medicine, you find that it talks about deemphasizing intuition. In the 20 years since that paper got published, we have learned a lot more about intuition, including that it is actually quite useful. LessWrong rationality is a 21st-century ideology that takes into account new ideas. It's not what someone would have meant 20 years ago by "rationality", because certain knowledge didn't exist 20 years ago.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-21T07:48:54.852Z · LW(p) · GW(p)
OK, but perhaps there is a core definition that determines what new aspects can be integrated.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-10-22T12:20:59.445Z · LW(p) · GW(p)
http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/ should be worth reading to get up to speed on the current LW ideology.
CFAR's vision page is also a good summary of what this community considers rationality to be about.
You will find that Scott's article summarizing the knowledge that LW produced doesn't even use the word logic. The CFAR vision uses the word once, but only near the bottom of the page.
One of the core insights of LW is that teaching people who want to be rational to actually be rational isn't easy. We don't have a simple guide to rationality that we can give people so that they become rational.
When it comes to winning over other people, most people do have goals that they care about. If you tell the bodybuilder about the latest research on supplements or muscle building, then he's going to be interested. Having that knowledge makes him more effective at the goals that he cares about. For him, that knowledge isn't useless nerdy stuff. As far as rationality is about winning, the bodybuilder cares about winning in the domain of muscle building.
Of course you also have to account for status effects. Some people pretend to care about certain goals but are not willing to actually pursue those goals efficiently. There isn't any point where someone has to self-identify as a rationalist.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-24T12:29:07.617Z · LW(p) · GW(p)
Thanks that's interesting. Scott is always a good read.
Again, I'd have to disagree that the "winning" paradigm is useful in encouraging rational thought. Irrational thought can in many instances at least appear to be a good strategy for what the average person understands as "winning", and it additionally evokes a highly competitive psychological state that is a major source of bias.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-10-25T01:17:35.466Z · LW(p) · GW(p)
If you consider good strategies to be irrational, then you mean something different by "rational" than what the term usually refers to on LW.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-25T07:08:34.786Z · LW(p) · GW(p)
A used car salesperson convincing themselves that what they're selling isn't a piece of crud is an example of where irrationality is a "good" (effective) strategy. I don't think that's what we are trying to encourage here. That's why I say instrumental truthiness - the truth part is important too.
I also maintain that focus on "winning" is psychologically in conflict with truth seeking. Politics being the mind-killer is the best example.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-10-25T14:14:54.050Z · LW(p) · GW(p)
I think the orthodox LW view would be that this used car salesperson might have an immoral utility function but that he isn't irrational.
I also maintain that focus on "winning" is psychologically in conflict with truth seeking.
That basically means that sometimes the person who seeks the truth doesn't win. That outcome isn't satisfactory to Eliezer. In Rationality is Systematized Winning he writes:
If the "irrational" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is "rational".
Of course you can define rationality for yourself differently but it's a mistake to project your own goals on others.
A recent article titled Truth, it's not that great got 84% upvotes on LW.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-26T05:30:45.126Z · LW(p) · GW(p)
I am surprised that a significant group of people think that rationality is inclusive of useful false beliefs. Wouldn't we call LW an effectiveness forum, rather than a rationalist forum, in that case?
That basically means that sometimes the person who seeks the truth doesn't win.
I think you're reading too much into that one quite rhetorical article, but I acknowledge he prioritises "winning" quite highly. I think he ought to revise that view. Trying to win with false beliefs risks not achieving your goals while remaining oblivious to that fact. Like a mad person killing their friends because he/she thinks they've turned into evil dog-headed creatures or some such (exaggeration to illustrate my point).
Of course you can define rationality for yourself differently but it's a mistake to project your own goals on others.
Fair point. And maybe you're right that I'm in the minority... I'm still not yet certain. I do note that upvotes do not indicate agreement, only a feeling that the article is an interesting read, etc. Also, I note many comments disagree with the article. It warrants further investigation for me, though.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2014-10-26T09:20:19.526Z · LW(p) · GW(p)
I am surprised that a significant group of people think that rationality is inclusive of useful false beliefs.
Often they use “instrumental rationality” for that meaning and “epistemic rationality” for the other one. Searching this site for “epistemic instrumental” returns some relevant posts.
↑ comment by closeness · 2014-10-20T11:08:38.343Z · LW(p) · GW(p)
I think this is very important. I myself noticed that when I was younger, the longer I was unemployed, the more I started reading about socialist ideas and getting into politics. Then when I started working again, it went out the window and I moved on to learning about other things.
Similarly, maybe I'm here because I just happened to be in the mood to read some fan fiction that day?
↑ comment by ruelian · 2014-10-22T22:32:08.213Z · LW(p) · GW(p)
When explaining/arguing for rationality with the non-rational types, I have to resort to non-rational arguments. This makes me feel vaguely dirty, but it's also the only way I know of to argue with people who don't necessarily value evidence in their decision making. Unsurprisingly, many of the rationalists I know are unenthused by these discussions and frequently avoid them because they're unpleasant. It follows that the first step is to stop avoiding arguments/discussions with people of alternate value systems, which is really just a good idea anyway.
Replies from: Lumifer, None
↑ comment by [deleted] · 2014-10-23T02:29:05.824Z · LW(p) · GW(p)
can we please call them Muggles or something?
Cultivating a group identity and a feeling of superiority to the outgroup will definitely be conducive to clear-headed analysis of tactics/strategies for winning regardless of their origins/thedish affiliations/signals, and to evaluation of whether aspects of the LW memeplex are useful for winning.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-24T12:16:16.651Z · LW(p) · GW(p)
Mudblood detected!!!
:-)
Seriously though, agree agree.
↑ comment by Metus · 2014-10-20T11:47:23.025Z · LW(p) · GW(p)
Got me thinking, what actually turns someone into a rationality fan?
I feel better about my actions when I can justify them with arguments.
But to be honest, I have never met someone who regards rationality as not worthwhile. Or maybe I have just forgotten the experience.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-20T12:03:03.550Z · LW(p) · GW(p)
Well it usually takes the form of people telling you that being highly rational is "over-analysing", or that logic is cold and ignores important emotional considerations of various kinds, or that focusing on rationality ignores the reality that people aren't machines, or that they don't want to live such a cold and clinical life, etc. Basically it's just "I don't want to be that rational". So I wonder, what makes people honestly think "I want to be very rational"? (grammar apologies lol)
Replies from: Metus, Evan_Gaensbauer
↑ comment by Metus · 2014-10-20T12:08:07.412Z · LW(p) · GW(p)
Ah, I have met those kind of people. Usually I get the same feeling as when someone is debating politics, leading me to assume that the rejection of rationality is signaling belonging to a certain tribe, one where it is important that everyone feel good about themselves or such.
Personally, I was raised to think and think critically, so I can't draw from personal experience. What convinced the ancient Greeks to embrace rationality, to start questioning the world around them? Maybe we should look there.
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-20T12:21:53.932Z · LW(p) · GW(p)
Yeah, it's useless to try to rationally argue for rationality with someone who doesn't authentically accept the legitimacy of rationality in the first place. I guess all of us are like this to some degree, but some more than others for certain.
Not a bad suggestion. I know a little about the Ancient Greek philosophers, though nothing specific springs to mind.
↑ comment by Evan_Gaensbauer · 2014-10-28T08:25:01.061Z · LW(p) · GW(p)
I believe there are people like that, but how can we tell them apart from people who appropriately take into account their emotions in their decision-making and/or can't explain how or why they're rational, even though they really are?
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-29T12:07:30.564Z · LW(p) · GW(p)
I don't 100% follow your comment, but I find the content of those links interesting. Care to expand on that thought at all?
Replies from: Evan_Gaensbauer
↑ comment by Evan_Gaensbauer · 2014-10-30T06:01:00.258Z · LW(p) · GW(p)
Sometimes we might really, actually be over-analyzing things, and what our true goals are may be better discovered by paying more attention to what System 1 is informing us of. If we don't figure this out for ourselves, it might be other rational people who tell us about this. If someone says:
"If you're trying to solve this problem, I believe you're over-analyzing it. Try paying more attention to your feelings, as they might indicate what you really want to do."
how are we supposed to tell if what they're saying is a:
- someone trying to genuinely help us solve our problem(s) in a rational way
or
- someone dismissing attempts at analyzing a problem at all?
It can only be one or the other. Now, someone might not have read Less Wrong, but that doesn't preclude them from noticing when we really are over-analyzing a problem. When someone responds like this, how are we supposed to tell if they're just strawmanning rationality, or really trying to help us achieve a more rational response?
This isn't some rhetorical question for you. I've got the same concerns as you, and I'm not sure how to ask this particular question better. Is it a non-issue? Am I using confusing terms?
Replies from: the-citizen
↑ comment by the-citizen · 2014-11-01T06:03:58.922Z · LW(p) · GW(p)
I like the exploration of how emotions interact with rationality that seems to be going on over there.
For me over-analysis would be where further analysis is unlikely to yield practically improved knowledge of options to solve the problem at hand. I'd probably treat this as quite separate from bad analysis or the information supplied by instinct and emotion. In a sense then emotions wouldn't come to bear on the question of over-analysis generally. However, I'd heartily agree with the proposition that emotions are a good topic of exploration and study because they provide good option selection in certain situations and because knowledge of them might help control and account for emotionally based cognitive bias.
I guess the above would inform the question of whether the person you describe is rationally helping or just strawmanning. My sense is that in many cases the term is thrown around as a kind of defence against the mental discomfort that deep thought and the changing of ideas might bring, but perhaps I'm being too judgemental. Other times of course the person is actually identifying hand-wringing and inaction that we're too oblivious to identify ourselves.
In terms of identification of true goals, I wonder if the contextuality and changeability of emotion would render it a relevant but ultimately unreliable source for deriving true goals. For example, in a fierce conflict it's fairly tempting to perceive your goals as fundamentally opposed or opposite to your opponent's, but I wonder if that's really a good position to form.
In the end though, people's emotions are relevant to their perception of their goals, so I suspect we do have to address emotions in the case for rationality.
Does CFAR have its own discussion forum? I can't see one so far. Do you know what CFAR thinks about the "winning" approach held by many LWers?
Replies from: Evan_Gaensbauer
↑ comment by Evan_Gaensbauer · 2014-11-01T07:22:41.630Z · LW(p) · GW(p)
CFAR has its own private mailing list, which isn't available to individuals who haven't attended a CFAR event before. As a CFAR alumnus, though, I can ask them your questions on your behalf. If I get a sufficient response, I can summarize their insight in a discussion post. I believe CFAR alumni are 40% active Less Wrong users, and 60% not. The base of CFAR, i.e. its staff, may have a substantially different perspective from the hundreds of workshop alumni who compose the broader community.
Replies from: the-citizen
↑ comment by the-citizen · 2014-11-01T08:30:28.059Z · LW(p) · GW(p)
I think I'd be quite interested to know what % of CFAR people believe that rationality ought to include a component of "truthiness". Anything that could help on that?
↑ comment by Jackercrack · 2014-10-27T18:59:01.793Z · LW(p) · GW(p)
Let's see how basic I can go with an argument for rationality without using anything that needs rationality to explain. First the basic form:
Rationality is an effective way of figuring out what is and isn't true. Therefore rational people end up knowing the truth more often. Knowing the truth more often helps you make plans that work. Plans that work allow you to acquire money/status/power/men/women/happiness.
Now to dress it up in some rhetoric:
My friend, have you ever wished you could be the best you? The one who knows the best way to do everything, cuts to the truth of the matter, saves the world and then gets the girl/wins the man? That's what Rationalism looks like, but first one must study the nature of truth in order to cleave reality along its weaknesses and then bend it to your whims. You can learn the art a step stronger than science, the way that achieves the seemingly impossible. You can build yourself into that best you, a step at a time, idea upon idea until you look down the mountain you have climbed and know you have won.
There, I feel vaguely oily. Points out of 10?
Replies from: the-citizen
↑ comment by the-citizen · 2014-10-29T11:55:08.454Z · LW(p) · GW(p)
I think I'm broadly supportive of your approach. The only problem I can see is that most people think it's better to try to do stuff, as opposed to getting better at doing stuff. Rationality is a very generalised and very long-term approach and payoff. Still, I'd not reject your approach at this point.
Another issue I find interesting is that several people have commented recently on LW that (instrumental) rationality isn't about knowing the truth but simply about achieving goals most effectively. They claim this is the focus of most LWers too. As if "truthiness" is only a tool that can even be discarded when necessary. I find that view curious.
Replies from: Jackercrack
↑ comment by Jackercrack · 2014-10-29T17:46:24.309Z · LW(p) · GW(p)
I'm not sure they're wrong, to be honest (assuming an average cross-section of people). Rationality is an extremely long-term approach and payoff; I am not sure it would even work for the majority of people, and if it does, I'm not sure whether it reaches diminishing returns compared to other strategies. The introductory text (the Sequences) is 9,000 pages long, and the supplementary texts (Kahneman, Ariely, etc.) take it up to 11,000. I'm considered a very fast reader, and it took me 3 unemployed months of constant reading to get through. For a good period of that time I was getting a negative return; I became a worse person. It took a month after that to end up net positive. I don't want to harp on about unfair inherent advantages, but I just took a look at the survey results from last year and the lowest IQ was 124.6. This stuff could be totally ineffective for average people and we would have no way of knowing. Simply being told the best path for self-improvement or effective action by someone who was a rationalist, or just someone who knows what they're doing, a normal expert in whatever field, may well be more optimal for a great many people. Essentially data-driven life coaching. I can't test this hypothesis one way or the other without attempting to teach an average person rationalism, and I don't know if anyone has done that, nor how I would find out if they had.
So far as instrumental rationality not being at its core about truth, to be honest I broadly agree with them. There may be a term in my utility function for truth, but it is not a large term, not nearly so important as the term for helping humanity or the one for interesting diversions. I seek truth not as an end in itself, but because it is so damn useful for achieving other things I care about. If I were in a world where my ignorance would save a life with no downside, while my knowledge had no long-term benefit, then I would stay ignorant. If my ignorance was a large enough net benefit to me and others, I would keep it. In the arena of CEO compensation, for example, increased transparency leads to runaway competition between CEOs to have the highest salary, shafting everyone else. Sure, the truth is known, but it has only made things worse. I'm fairly consequentialist like that.
Note that in this situation I'd still call for transparency on war crimes, torture, and so on. The earlier the better. If a person knows that their actions will become known within 5 years and will affect them personally, that somewhat constrains them against committing an atrocity. The people making the decisions obviously need accurate data to make said decisions with in all cases, but the good or damage caused by the public availability of that data is another thing entirely. Living in a world where everyone was a rationalist and the truth never caused problems would be nice, but that's the should-world, not the is-world.
It so happens that in this world, using these brains we have, seeking the truth and not being satisfied with a lie or a distortion is an extremely effective way to gain power over the world. With our current hardware, truth seeking may be the best way to understand enough to get things done without self-deception, but seeking the truth itself is incidental to the real goal.
Replies from: the-citizen
↑ comment by the-citizen · 2014-11-01T05:44:33.127Z · LW(p) · GW(p)
Thanks for the interesting comments. I've not been on LW for long, and so far I'm being selective about which sequences I'm reading. I'll see how that works out (or will I? lol).
I think my concern with the truthiness part of what you say is that there is an assumption that we can accurately predict the consequences of believing a non-truth. I think that's rarely the case. We are rarely given personal corrective evidence, though, because it's the nature of self-deception that we're oblivious to having screwed up. Applying a general rule of truthiness is a far more effective approach imo.
Replies from: Jackercrack
↑ comment by Jackercrack · 2014-11-01T10:17:03.247Z · LW(p) · GW(p)
Agreed, a general rule of truthiness is definitely a very effective approach, and probably the most effective approach, especially once you've started down the path. So far as I can tell, stopping halfway through is... risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). At the time of starting, I thought it was essentially the age-old trade-off between knowledge and happy ignorance, but it appears at some point of reading the stuff I hit critical mass, and now I'm starting to see how I could use knowledge to have more happiness than if I was ignorant, which I wasn't expecting at all. Which sequences are you starting with?
By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, implying 95% are above 111 and 99% above 103.5. It changes my first argument a little, but I think the main core is still sound.
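(A quick sanity check of those tail percentages under a normal assumption; the mean and SD are the survey numbers quoted in the comment above, everything else is stdlib.)

```python
from statistics import NormalDist

iq = NormalDist(mu=138.2, sigma=13.6)  # survey mean and SD from the comment

for threshold in (111, 103.5):
    above = 1 - iq.cdf(threshold)
    print(f"P(IQ > {threshold}) = {above:.1%}")
# P(IQ > 111)   = 97.7% -- "95%" in the comment is roughly mean - 2 SD
# P(IQ > 103.5) = 99.5% -- close to the quoted "99%"
```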
Replies from: the-citizen
↑ comment by the-citizen · 2014-11-01T15:07:23.092Z · LW(p) · GW(p)
Well, I've done Map & Territory and have skimmed through random selections of other things. Pretty early days, I know! So far I've not run into anything particularly objectionable for me, or conflicting with any of the decent philosophy I've read. My main concern is this truth-as-incidental thing. I just posted on this topic: http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/
Replies from: Jackercrack
↑ comment by Jackercrack · 2014-11-01T15:46:34.139Z · LW(p) · GW(p)
Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea, it is just not a goal in and of itself. Fortunately, no-one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth seeking is an instrumental value, in that it is useful for reaching the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.
You may also want to read The Tragedy of Group Selectionism.
Replies from: the-citizen
↑ comment by the-citizen · 2014-11-03T04:25:46.035Z · LW(p) · GW(p)
Thanks for the group selection link. Unfortunately I'd have to say, to the best of my non-expert judgement, that current trends in the field disagree somewhat with Eliezer in this regard. The 60s group selection was definitely overstated and problematic, but quite a few biologists feel that this resulted in the idea being ruled out entirely in a kind of overreaction to the original mistakes. Even Dawkins, who has traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it's been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but the result of my own investigation makes me think that the biological component of altruism is best explained by a mixed model of group selection, kin selection, and reciprocation. We additionally haven't really got a reliable map of the nature/nurture split of altruism either, so I suspect the field will "evolve" further.
I've read the values argument. I acknowledge that no one is claiming the truth is BAD exactly, but my suggestion here is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound upon each other without our awareness. Crudely put, lazy approaches to the truth come unstuck, but we never realise it. I take it my post has failed to communicate that aspect of the argument clearly? :-(
Oh, I'll add that I agree we agree in most regards on the topic.
Replies from: Jackercrack
↑ comment by Jackercrack · 2014-11-03T10:44:11.928Z · LW(p) · GW(p)
Really? I was not aware of that trend in the field; maybe I should look into it.
Well, at least I understand you now.
↑ comment by ChristianKl · 2014-10-20T12:11:41.205Z · LW(p) · GW(p)
So we have lots of guides on how to be rational...
Do we? I don't think that's the case. We know that being rational is quite hard and we don't have a good guide to circumvent most cognitive biases.
Recently I was talking to someone and realised they didn't accept that a rational approach was always the best one
You don't have to go very far for that viewpoint. Robin Hanson voiced it recently.
Got me thinking, what actually turns someone into a rationality fan?
To me that label rings alarm bells rather than raising any positive associations. Being a fan is something quite different from actually being rational.
Replies from: the-citizen↑ comment by the-citizen · 2014-10-20T12:18:40.969Z · LW(p) · GW(p)
Well, that's semantics in a pretty casual post. Still, the link is interesting, thanks. I wonder if anyone has offered a counter-argument along the lines of "rationality is a muscle, not a scarce resource". But what do you do with someone who doesn't even think that, but just thinks logic is something for nerds?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-20T13:28:57.304Z · LW(p) · GW(p)
Well, that's semantics in a pretty casual post.
No, it's substantial criticism. "Rationality fan" brings up in me the image of a person who aspires to be a Vulcan and who cares about labels instead of caring about outcomes.
The person who deconverted from theism and now makes atheism his new religion without really adopting good thinking habits.
But what do you do with someone who doesn't even think that, but just thinks logic is something for nerds?
Even the bodybuilder who doesn't consider logic to be very important, and sees it as a subject for nerds, might be interested in information from scientific studies about the effects of the supplements he takes.
Replies from: the-citizen↑ comment by the-citizen · 2014-10-21T07:40:32.573Z · LW(p) · GW(p)
Ok, feel free to mentally replace my language with more sensible language. This was just a quick post in the open thread. Thanks for your substantial, if somewhat contrarian, comment.
↑ comment by Richard_Kennaway · 2014-11-11T10:06:17.824Z · LW(p) · GW(p)
There are also lots of guides on how to be fit. Can we find out and learn from what makes a person decide to pursue fitness?
comment by MaximumLiberty · 2014-10-20T15:47:31.556Z · LW(p) · GW(p)
Does anyone have actual data on whether people working with words on computers are impaired or assisted by (a) music with lyrics or (b) instrumental music without lyrics? (I'd also be curious about the effect on people who have to work with numbers, but that's not relevant to me.)
Max L.
Replies from: zedzed, palladias, None, MaximumLiberty, polymathwannabe, army1987, Gunnar_Zarncke↑ comment by zedzed · 2014-10-20T18:47:33.427Z · LW(p) · GW(p)
Salame and Baddeley (1989): Music impairs short-term memory performance, vocal moreso than instrumental.
Jones and Macken (1993) [pdf] has things to say.
↑ comment by palladias · 2014-10-21T15:49:39.720Z · LW(p) · GW(p)
Anecdotally, I have much more trouble writing if I don't have music on. And I frequently listen to musicals.
(Things I write while listening to musicals: email, blog posts, my book).
Replies from: None↑ comment by [deleted] · 2014-10-22T09:13:13.224Z · LW(p) · GW(p)
Introspectively I always felt that music helps me get into a focused state, but I always wondered whether it has any measurable effect. Over the course of May 2014 I collected some data on my own writing performance in different circumstances, when I had a lot of written work to complete (a bit over 100 hours spent on writing in that month).
Every 30 min I took a break and gave a 1-10 rating of the quality of the work I had completed in that period, plus brief notes about anything else notable. I admit that self-rating is rather arbitrary, but simple word count wouldn't suffice, as at various times I was also editing, consulting sources, and doing other tasks related to writing. And of course these results may not generalise to anyone besides myself (and indeed for my own purposes I should do a replication next time I have a huge writing crunch). Mean self-rating of "quality":
* Office, no music: 5.4 (10.5 hr)
* Office, instrumental: 5.8 (21.5 hr)
* Office, vocal: 5.9 (20 hr)
* Library, no music: 4.2 (5.5 hr)
* Home, instrumental: 5.3 (23 hr)
* Home, vocal: 5.5 (21.5 hr)
The mean ratings above conceal a lot of variability; the only reliable effect (Wilcoxon-Mann-Whitney test) was that the university library is a horrible place for me to get anything done (No music: Library < Home and Library < Office). No surprise there - as in my undergraduate studies the library seems mostly to be a place people go to avoid doing work. The apparently lower mean in "Office, no music" was driven by a couple of outliers related to distraction by other people.
Main musical styles (not possible to analyse due to variability): old thrash metal, new doom metal, psychedelic folk, rockabilly, bluegrass, shoegaze, bebop/hard bop, J-pop, person with guitar.
Final note: I do not tend to notice the details of lyrics unless I am paying very close attention to the music; even for highly lyrical music I still mostly focus on the instrumental parts.
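(For anyone who wants to run the same comparison on their own logs, here is a minimal sketch of the rank-sum test mentioned above; the ratings below are made up for illustration, not my actual data:)

```python
from scipy.stats import mannwhitneyu

# Made-up half-hour self-ratings (1-10) for two conditions; real data
# would have one rating per 30-minute block in each condition.
library_no_music = [4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 4]
office_no_music = [6, 5, 5, 6, 4, 6, 5, 6, 5, 6, 6]

# Two-sided Wilcoxon-Mann-Whitney rank-sum test: do the ratings in one
# condition tend to be higher than in the other?
stat, p = mannwhitneyu(library_no_music, office_no_music,
                       alternative='two-sided')
print(f"U = {stat}, p = {p:.4f}")
```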
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-10-22T22:00:52.474Z · LW(p) · GW(p)
Interesting. Did you score the text immediately or later? If you scored while listening to the music, your ratings may well have been influenced by it - music affects mood, and thus ratings.
↑ comment by [deleted] · 2014-10-20T21:14:44.177Z · LW(p) · GW(p)
The people at Focus At Will use instrumental music with certain qualities to "habituate" the listener to improve focus.
They have a ton of references, which may help in your search.
This isn't really my area; I just use the service and it seems to help - at the very least, it acts as a trigger for me to enter a "time to do work" mode.
Replies from: None↑ comment by [deleted] · 2014-10-20T21:20:47.137Z · LW(p) · GW(p)
A quick search in those references for "lyric" turns up only this paper from 2011: A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics
Abstract:
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants' self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects' selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca's area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
↑ comment by MaximumLiberty · 2014-10-20T22:25:47.242Z · LW(p) · GW(p)
Thanks to both zedzed and Troubadour for helping confirm what I hoped would not be confirmed. Time to turn off Metallica at work.
Focus@Will seems interesting. I downloaded the app and am giving it a try. It's a little hard to figure out what they claim the increase in focus might be. Still, even if it is just 5%, that's enough to justify their annual subscription.
Max L.
↑ comment by polymathwannabe · 2014-10-20T15:53:48.246Z · LW(p) · GW(p)
It happens to me. Music with lyrics makes my reading/writing less efficient.
↑ comment by A1987dM (army1987) · 2014-10-25T18:38:10.143Z · LW(p) · GW(p)
I've heard this effect mentioned several times, but if it applies to me, it's not strong enough to be obvious. (Then again, it's not like I've tried to test it statistically.)
Possibly, at certain times of the day music might make me more productive, by making it easier for me to stay awake.
(OTOH if I listen to music while concentrating on something else I usually can't remember any of the lyrics afterwards.)
↑ comment by Gunnar_Zarncke · 2014-10-20T15:57:04.202Z · LW(p) · GW(p)
I remember that there are studies showing that music impairs cognitive tasks (at least learning-related ones), but I can't name them off the top of my head. If you are lucky I will come back with the refs later.
I'd bet gwern can provide better refs too.
comment by [deleted] · 2014-10-21T22:43:29.226Z · LW(p) · GW(p)
What parallels exist between AI programming and pedagogy?
Today, I had to teach my part-timer how to delete books from our inventory. This is a two-phase process: delete the book from our inventory records then delete the book from our interlibrary loan records. My PTer is an older woman not at all versed in computers, so to teach her, I first demonstrated the necessary steps, then asked her to do it while I guided her, then asked her to do it alone. She understood the central steps and began to delete books at a reasonable rate.
A few minutes in, she hit the back button one too many times and came upon a screen that was unfamiliar to her. The screen had buttons leading back to the interface she needed to use. They were very clearly labeled. But she could not understand the information in the labels, either because she had shut down all "receiving" from the direction of the screen in a panic or because she did not want to try for fear of "messing the computer up."
Helping her with this made me think of the problems AI programmers have. They cannot tear levers from their mind and give that set of inferences to an AI wholesale. They cannot say "the AI will KNOW that, if it hits back once too many times, to just hit the button that says 'Delete Holdings.' After all, its job is to delete holdings, so it knows that the 'Delete Holdings' interface is the one it needs." Just like my PTer, in order to make that inference, the AI must be able to receive information about this new surrounding, process that information, and infer from it how to obtain its goal (i.e. getting back to 'Delete Holdings').
What sort of lessons and parallels could be drawn from AI programming that would be useful in pedagogy? I will admit I am ignorant of AI theory and practice save what I have picked up from the Sequences. But the overlap seems worth exploring. Indeed, I suspect others have explored it before me. The Sequences are certainly didactic. I also wonder if teaching (especially teaching those who are technologically illiterate) would be a useful experience for those planning to work in AI programming and ethics.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-10-22T22:43:52.694Z · LW(p) · GW(p)
I keep thinking a lot about the relations between machine learning and human learning, especially teaching children. Basically all results in one field carry over to some degree to the other. Some things only apply on the neuronal level, others only in very specific settings.
Some random pages to follow: http://lesswrong.com/lw/jol/rethinking_education/ Vygotsky's http://en.wikipedia.org/wiki/Zone_of_proximal_development
comment by ChristianKl · 2014-10-20T23:22:36.972Z · LW(p) · GW(p)
After writing the Anti Mosquito thread we got into a discussion of other species to eliminate. While doing that, bed bugs came to my attention. I do have rashes that look like the ones shown in the bed bug article on Wikipedia.
Today I searched, and there are indeed bed bugs.
Does anybody have experiences of getting rid of them?
Replies from: Vaniver, hyporational, philh, btrettel↑ comment by Vaniver · 2014-10-21T14:40:41.894Z · LW(p) · GW(p)
Does anybody have experiences of getting rid of them?
As mentioned by btrettel, I had some a few months ago that, as far as I can tell, were totally wiped out by one thermal remediation, which seems to be the mirror of hyporational's suggestion. Basically, they heated up the apartment enough that all the bedbugs and their eggs died, and this took ~8 hours and was expensive. I found the extermination company on Yelp.
I have several bookshelves with lots of books, and the bedbugs apparently like to crawl all over the place--there was a husk at the corner between the wall and the ceiling--and so the exterminator was pretty insistent that I go with the thermal remediation (which would kill them everywhere) instead of using pesticides, which would have to be applied everywhere to be as effective.
↑ comment by hyporational · 2014-10-21T04:13:00.361Z · LW(p) · GW(p)
People in Finland freeze their sheets and mattresses in the winter and it seems to do the trick. I'm not sure if this helps you now unless you have a huge freezer.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-21T15:15:28.797Z · LW(p) · GW(p)
While continuing to read I found that the thing I spotted in my home is black, and therefore not a real bed bug, so I'm not certain anymore that it's bed bugs.
In any case it's a sign that I need to clean more ;)
When winter is coming I will probably move my sheets and mattresses outside to freeze.
Replies from: Vaniver, hyporational↑ comment by Vaniver · 2014-10-21T16:49:59.305Z · LW(p) · GW(p)
While continuing to read I found that the thing I spotted in my home is black, and therefore not a real bed bug, so I'm not certain anymore that it's bed bugs.
Most exterminators will check out your residence and give you a quote for free; they're probably much better at identifying (and finding) pests than you are.
Replies from: Lumifer↑ comment by hyporational · 2014-10-21T16:28:36.169Z · LW(p) · GW(p)
I did some quick googling, and most sources seem to say that you need at least -18C for 24 hours to kill them. I'm not sure if this means that they don't die at higher temperatures at all or if you just need more time.
Another trick people in Finland use is the sauna: 60C kills them.
Replies from: Baughn↑ comment by philh · 2014-10-20T23:46:36.515Z · LW(p) · GW(p)
Sympathies.
In my case, spraying all over the room several times got them down to a tolerable level. By tolerable I mean that I was sleeping with no duvet or pillow so that they would have nowhere near me to hide, and often performing less-thorough sprays, and as far as I could tell, I wasn't getting bitten any more - so given several months, they would have just died off. I was still seeing one or two a week, though, and the spray probably wasn't too good for my health. That lasted maybe a month or so, then my landlord decided to just replace the bed, and I think I only saw one after that. (I've since moved out.)
comment by James_Miller · 2014-10-20T23:12:38.261Z · LW(p) · GW(p)
How bad is having rs1333049(C,C) if you have no other risk factors, including family history, for heart disease? It is supposedly associated with a 1.9x risk for coronary artery disease.
comment by D_Malik · 2014-10-22T07:33:53.215Z · LW(p) · GW(p)
It seems likely that you could get much of the benefit of cryopreservation for a fraction of the cost, without actually getting your head frozen, by just recording your life in great detail.
A while back, I started tracking e.g. every time I switch between windows, or send out an HTTP request, etc. - not with this in mind, but just so I can draw pretty graphs. It doesn't seem that it would be beyond a superintelligent AI to reconstruct my mind from this data. For better fidelity, maybe include some brain scans and your DNA sequence.
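(As a sketch of what this kind of tracking can look like: the following polls the focused window title once a second on Linux/X11 and logs switches. It assumes the external xdotool utility is installed, the log path is hypothetical, and other platforms would need different tooling:)

```python
import subprocess
import time
from datetime import datetime

LOG_PATH = "window_log.tsv"  # hypothetical output file

def active_window_title():
    # Ask X11 for the currently focused window's title via xdotool.
    result = subprocess.run(
        ["xdotool", "getactivewindow", "getwindowname"],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

last_title = None
while True:  # runs until interrupted
    title = active_window_title()
    if title != last_title:  # log actual switches, not every poll
        with open(LOG_PATH, "a") as f:
            f.write(f"{datetime.now().isoformat()}\t{title}\n")
        last_title = title
    time.sleep(1)
```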
And this sort of preservation might be more reliable than cryopreservation in many ways - frozen brains would be destroyed by a nuclear war, for instance, whereas if you put a hard disk in a box and buried it in a desert somewhere that would probably stay safe for a few millennia. To be more sure, you might even launch such a "horcrux" into space, where pre-singularity people won't get their grubby monkey fingers on it.
If the entire internet were backed up in this way, that might be a lot of people effectively preserved.
Thoughts?
(Also, upon doing something like this, you should increase your belief that you're in an ancestor simulation, since you've just made that more feasible.)
(Also, this would go badly in the case of a "valley of bad utility functions" fooming AI.)
Replies from: skeptical_lurker, ChristianKl, Gunnar_Zarncke, James_Miller, Richard_Kennaway, Evan_Gaensbauer, Richard_Kennaway↑ comment by skeptical_lurker · 2014-10-22T22:35:07.757Z · LW(p) · GW(p)
I've heard this idea before, and it has never seemed convincing. Suppose you managed to record one useful bit per second, 24/7, for thirty years. That's approximately one billion bits. There are approximately 100 billion neurons, each with many synapses. How many polynomials of degree n can fit m points for n>m? Infinitely many.
It's actually worse than this: even if you record orders of magnitude more data than the brain contains, perhaps by recording speech and video, you could maybe recreate the speech and movement centres of the brain with some degree of accuracy, but not recover other areas that seem more fundamental to your identity, because the information is not evenly distributed.
It's easy to get into a 'happy death spiral' around superintelligence, but even godlike entities cannot do things which are simply impossible.
I suppose it might be worth recording information about yourself on the basis of low cost and a small chance of astronomically large payoff, and regardless it could be useful for data mining or interesting for future historians. But I can't see that it has anywhere near the chance of success of cryonics.
Incidentally, a plastinated brain could be put in a box and buried in a random location and survive for a long time, especially in Antarctica.
Replies from: gwern↑ comment by gwern · 2014-10-23T01:55:03.556Z · LW(p) · GW(p)
That's approximately one billion bits. There are approximately 100 billion neurons, each with many synapses. How many polynomials of degree n can fit m points for n>m? Infinitely many.
That's true but irrelevant and proves too much (the same point about the 'underdetermination of theories' also 'proves' that induction is impossible and we cannot learn anything about the world and that I am not writing anything meaningful here and you are not reading anything but noise).
There's no reason to expect that brains will be maximally random, much reason to expect that to be wrong, and under many restrictive scenarios, you can recover a polynomial with n>m - you might say that's the defining trait of a number of increasingly popular techniques like the lasso/ridge regression/elastic net, which bring in priors/regularization/sparsity to let one recover a solution even when n<p (as it's usually written). The question is whether personality and memories are recoverable in realistic scenarios, not unlikely polynomials.
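(A toy illustration of that recovery claim, using scikit-learn; the numbers are illustrative only, and this is a statement about sparse regression, not about brains: with an L1 sparsity prior, the lasso can recover a signal from far fewer observations than unknowns:)

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features = 50, 200  # far fewer observations than unknowns

X = rng.standard_normal((n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]  # the signal is 5-sparse
y = X @ true_coef + 0.01 * rng.standard_normal(n_samples)

# Ordinary least squares is hopelessly underdetermined here, but the
# L1 penalty (a sparsity prior) lets lasso recover the true support,
# give or take a few small spurious coefficients.
model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients at indices:", np.flatnonzero(model.coef_))
```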
On that, I tend to be fairly optimistic (or pessimistic, depending on how you look at it): humans seem to be small.
When I look at humans' habits, attitudes, political beliefs, aesthetic preferences, food preferences, jobs, etc - the psychological literature says to me that all of these things are generally stable over lifetimes, small to largely heritable, highly intercorrelated (so you can predict some from others), and many are predictable from underlying latent variables determining attitudes and values (politics in particular seems to have almost nothing to do with explicit factual reasoning; religion and atheism I also suspect to be almost entirely determined by cognitive traits). Stereotypes turn out to be very accurate in practice. On top of that, our short-term memories are small, our long-term memories are vague and limited and rewritten every time they're recalled, and false memories are easily manufactured; we don't seem to care much about this in practice, to the point where things like childhood amnesia are taken completely for granted and not regarded as dying. Bandwidth into the brain may be large calculated naively, but estimated from how much we actually understand and can retain and make choices based on, it's tiny. The heuristics & biases and expertise literatures imply we spend most of the time on autopilot in System I. Then we have evidence from brain traumas: while some small lesions can produce huge changes (pace Sacks), other people shrug off horrific traumas and problems like hydrocephaly without much issue, people come out of comas and being trapped under ice without their personalities usually radically changing, people struck by lightning report cognitive deficits but not loss of memory or personality changes...
(I think most people have experienced at least once the sudden realization of déjà vu, that they were doing the exact same thing or having the exact same conversation as they had in the past; or even (like myself) wrote a whole comment rebutting an old blog post in their head only to discover after reading further that they had already posted that comment, identical except for some spelling and punctuation differences.)
No, as much as humans may flatter ourselves that our minds are so terribly complex and definitely way more intricate than a cat's and would be infinitely difficult to recover from a damaged sample, I suspect it may turn out to be dismayingly simple for superintelligences to recover a usable version of us from our genome, writings, brain, and knowledge of our environment.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-10-23T07:54:05.508Z · LW(p) · GW(p)
I suspect it may turn out to be dismayingly simple for superintelligences to recover a usable version of us from our genome, writings, brain, and knowledge of our environment.
I think an important point may be to distinguish between producing a usable version of us that functions in a very similar way, and producing a version similar enough to be the 'same' person, to preserve continuity of consciousness and provide immortality, if indeed this makes sense. Perhaps it doesn't; maybe the Buddhists are correct, the self is an illusion, and the question of whether a copy of me (of varying quality) really is me is meaningless.
Anyway, I don't deny that it would be possible to create someone who is extremely similar. People are not randomly sprinkled through personspace, they cluster, and identifying the correct cluster is far simpler. But my intuitions are that the fidelity of reconstruction must be much higher to preserve identity. Comas do not necessarily involve a substantial loss of information AFAIK, but wrt more traumatic problems I am willing to bite the bullet and say that they might not be the same person they were before.
As you say, some lesions cause bigger personality changes than others. But it seems to me that it's easy to gather information about superficial aspects, while my inner monologue, my hopes and dreams and other clichés, are not so readily apparent from my web browsing habits. Perhaps I should start keeping a detailed diary.
Of course, you might derive some comfort from the existence of a future person who is extremely similar but not the same person as you. But I'd like to live too.
So to summarise, I don't think the brain is maximally random, but I also don't think orders of magnitude of compression is possible. If we disagree, it is not about information theory, but about the more confusing metaphysical question of whether cluster identification is sufficient for continuity of self.
And thanks for the reply; it's been an interesting read.
↑ comment by ChristianKl · 2014-10-22T11:00:04.785Z · LW(p) · GW(p)
It doesn't seem that it would be beyond a superintelligent AI to reconstruct my mind from this data.
A superintelligent AI still suffers from garbage in/garbage out. It depends on how good a replication of you has to be before you consider it to be you.
A superintelligent AI might also consider it an ethical obligation to reanimate cryopreserved people, but not an obligation to reconstruct people based on data.
The comments on http://lesswrong.com/lw/1ay/is_cryonics_necessary_writing_yourself_into_the/ might be worth reading on the issue.
↑ comment by Gunnar_Zarncke · 2014-10-22T22:07:02.989Z · LW(p) · GW(p)
I considered training a simple AI to mimic my cognitive habits and responses. The simplest form would be a chat-bot trained on all my spoken and written words. Archiving lots of audio/video could also help.
↑ comment by James_Miller · 2014-10-22T20:22:56.535Z · LW(p) · GW(p)
If true, cryonics would still offer the advantage of strongly signaling that you wanted to be brought back, and of putting you into a community committed to bringing back its members.
A more extreme version of what you suggest is that a future friendly superintelligence might bring to life every possible human mind that wanted to exist.
↑ comment by Richard_Kennaway · 2014-10-22T13:04:08.326Z · LW(p) · GW(p)
If the entire internet were backed up in this way, that might be a lot of people effectively preserved.
Thoughts?
I think it has as much chance of success as the ancient Egyptians' practice of mummification.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-10-22T22:51:45.327Z · LW(p) · GW(p)
Is there any chance that mummification preserves enough information for revival?
Replies from: gwern↑ comment by gwern · 2014-10-23T01:25:02.496Z · LW(p) · GW(p)
Given the part where they stir up and scoop out the brains, I would be extremely surprised if anything could recover them from their bodies (barring some sort of bizarre Tiplerian 'create all possible humans using infinite computing power' scenario).
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-10-23T08:05:08.149Z · LW(p) · GW(p)
Ok, I should have remembered that. But the Egyptians were not the only people who practised mummification, and there are accidentally mummified bodies as well. Any chance of those surviving? What about Lenin's embalmed body?
Replies from: gwern↑ comment by gwern · 2014-10-26T22:33:57.298Z · LW(p) · GW(p)
It's been a very long time since I read Lenin's Embalmers, but the brain seems to be in pretty bad shape these days:
No one seems to know what's happened to Lenin's heart, but Soviet ideologists were sure that his brain was something special. They brought in a renowned German scientist to examine it for clues to the great man's genius, but nothing came of it. The brain is still kept at a Moscow institute. "But it's not easy to see it," Zbarsky said. "It's mostly dissected."
Or http://bookhaven.stanford.edu/2010/09/the-curious-and-complicated-history-of-lenins-brain/
↑ comment by Evan_Gaensbauer · 2014-10-28T08:48:48.135Z · LW(p) · GW(p)
Mentioning the possibility of my mind being recreated by a superintelligence after my death, you had my curiosity[1], but with 'drawing pretty graphs' you have my attention. I want to draw pretty graphs about my activity too. How do you do this? I want to do it for and to myself. Feel free to send me a PM about it, if you don't want to continue the conversation here.
[1] gwern's comment below in the parallel child thread helped.
comment by iarwain1 · 2014-10-20T22:22:31.182Z · LW(p) · GW(p)
1) What course of study / degree program would you recommend to someone interested in (eventually) doing research for effective altruism or global priorities (e.g. Givewell, FHI, etc.)?
2) Are there other similar fields? Requirements: research, focused on rational decision-making, mostly or entirely altruistic
3) What are the job opportunities like in these fields?
Replies from: zedzed, ChristianKl, So8res↑ comment by zedzed · 2014-10-20T23:03:06.642Z · LW(p) · GW(p)
Holden Karnofsky of Givewell discusses some of this in his EA summit talk. There's no simple answer, but a short one is "get big". Near as I can tell, the best way to do this is develop rare and valuable skills that interest you, a la So Good They Can't Ignore You.
Personally, I think math and computer science are good places to start. Both are rare and valuable (especially taken together). If you have aptitude and interest (as I estimate you do), start there. For math, step 1 is to get through calculus. You'll get different opinions for CS; I'm personally a fan of SICP, but that assumes calculus. Fortunately, we've compiled a list of programming resources.
And then things that strike your interest. I'm learning psychology, writing, and economics, not because I think they're the rarest or most valuable skills, but because they're at least somewhat uncommon and at least somewhat valuable and I really enjoy learning them, and the combination of math/CS/psych/writing/econ is sufficiently novel that I should be able to do useful things that wouldn't happen otherwise. Holden discusses reasons for choosing things that interest you/things you have aptitude for, rather than the most tractable problem, in the video linked above.
Good luck!
Replies from: ChristianKl, iarwain1↑ comment by ChristianKl · 2014-10-22T22:28:41.135Z · LW(p) · GW(p)
It's at 1:15:00 in the summit talk. He lists three main criteria for choosing what to do early in one's career:
* Personal development potential
* Potential to make contacts
* Potential to gain power, status and freedom
↑ comment by iarwain1 · 2014-10-21T01:25:04.706Z · LW(p) · GW(p)
Good links and thoughts, as usual.
step 1 is to get through calculus.
Working on it :). (To explain, zedzed is helping me study algebra with an aim to get through calculus. I'm on the last chapter of the textbook we're working through.)
↑ comment by ChristianKl · 2014-10-20T22:51:55.517Z · LW(p) · GW(p)
If you look at the GiveWell job description for a research analyst (http://www.givewell.org/about/jobs/research-analyst), it doesn't mention that GiveWell cares whether applicants have a degree.
If that's where you want to go, applying directly to GiveWell would be the straightforward course of action.
Given FHI's academic nature, they probably do prefer people with degrees, but I think FHI doesn't want specific degrees; they want to hire people with expertise they currently lack, so they should be pretty open.
↑ comment by So8res · 2014-10-28T17:43:22.525Z · LW(p) · GW(p)
Can "research" include heavy math?
Replies from: iarwain1↑ comment by iarwain1 · 2014-10-28T19:17:26.278Z · LW(p) · GW(p)
Yes, but preferably not only focused on heavy math.
Replies from: So8rescomment by the-citizen · 2014-10-20T11:45:41.237Z · LW(p) · GW(p)
I posted If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid it? on the LW subreddit recently, in case anyone is interested. I'm not sure how many people read the subreddit. Is this something I should post on here?
comment by [deleted] · 2014-10-26T17:00:00.037Z · LW(p) · GW(p)
I make a public vow not to watch Twitch till 2015.
Replies from: Jayson_Virissimo, DanielLC↑ comment by Jayson_Virissimo · 2014-10-27T04:03:35.312Z · LW(p) · GW(p)
Do you have a mechanism for trusted third parties to audit (not necessarily publicly) your adherence to your vow? If not, you should consider it. Similarly: I won't post, comment, or tweet on Facebook or Twitter between October 6th and the end of the year.
Replies from: None↑ comment by DanielLC · 2014-10-27T19:12:29.595Z · LW(p) · GW(p)
Why? That seems like an odd vow.
Replies from: Nonecomment by Gunnar_Zarncke · 2014-10-23T15:40:47.767Z · LW(p) · GW(p)
Is it acceptable for LWers to have more than one account? I'm considering creating a more anonymous account for asking and discussing possibly more controversial or out-of-character topics, or topics I wouldn't want to see associated with my name.
What do you think about this?
Replies from: NancyLebovitz, Username, shminux, ChristianKl, Jiro, Evan_Gaensbauer↑ comment by NancyLebovitz · 2014-10-23T16:18:00.785Z · LW(p) · GW(p)
If you don't tell us (sorry, possibly too late), we aren't likely to find out, unless your alternate account is so obnoxious that it gets investigated. In this community, that's a damned high threshold.
Seriously, I don't think it's a problem. Nobody objected to Clippy probably being a regular member, though there were efforts to find out who it was out of curiosity.
↑ comment by Username · 2014-10-23T22:50:35.342Z · LW(p) · GW(p)
Depending on your use case, you might also just want to use this account (for which the password is 'password').
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-10-24T06:10:08.735Z · LW(p) · GW(p)
Interesting. I always thought that was a peculiar username, but for such a use case it makes sense.
What is the intended use of that account?
Replies from: Username↑ comment by Shmi (shminux) · 2014-10-26T22:47:55.211Z · LW(p) · GW(p)
I think there is nothing wrong with using something upfront and obvious like throwaway_. Or any other name, as long as you preface your question with "this is a throwaway", as is common on Reddit.
↑ comment by ChristianKl · 2014-10-23T16:16:13.109Z · LW(p) · GW(p)
I think the case law on the issue is that nobody speaks up when people do this and mark it explicitly.
On the other hand do not use both accounts within one discussion. Don't use both accounts to vote on the same thread.
Given that you have written this thread, however, it's likely a bad idea for you to start a more anonymous account right now ;) - at least if you care about keeping secrets from people on LW and not only from the casual browser.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-10-23T21:16:57.491Z · LW(p) · GW(p)
On the other hand do not use both accounts within one discussion. Don't use both accounts to vote on the same thread.
This is a very reasonable recommendation.
Given that you have written the thread...
I considered this beforehand, of course. Even if I did, I would have plausible deniability, which is what counts. Even if regular LWers suspected something, the pretense could be maintained. See also http://lesswrong.com/lw/24o/eight_short_studies_on_excuses/
↑ comment by Evan_Gaensbauer · 2014-10-28T09:36:44.128Z · LW(p) · GW(p)
You may recall I had a previous user account not linked to my real name: eggman. I started using one with my real name since I intend to post to Less Wrong and the effective altruism forum more frequently in the future, and it works better to have both accounts linked to my name if people want to contact me. Since I already happen to have access to two accounts, I was thinking of using 'eggman' for the same purposes. Now, obviously, I've just stated I was, or am, eggman, or whatever. So, it isn't anonymous anymore. However, it's more anonymous, as it doesn't use the same name that's on my government-issued I.D.
More than using it because I don't want other users on Less Wrong to know it's me, I might not want it searchable, or linked to my public identity, for others off of Less Wrong.
So, I consider this acceptable.
If you want to make it more acceptable...
Consider stating in the profile, or at the beginning of each comment, that you're a regular user of Less Wrong who is using an anonymous sockpuppet account not because you want to troll, but because you don't want your name linked to discussion of more controversial topics. I believe the genuine merit of how you discuss such topics would quickly dismiss speculation that you are just a troll.
If you don't do the above, and never mention that the anonymous account is used by a regular user of Less Wrong who also posts under their real name, none of us could actually tell. We wouldn't know you had two accounts. I'd be surprised if someone were such an inquisitor that they snoop profiles to ensure someone made a credible introduction of themselves in a welcome thread, and report them to a moderator if they don't.
If the topics you'd be discussing on Less Wrong are controversial among the Less Wrong community itself, that might be another matter. That might be playing with fire, and in that case I'd just caution you to be more careful, lest you cause harm to Less Wrong as a discussion board.
↑ comment by Gunnar_Zarncke · 2014-10-28T17:39:13.073Z · LW(p) · GW(p)
I noticed your post and wondered about your opinion on this. Thanks for sharing it.
I think it is a good idea to state the intention clearly - possibly on each post. But I wonder whether that colors the responses it gets. Though maybe the effect is even positive.
The reason I use my real name is comparable to yours: I want to see my real name attached to my postings.
comment by Ritalin · 2014-10-24T20:11:00.319Z · LW(p) · GW(p)
Employing one's rational skills in extremely stressful or emotional situations, specifically extreme infatuation:
Today at the market, while waiting in the queue, I recognized an ex-lover of mine. One I had never gotten over. I dared not speak her name. I knew, with absolute certainty, that I would have absolutely no control over what I said to her, if I didn't shut down entirely, standing there with my mouth open, my breath held, and a cacophony of conflicting thoughts and emotions on my mind.
I knew that, if, against all probability, she decided thereafter to renew contact with me, all of my priorities, all of my wants, all of my existence, would become subordinate to hers. I'd be looking forward to her texts like a drowning man looks forward to air. Her approval would bless me, her anger would damn me.
This is obviously wrong. No human being should lose judgement and freedom so absolutely to another. It's not right that all one's system of ethics, ambitions, values, priorities, wants, needs, principles and morals... it's not right that it shifts and solidifies around two supreme tenets:
- Making the other (I hesitate to call them "beloved") happy.
- Being with that other, as closely as possible.
What does the research say? What is the common wisdom in this community? How does one deal with this kind of extreme emotion?
Replies from: ahbwramc, Tripitaka, ChristianKl, Evan_Gaensbauer↑ comment by ahbwramc · 2014-10-25T00:31:32.937Z · LW(p) · GW(p)
I can empathize to an extent - my fiancée left me about two months ago (two months ago yesterday, actually, now that I check). I still love her, and I'm not even close to getting over her. I don't think I'm even close to wanting to get over her. And when I have talked to her since it happened, I've said things that I wish I hadn't said, upon reflection. I know exactly what you mean about having no control over what you say around her.
But, with that being said...
Well, I certainly can't speak for the common wisdom of the community, but speaking for myself, I think it's important to remember that emotion and rationality aren't necessarily opposed - in fact, I think that's one of the most important things I've learned from LW: emotion is orthogonal to rationality. I think of the love I have for my ex-fiancée, and, well...I approve of it. It can't really be justified in any way (and it's hard to even imagine what it would mean for an emotion to be justified, except by other emotions), but it's there, and I'm happy that it is. As Eliezer put it, there's no truth that destroys my love.
Of course, emotions can be irrational - certainly one has to strive for reflective equilibrium, searching for emotions that conflict with one another and deciding which ones to endorse. And it seems like you don't particularly endorse the emotions that you feel around this person (I'll just add that for myself, being in love has never felt like another person's values were superseding my own - rather it felt like they were being elevated to be on par with my own. Suddenly this other person's happiness was just as important to me as my own - usually not more important, though). But I guess my point is that there's nothing inherently irrational about valuing someone else over yourself, even if it might be irrational for you.
Replies from: Ritalin↑ comment by Ritalin · 2014-10-25T12:02:42.029Z · LW(p) · GW(p)
Mostly I resent the fact that my mind becomes completely clouded, like I'm on some drug.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-10-27T18:03:18.403Z · LW(p) · GW(p)
Replies from: DanielLC↑ comment by DanielLC · 2014-10-27T19:11:43.590Z · LW(p) · GW(p)
I don't think naturally producing a hormone counts as being on drugs. If it did, that would mean that everyone is on tons of drugs all of the time.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-10-28T00:24:07.966Z · LW(p) · GW(p)
Some people seem to get a higher dose of internally produced drugs than others.
Replies from: Ritalin↑ comment by Ritalin · 2014-10-30T16:10:49.324Z · LW(p) · GW(p)
I suppose that's what they call "being more emotional"?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-10-30T17:03:17.936Z · LW(p) · GW(p)
Probably one of those words that could mean many things:
a) a higher dose of hormones;
b) greater awareness of your internal state; or
c) an exaggerated reaction to the same dose of hormones.
↑ comment by Ritalin · 2014-10-30T23:42:37.612Z · LW(p) · GW(p)
Measuring the difference between those three is hardly trivial, though. Can't they be considered the same for all practical purposes?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-10-31T08:57:59.542Z · LW(p) · GW(p)
In the short term, yes. In the long term, some people would benefit from awareness-increasing techniques, such as meditation or therapy, while other people would benefit from changing their behavior.
↑ comment by Tripitaka · 2014-10-25T23:51:43.821Z · LW(p) · GW(p)
The question "is heartbreak the way humans experience it right now a good thing" is one of the more complex questions about the human condition,yes. My mental modell of all that is kinda like the following:
On a neurochemical level, the way "love" stimulates the reward centers has been likened to cocaine. It's an extremely strong conditioning, addiction even. So of course your brain wants to satisfy that condition by all means possible. If we have a look at popular culture, it's kinda expected to have extreme reactions to heartbreak: people fall into depression, start rationalizing all kinds of really crazy behaviour (stalking, death threats, lifechanging roadtrips), etc. etc.
To avoid all that you have to thoroughly impress on your emotional side that it's over: that's why some people do the whole "burn everything that connects me with her", while others overwrite the old emotion with new (irrational) emotions like anger, hold a symbolic funeral, repeat it to yourself every day in a mirror, etc.
Unfortunately I am not aware of studies about optimal treatment of heartbreak, but common wisdom is: NO contact at all in the beginning, allow yourself to grieve, find solace with friends/family, and somehow redefine your sense of self-esteem - take up painting/coding/something you have always wanted to do. If one wanted to go the rational route: research the neurochemistry, find out whether it's really like cocaine addiction, and do whatever helps with cocaine withdrawal (or the most closely related drug withdrawal).
Replies from: ChristianKl, Ritalin↑ comment by ChristianKl · 2014-10-26T14:34:39.276Z · LW(p) · GW(p)
If one wanted to go the rational route: research neurochemistry, find out wether its really like cocaine-addiction, do whatever helps with cocaine-withdrawal.
Why do you label that process of researching neurochemistry rational?
Replies from: Tripitaka↑ comment by Tripitaka · 2014-10-26T21:52:31.311Z · LW(p) · GW(p)
Well, the OP's stated goal is to end the strange behaviour they have around their ex, which takes away their agency. While a common problem among humans, it appears to be solved mostly with time - i.e., it is effectively unsolved. We have some (bad) data suggesting that this is actually very similar to some kinds of addiction. And while addiction is certainly nowhere near 100% curable (or we would have heard of that by now), my prior for "we have found some better-than-placebo treatments for one of the major drug addictions (cocaine)" is 70-80 percent. So I give "investigate this line of thinking, speak with experts" a high enough invested-time/chance-of-success ratio to be worth considering. That was my thought process for using "rational"; is the explanation satisfying?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-26T21:59:41.547Z · LW(p) · GW(p)
I asked about researching neurochemistry, not about researching cocaine treatment.
Replies from: Tripitaka↑ comment by Tripitaka · 2014-10-26T23:36:06.370Z · LW(p) · GW(p)
Bad data. I have not read the original research study whose findings were later likened to those of cocaine, and I am a bit suspicious of how similar they actually are. "Study the neurology" instead of "neurochemistry" would be more accurate, I guess.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-27T15:16:36.892Z · LW(p) · GW(p)
I still see no valid argument for the claim that you can get enough knowledge about the issue to judge whether or not trying one of the addiction treatment exercises is likely to be helpful.
↑ comment by Ritalin · 2014-10-26T10:07:20.840Z · LW(p) · GW(p)
The main problem, as far as I'm concerned, isn't heartbreak itself, but the way I enter an altered state around her. To put it simply, I can't think straight. It's like being intoxicated, or in terrible pain. Getting over an ex is tough. But right now I'm more interested in managing my feelings when around a loved one, rather than becoming paralyzed with my mind going blank.
Replies from: Tripitaka↑ comment by Tripitaka · 2014-10-26T12:37:35.877Z · LW(p) · GW(p)
Sorry, I failed to make myself clear. To put it simply back: it feels as if you are in pain or intoxicated, because that's exactly what it is - see http://www.pnas.org/content/108/15/6270.short for example. Your System 1 is in desperate need to get its fix OR stop the hurting, even if System 2 is fine. The obvious way to combat it, and the accompanying loss of agency, is to precommit in some way to stop being around them, and also to ignore their wishes in the future. The way this happens for a lot of people is by attributing undesired qualities to their ex-partners, strong peer pressure, etc. Because System 1 is so strong on this front, depending on your own stability, it can actually be dangerous to fight it too much with System 2. For the whole System 1 versus System 2 struggle, mindfulness meditation is useful.
↑ comment by ChristianKl · 2014-10-25T17:30:26.810Z · LW(p) · GW(p)
You can write down your own goals to make them clearer to yourself. If you are clear about what you want to do, it's harder for someone else to give you other goals than when you are empty.
There are various kinds of things you can do to learn emotional control. I remember times in the past when strong emotions could cloud my mind, but after doing a lot of meditation that's not true for me anymore. In the absence of ugh-fields or a lot of unknowns, strong emotions make me think clearly and I can still follow rule-based heuristics.
The most charged emotional situation I can think of that likely would have freaked a lot of people out was when it was past midnight and I was walking alone and a guy grabbed me and told me: "Give me 5 Euros or I'll kill you"
To get to something more speculative, I have the idea that love is a lot of conditioning. If every time you think about X you feel good, the next time you think about X you will feel even better. A bit of unpredictability thrown in generally increases the effect.
If you repeat that a thousand times you get a pretty strong stimulus. Almost wireheading ;)
Of course there are additional effects that come with physical intimacy. Speaking about them would be even more speculative.
Replies from: Ritalin↑ comment by Ritalin · 2014-10-25T20:41:01.794Z · LW(p) · GW(p)
I have trouble parsing this... could you rephrase it in a more practically-oriented way?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-26T12:31:59.780Z · LW(p) · GW(p)
I don't think doing interventions on a highly charged emotional issue like this in a practically-oriented way is well done via text.
Text is much better for discussing the topic on a more abstract level. Having abstract concepts to orient yourself in a situation can help.
I gave one practical suggestion, meditation. To be more practical: Find a local meditation group with an experienced teacher and attend it regularly.
One of the exercises that CFAR does is comfort zone extension. That can also help. If you do that kind of exercise often, you train yourself to stay operational under strong emotions.
↑ comment by Evan_Gaensbauer · 2014-10-28T09:21:53.895Z · LW(p) · GW(p)
What is the common wisdom in this community? How does one deal with this kind of extreme emotion?
It's not great data, but I recall an analogous discussion on Less Wrong several months ago.
I participated in the discussion of a post covering subject matter similar to what you're thinking about: is love a good idea?. It seems to me the original poster was trying to make something like a hypothetical apostasy of falling in love, with mixed results. I had a lot of sympathy for him, as trying to become a rational apostate of "falling in love" seems a challenge fraught with pitfalls, and yet a path somebody might find tempting to pursue after having love rend their emotions so. The original poster admitted he hadn't been in a relationship as significant as, for example, yours, so there might be limited value in that perspective. Still, I feel like the rest of us were able to clarify his somewhat muddled thinking in the comments, so you might find solace there. Additionally, you might feel like sending some participant in that discussion a private message if what they wrote reaches out to you.
comment by DataPacRat · 2014-10-20T11:07:57.103Z · LW(p) · GW(p)
Does anyone here know anyone (who knows anyone) in the publishing industry, who could explain exactly why a certain 220,000-and-counting-word RationalFic manuscript is unpublishable through the traditional process?
Replies from: Toggle, IlyaShpitser, garabik, Kaj_Sotala, James_Miller, ChristianKl, Baughn↑ comment by Toggle · 2014-10-20T19:41:08.383Z · LW(p) · GW(p)
I'm fairly well-informed on this subject- I've had one published science fiction author as a housemate, another as a good friend, and I'm on a first-name basis with multiple editors at Tor.
You will find it very challenging to get direct feedback from any professionals in the industry at this stage, short of relationships like personal friendship. This is because at any given time, there are tens of thousands of unpromising authors making exactly that request.
If this is your first novel, or even your third, don't expect too much. The bar for minimum quality is extremely high, and author skill does not peak at a young age. If you're still early in the process, and you're still enjoying the practice, keep writing your second and third and eighth books while you look around for your first to be published. As a general rule of thumb, if you don't have a novel that's now vaguely embarrassing to you, then you probably aren't good enough yet. Do not put all your eggs in one basket by writing one very long series; try out a variety of settings, and experiment with your craft.
You will often hear that writing short stories to build up a reputation first is a good way to break into the industry. This is false.
Be aware that no matter which route you take, multiple years will pass before your book is accepted. Be aware that when your first book is published, you will not be paid enough to live on.
Rather than chasing publishers immediately, the first thing you need (need) is a good agent. Being accepted by an agent is a kind of 'slush pile lite' challenge - agents usually have their own slush piles and their own interns to read through them, but their overall volume and turnaround time is much more manageable. You're also much more likely to get real feedback from an agent, explaining any potential problems that they see with your work. Another advantage of having multiple novels written is that you can send a particular novel to a particular agent, depending on their stated preferences. These can be quite specialized - gay and lesbian characters in fantasy settings, hard SF alternate history military adventures - depending on the goals of the agent in question, and it helps to maximize the number of niches that you accidentally fall into by writing a variety of stories. Make sure that you are aware of your chosen agent's reputation, since there are predatory impostors. Once you have this agent, you will be able to bypass the publisher's slush pile entirely, and your chances improve dramatically.
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T23:01:07.508Z · LW(p) · GW(p)
a novel that's now vaguely embarrassing to you
I wonder if a novel-length piece of fanfiction starring a bovine secret agent counts...
a good agent
I know even less about agents than I do about slush piles - I don't know where to even begin looking for a list of them, whether there are agent-focused online forums or subreddits, agent review sites, or what-have-you. Where might I start looking to discover agents' reputations?
Replies from: Toggle, None↑ comment by Toggle · 2014-10-20T23:39:37.835Z · LW(p) · GW(p)
This resource seems quite good. It gives a few websites that compile lists, but your first step is going to be a bookstore- go find books that are likely to appeal to the same sorts of people as your own, and look inside them. Agents aren't usually listed in the title pages or published information, but it's good form to mention them in the acknowledgments, so that's where you'll get your initial list of names.
I wonder if a novel-length piece of fanfiction starring a bovine secret agent counts...
Ha! Possibly. Are you now skilled enough to rewrite it, better, in 30,000 words without losing anything?
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T23:57:00.572Z · LW(p) · GW(p)
Are you now skilled enough to rewrite it
If I had a reason to, yep.
better
I think I could manage that.
in 30,000 words
... tricky.
without losing anything?
I don't think I can shrink it by a factor of 9 without losing quite a lot - even summarizing just the best bits might take more than that.
Replies from: Toggle↑ comment by Toggle · 2014-10-21T00:39:19.629Z · LW(p) · GW(p)
One of the most common signs of an author that has yet to mature is a conspicuously low density of language (especially so in fan fiction). I actually wouldn't be surprised if you could cut it to a ninth, although I suppose a third would be a bit more realistic without my having actually seen it. If you want to try this out without taking on an unreasonably large project, try cutting your old blog posts in half. Just as an example, I pulled a random paragraph from S.I. (which I might have mangled due to a lack of context):
"I never actually caught sight of Charles - he seemed to either be running errands, or hanging out with a few other guys aiming to create some sort of "Last of the Summer Wine" pastiche. After the second ladder crash, I suspected he married into the House household simply to have ready access to medical care."
"Charles was nowhere, probably off playing 'Last of the Summer Wine' with his buddies. No surprise- after the latest ladder crash, I'd bet he married a House for the insurance."
All this is just a heuristic, of course. The ability to compress language doesn't make you a good author, it's just something that most good authors can do.
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-21T00:48:06.187Z · LW(p) · GW(p)
The ability to compress language
If I were given the goal of cutting my verbiage in half, I think I could do that reasonably well. The question is, what's the meta-heuristic here? When should an author go to the effort of aiming for shortened prose as opposed to longer text?
Replies from: Toggle, Antiochus↑ comment by Toggle · 2014-10-21T01:10:08.918Z · LW(p) · GW(p)
The reason that you want to be able to compress language is, in a broader sense, to be able to use words with extreme precision. An author who can do this is in a good position to decide whether they should, but someone who defaults to the more expansive writing is probably not using individual words conscientiously.
Replies from: gwern↑ comment by gwern · 2014-10-21T02:18:54.534Z · LW(p) · GW(p)
is probably not using individual words conscientiously.
I would go further: an author who has not edited down their prose to something tighter and with more bang for the buck is probably too in love with their writing to have carefully edited or considered all the other aspects of their story, such as the plot, pacing, content, or character voices.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-10-22T14:52:59.545Z · LW(p) · GW(p)
Any thoughts or resources about the right amount of redundancy?
↑ comment by [deleted] · 2014-10-21T00:08:50.602Z · LW(p) · GW(p)
It's a yearly publication. You can probably find the latest copy in the nearest library of any size. If not, they can certainly get it on loan from another library.
It lists publishing houses, agents, online markets, magazines, everything useful to a writer looking to get published.
I would suggest grabbing a copy and just surfing through it. It's a great start.
↑ comment by IlyaShpitser · 2014-10-20T11:18:29.366Z · LW(p) · GW(p)
May I suggest talking to the scifi/fantasy author community (they know quite a bit about this, and often struggle to publish)? Like piloting and academia, demand for these sorts of jobs far outstrips supply, so most people will struggle and make a poor living.
Replies from: NancyLebovitz, DataPacRat↑ comment by NancyLebovitz · 2014-10-20T15:29:47.425Z · LW(p) · GW(p)
There isn't a single author community, but Making Light has both editors and authors as hosts and commenters.
If you want to make some personal connections, it's a good place if your personality is a good fit for the community. (Translation: I'd call the community informally rationalist, with a high tolerance for religion. Courtesy is highly valued.)
I looked at the beginning of your novel, and the prose is engaging-- I think it would appeal to people who like Heinlein.
↑ comment by DataPacRat · 2014-10-20T11:37:57.369Z · LW(p) · GW(p)
Do you have any particular locations for this 'scifi/fantasy author community'?
most people will struggle and make a poor living.
I don't expect to make a single cent out of this story; in that sense, I'm writing it to improve my skills for whatever I write next. (Tsuyoku naritai!) But I'm writing it even more because I just want to write it.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-20T13:54:02.740Z · LW(p) · GW(p)
I don't expect to make a single cent out of this story
Then why should anybody expect to make a single cent out of publishing your story? If you don't believe in it, why should others?
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T14:00:15.563Z · LW(p) · GW(p)
I don't expect to make a single cent out of this story...
... because I don't even have the generic mailing address to a minor publishing house's slush pile, let alone the social network and connections that would let me sidestep the ordinary process and get in touch with a human willing to spend more than thirty seconds glancing at yet another novel.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-20T15:05:06.296Z · LW(p) · GW(p)
Why do you choose that route over going to fanfiction.org if you think you have low chances of getting published?
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T15:17:10.050Z · LW(p) · GW(p)
I may not be able to get /this/ novel published; but the skills I develop as I work on it, and the various lessons I learn in the process, seem likely to be useful for /future/ stories.
What made me start thinking in terms of paper publishing at all was this comment.
↑ comment by garabik · 2014-10-20T20:29:08.255Z · LW(p) · GW(p)
If you haven't yet, read http://www.antipope.org/charlie/blog-static/2010/04/common-misconceptions-about-pu-1.html by Charles Stross. Quite a good description of how book publishing works.
↑ comment by Kaj_Sotala · 2014-10-21T19:59:02.647Z · LW(p) · GW(p)
Is there a good reason to go through a publisher these days? At least assuming that you're not certain you'll get a big publisher who's really enthusiastic about marketing you?
Yes, if you manage to find a publisher they'll get your book in bookstores and maybe do some marketing for you if you're lucky, but as others in the thread have indicated, getting through the process and into print may take years - and unless you manage to get a big publisher who's really invested in your book, the amount of extra publicity you're likely to get that way will be quite limited.
Instead you could put your books up on Amazon as Kindle and CreateSpace versions: one author reports that on a per-unit basis, he makes three times more money from a directly published ebook priced at $2.99 than he would from a $7.99 paperback sold through a publisher, and almost as much as he would from a $25 hardcover. When you also take into account the fact that it's a lot easier to get people to buy a $3 book than a $25 or even an $8 book, his total income will be much higher. As a bonus, he gets to keep full rights to his work and can do whatever he wants with it. Also, the books can be on sale for the whole time that one would otherwise have spent looking for a publisher.
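As a rough sanity check on that per-unit claim, here's some back-of-the-envelope arithmetic (the rates are illustrative assumptions of mine, not figures from the cited author: Amazon pays 70% on ebooks priced $2.99-$9.99, and print royalties commonly run around 8-10% of list price):

```python
# Back-of-the-envelope per-unit royalties -- the rates are assumptions,
# not the cited author's actual contract terms.
ebook = 2.99 * 0.70        # ~$2.09: Amazon's 70% tier for $2.99-$9.99 ebooks
paperback = 7.99 * 0.08    # ~$0.64: assuming ~8% of list for a paperback
hardcover = 25.00 * 0.10   # ~$2.50: assuming ~10% of list for a hardcover
print(ebook / paperback)   # ~3.3x -- consistent with "three times more"
print(ebook / hardcover)   # ~0.84x -- "almost as much" as the hardcover
```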
One fun blog post speculates that the first book to earn its author a billion dollars will be a self-published ebook. Of course you're not very likely to earn a billion dollars, but the same principles apply for why it's useful to publish one's work that way in general:
Prediction #1: The first B-book will be an e-book.
The reason is that you can’t have great sales without great distribution. There are roughly a billion computers on the planet connected to the internet and all of them can read e-books in numerous formats using free software. There are roughly four billion mobile devices, and most of those will soon be able to read e-books.
The sales channel for e-books is growing rapidly and has global reach. That’s why the first B-book will be in e-format. [...]
Prediction #2: The first B-book will be self-published.
Self-publishing is the best way to get the royalty rate high enough and the retail price low enough to make the B-book a reality.
The fact is that most publishers aren’t going to price your e-book at $2.99 or $3.99. They’ll want it at $9.99 or $12.99, which is probably too high for the market. And they’ll pay you only 25% royalties on the wholesale price, which is too low. If you want an aggressively priced e-book and a high royalty rate, you’ll almost certainly need to publish it yourself.
I feel like if you want money, you should go for self-publishing. If you're more interested in getting a lot of readers, you should again go for self-publishing. Of course the most likely outcome for any book is that you won't get much of either, but at least self-publishing gives you better odds than a traditional publisher. (Again, with a possible exception for the case where you get a big publisher to put up a massive marketing campaign for you.)
Replies from: NancyLebovitz, John_Maxwell_IV, ChristianKl↑ comment by NancyLebovitz · 2014-10-22T14:55:00.246Z · LW(p) · GW(p)
I don't have links handy, but I've seen essays by authors which say that self-publishing and using a publisher both have advantages and drawbacks, and those authors are using both methods.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-10-28T08:15:06.721Z · LW(p) · GW(p)
Smashwords is pretty nice; it lets you quickly spray your self-published book to various digital ebook stores all over the internet.
↑ comment by ChristianKl · 2014-10-22T12:33:30.348Z · LW(p) · GW(p)
Books in book stores have had professional editors; most self-published books don't have editors. It seems like the post that motivated DataPacRat to start this thread was partly about editing.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-10-22T14:00:06.872Z · LW(p) · GW(p)
You can always purchase editing services separately.
↑ comment by James_Miller · 2014-10-20T18:57:34.896Z · LW(p) · GW(p)
The first question a publisher asks is "what shelf would it go on in the book store?"
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T22:37:01.499Z · LW(p) · GW(p)
In this particular case: "Science Fiction". I don't know of many stores that subdivide SF&F more than that.
Replies from: James_Miller↑ comment by James_Miller · 2014-10-20T23:09:06.099Z · LW(p) · GW(p)
Then you need to be pitching it to publishers as science fiction not RationalFic.
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T23:20:11.764Z · LW(p) · GW(p)
Agreed. It's generally just to the crowd here that I pitch it as RatFic.
↑ comment by ChristianKl · 2014-10-20T13:05:54.149Z · LW(p) · GW(p)
Basically because no one that you showed the manuscript to thinks they can make money with it.
Manuscripts often get rejected by a bunch of publishers until one wants to publish them. On the other hand, few publishers have any idea that they want to publish a RationalFic.
Replies from: None↑ comment by [deleted] · 2014-10-20T13:42:43.634Z · LW(p) · GW(p)
^^^ There is your answer.
A publisher may reject a manuscript based on some ideological or cultural qualm, but at the end of the day, the publisher's main question is going to be "Can I sell this?" If you want to get a manuscript published, you have to do two (overly simplified) things: make it worth publishing and find someone whose market would be interested in the ideas expressed therein. IlyaShpitser's suggestion of looking into scifi/fantasy is a good one.
Also a quick couple of notes. First off, I don't know if this is true of every publisher, but you would probably do better if you knocked off that "-and-counting" portion of the length. Believe me, publishers receive gobs of letters about manuscripts that are unfinished "but will be masterpieces." Have a product. Show them the product. You need to have leverage with a publisher, and being able to slam a finished story down and say, "This is what I have for you. This is good. This is what you need. You can buy it now or I will look elsewhere" is powerful. Though I would not suggest actually engaging in the hyperbole I just used; that was an example only. The point is, have a product, not an idea.
Second, I would not try to sell it as a RationalFic. Sell it as a story. Again, you can also sell it as scifi/fantasy, but mainly do so within those communities/publishing houses that cater to that. Coming to a non-genre publisher and saying, "I have a rational fiction story about the singularity" will not set off their "50,000 advance copies" antennae. Instead, give them a summary of the story. I don't necessarily mean a dry summary. Just some idea of what you have, why it would be interesting to readers, and, subtly, how it would make the publisher cash.
Remember, publishing is not an art form or an intellectual process. It's not academia. It's a business. In publishing, you don't talk about artistic merits or themes or prescient issues unless that's what the publisher wants to hear. Talk about business, talk about what interests the publisher, talk about how you (and you alone) meet those interests. It may feel like you are cheapening the intended impact of your work, but getting published is modern-day patronage. You have to approach it as business.
Good luck! Keep at it. Remember: Stephen King got so many rejection letters that they eventually weighed down the nail he stuck them on and tore it from the wall. So don't let one rejection get you down.
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T14:06:54.536Z · LW(p) · GW(p)
if you knocked off that "-and-counting" portion of the length.
No worries; I'm not expecting I'll have the opportunity to even try to submit it to an editor before I finish.
I would not try to sell it as a RationalFic. Sell it as a story.
Again, no worries; I only mentioned it being a RatFic to tailor my post to the audience of this particular community, and would similarly tailor it as, say, "SF" or "hard SF" to people who are more familiar with those terms.
a summary of the story.
Initial thought on a generic pitch: "Present-day guy wakes up in the future, gets turned into a talking rabbit, and tries to figure out what the bleep happened to the world while he was out."
Good luck!
Thank you kindly. :)
Keep at it.
No worries on that score - I'm already writing a novel even when I have no measurable hope of getting it on paper, and my related skills are only going to improve from here. (At least until /I/ get hit by a truck and cryo-preserved, but that's another matter... ;) )
Replies from: Vulture↑ comment by Vulture · 2014-10-20T18:36:34.879Z · LW(p) · GW(p)
a summary of the story. Initial thought on a generic pitch: "Present-day guy wakes up in the future, gets turned into a talking rabbit, and tries to figure out what the bleep happened to the world while he was out."
You might already know this, but just to be sure: that there is a synopsis, not a summary.
↑ comment by Baughn · 2014-10-20T11:11:18.634Z · LW(p) · GW(p)
Um - which one, precisely? I might want to read it.
Replies from: DataPacRat↑ comment by DataPacRat · 2014-10-20T11:36:12.977Z · LW(p) · GW(p)
S.I., which can be read and commented on starting at https://docs.google.com/document/d/1AU8o3wSAiufh-Eg1FtL-6656dNvbCFILCi2GbeESsb4/edit . (I plan on eventually giving it a permanent home at http://www.datapacrat.com/SI/ , but I'm currently focusing on writing the thing.)
comment by zedzed · 2014-10-20T09:04:48.853Z · LW(p) · GW(p)
To what degree can I copy/paste from Google Docs when creating an article?
Edit: Google Docs -> article is sketchy, though not impossible if you're willing to put in time reformatting.
Followup: are articles usually written in the editor that comes up when you click "create a new article"?
Replies from: Emile, DanielLC, sixes_and_sevens↑ comment by Emile · 2014-10-20T12:14:13.395Z · LW(p) · GW(p)
Depends on what you want to do: if you want to keep your Google Doc formatting (including which lines are titles, bulleted lists, links, etc.), then your result will probably look weird and ugly on LessWrong.
The best approach would be to copy-paste from the Google Doc but to paste with Ctrl-Shift-V (or equivalent), which in most browsers pastes the raw text, and then redo the necessary formatting in the LW article editor. This will be a bit of a pain for links, bolded/italicized parts, quotes, etc., since you'll have to redo them (so it's best not to do them in the first place in Google Docs).
↑ comment by sixes_and_sevens · 2014-10-20T09:18:39.266Z · LW(p) · GW(p)
What are the problems that motivate you to ask this question? Formatting errors? Repetitive strain injury from clicking?
Replies from: zedzed↑ comment by zedzed · 2014-10-20T10:19:14.509Z · LW(p) · GW(p)
No problems. I just have a strong preference for composing in Google Docs, but I'm unsure how well it's going to transfer.
Also, because Google Docs lends itself to collaboration exceptionally well, being able to go Google Docs -> LW article smoothly has implications.
Replies from: ZankerH↑ comment by ZankerH · 2014-10-20T10:32:42.283Z · LW(p) · GW(p)
As long as you're familiar with Markdown, you're probably better off running it through a plaintext editor first to eliminate any possibility of formatting errors.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-20T12:07:04.912Z · LW(p) · GW(p)
LW comments are in Markdown, but Discussion and Main posts are in some form of HTML.
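If redoing the formatting by hand gets tedious, scripting the conversion is another option. A minimal sketch, assuming pandoc plus the pypandoc wrapper are installed, and a hypothetical draft.html exported from the Google Doc (this isn't a tested LW workflow, just the general idea):

```python
# Sketch: convert between the formats LW uses, via pandoc.
# Assumes pandoc is installed and the pypandoc wrapper is available;
# "draft.html" is a hypothetical HTML export of the Google Doc.
import pypandoc

with open("draft.html") as f:
    html = f.read()

# HTML -> Markdown, for pasting into an LW comment:
markdown = pypandoc.convert_text(html, "markdown_strict", format="html")

# Markdown -> HTML, for pasting into the Discussion/Main post editor:
html_out = pypandoc.convert_text(markdown, "html", format="markdown_strict")

print(markdown[:500])  # eyeball the result before pasting
```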
comment by Scott Garrabrant · 2014-10-23T17:54:28.930Z · LW(p) · GW(p)
I posted a new math puzzle. I really like this one.
comment by Elo · 2014-10-23T00:22:30.543Z · LW(p) · GW(p)
Is there a part of the Sequences that discusses celebrating failures? Or acknowledging failures?
Replies from: polymathwannabe, Wes_W↑ comment by polymathwannabe · 2014-10-23T01:20:08.305Z · LW(p) · GW(p)
Only acknowledgment:
↑ comment by Wes_W · 2014-10-23T08:27:28.429Z · LW(p) · GW(p)
The Sin of Underconfidence seems relevant, although it takes a slightly different angle on the topic.
comment by Metus · 2014-10-21T20:47:18.984Z · LW(p) · GW(p)
Finding out where to donate is exhausting.
There are a couple of organisations affiliated with LW, or organisations inspired by the same memespace. A far-from-exhaustive list: CFAR, MIRI, FHI, GiveWell. Which ones did I forget? Further, there are more traditional, gigantic organisations like the various organs of the UN or the Catholic Church. Finally, there are organisations like Wikipedia or the Linux Foundation. In this jungle, how should I find out where to donate my personal marginal monetary unit?
I posit that I should not. In no possible way am I qualified to judge that, but I know just enough economics to claim that a mild amount of diversification should be better in aggregate than any kind of monoculture. GiveWell does some of this work of evaluating charities, but if everyone donated to GiveWell instead of to other charities, I am sure those other causes would suffer quite a bit. Or is GiveWell intended as the universal charity, and I should just not worry about where exactly my money will go, except for the eventual internet outrage?
The dream is a one-click solution: This is how much money I am willing to give, have an organisation take it and distribute optimally relative to some chosen measure. Is GiveWell this?
Replies from: Evan_Gaensbauer, ChristianKl↑ comment by Evan_Gaensbauer · 2014-10-22T02:23:17.139Z · LW(p) · GW(p)
Disclosure: I made a $1000 unrestricted donation to Givewell in June 2014.
Givewell's donation portal allows you to donate to any of their three currently top recommended charities, or to donate to Givewell directly. If you donate to Givewell directly, one of two things can happen.
i) You make a donation which restricts Givewell to allotting that money to one of its top recommended charities as it sees fit, to have the financial needs of those organizations met.
ii) You make an unrestricted donation, which allows Givewell to use your donation to support its own operating costs. Since Givewell apparently receives sufficient funding to do their typical work, your donation, i.e., the marginal dollar, effectively funds the Open Philanthropy Project, formerly Givewell Labs. This is Givewell's joint investigative research venture with Good Ventures, a foundation worth hundreds of millions of dollars; their research right now is into global catastrophic risks, policy reform, and improving scientific research. This is the ambitious research the rationality community looks forward to, and it was profiled as such by Holden Karnofsky at the 2014 Effective Altruism Summit.
↑ comment by ChristianKl · 2014-10-22T18:13:29.595Z · LW(p) · GW(p)
GiveWell does some of this work of evaluating charities, but if everyone was donating to GiveWell instead of some to other charities I am sure those other causes would suffer quite some.
Following the Kantian maxim isn't good in this case. In effective altruism there's a concept called "room for funding". If you had $10 billion and were seeking a target, GiveWell wouldn't be able to use that money as effectively as it uses a marginal dollar at the moment.
At the present moment, simply going with GiveWell is a good option if you don't want to spend much time. They provide you with proven causes that can put your money to good use.
It's also possible that you see something in your community that would get done if there were funding, but nobody has stepped up to pay the bill. Paying for an entry on meetup.com for a local effective altruism group might be an example of that category.
comment by solipsist · 2014-10-21T19:18:46.579Z · LW(p) · GW(p)
Does anyone know of a compelling fictional narrative motivating the CHSH inequality or another quantum game?
I'm looking for something like:
Earth's waging a two-front war on opposite ends of the galaxy. The aliens will attack each front with either Star Destroyers or Nebula Cleansers, randomly and independently. The generals of the eastern and western fronts must coordinate their defense plans, or humanity will fall.
There are two battle plans: Alpha and Bravo. If the aliens at either front attack with Star Destroyers, the generals must both choose the same battle plan. If, however, both fronts are attacked with Nebula Cleansers, the generals must choose opposite battle plans.
The emergency battle plans from Earth have been sent at 99% of the speed of light in opposite directions to the two fronts, hundreds of light years away. The plans will arrive with mere days to spare -- there will be no time for the generals to coordinate with each other. If the two fronts' battle plans are classical, what are the best odds of the generals coordinating and saving Earth? What if the plans contain quantumly entangled particles?
(only, hopefully, with deeper characters, etc.)
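For anyone who wants the numbers behind the punchline, here's a minimal sketch (Python; my own encoding of the standard CHSH game, with the textbook measurement angles taken as given rather than derived):

```python
# The game above as the standard CHSH game: the attack at each front is a
# bit (0 = Star Destroyers, 1 = Nebula Cleansers), a plan is a bit
# (0 = Alpha, 1 = Bravo), and the generals win iff
# plan_east XOR plan_west == attack_east AND attack_west.
from itertools import product
from math import cos, sin, pi

# Classical plans: each general's choice is a fixed function of the attack
# bit they see, so there are only 4 strategies per general; shared
# randomness can't beat the best deterministic pair. Enumerate all 16.
strategies = list(product([0, 1], repeat=2))  # (plan if bit 0, plan if bit 1)
best_classical = max(
    sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1)) / 4
    for a in strategies for b in strategies
)
print(best_classical)  # 0.75 -- no classical plan wins more than 3 rounds in 4

# Quantum plans: the generals carry halves of an entangled pair and measure
# at the standard CHSH angles; matching outcomes occur with probability
# cos^2 of the angle difference, so every attack pattern is won with
# probability cos^2(pi/8).
east_angles = [0, pi / 4]        # measurement angle given the attack bit
west_angles = [pi / 8, -pi / 8]
quantum = sum(
    (cos if (x & y) == 0 else sin)(east_angles[x] - west_angles[y]) ** 2
    for x in (0, 1) for y in (0, 1)
) / 4
print(quantum)  # ~0.8536
```

So the entangled plans save Earth about 85% of the time versus 75% classically -- exactly the kind of margin a story could hang on.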
comment by torekp · 2014-10-20T21:24:47.079Z · LW(p) · GW(p)
Topical (well, kinda) and hilarious: Welcome to Life: the singularity, ruined by lawyers
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-10-21T03:38:34.751Z · LW(p) · GW(p)
comment by [deleted] · 2014-10-28T07:51:33.981Z · LW(p) · GW(p)
Thought of the day: I think mathematical-Platonist "discovery" is what some form of mathematical-constructivist "computation", most likely a stochastic search problem, feels like from the inside. After all, our intellectual faculties were tuned by evolution to locate physically real objects in physically real spaces, so repurposing the same cognitive machinery for "locating" an object for an existence proof would feel like locating an object in a space, even if the "space" and "object" are just mental models and never really existed in any physical sense.
comment by Ixiel · 2014-10-23T19:51:50.554Z · LW(p) · GW(p)
Last time I asked there was no way to spend money to get the main sequences in a neatly bound book. I suspect this is still the case. Would anyone be willing to make this happen for money? I don't know what all is required, but I suspect some formatting and getting the ok from EY. I want two for myself (one for the shelf and one to mark all to hell) and a few for gifts, so some setup where I can buy as needed is preferable (like Lulu.com but I'm not picky about brand) and printed-up stapled pages don't work. Maybe $100 for the work and $100 to EY/Miri? Flexible on price, and if that's way off no offense intended. And of course if not being on dead trees was a principled decision I respect that.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-10-23T21:18:33.793Z · LW(p) · GW(p)
There seems to be an effort underway to get the sequences into a book format which includes an editor going over everything.
http://lesswrong.com/lw/h7t/help_us_name_the_sequences_ebook/
http://lesswrong.com/lw/jc7/karma_awards_for_proofreaders_of_the_less_wrong/
The quality of the project is likely higher than someone doing $100 of work to compile a book for Lulu.
The core dilemma is whether it's worthwhile to wait for that project to come to a conclusion, or whether it's better to create a separate version from the one MIRI is working on. The sequences are licensed as CC-BY, so there's nothing stopping anybody from making a book out of them.
And of course if not being on dead trees was a principled decision I respect that.
I highly doubt that. For the kind of ethical framework that Eliezer presents such a decision would be strange.
Replies from: Ixiel↑ comment by Ixiel · 2014-10-23T21:24:10.495Z · LW(p) · GW(p)
Ha, shoulda checked my original request 'cause that sounds really familiar now that you say it. I hate duplicating efforts more than I hate waiting, and if more informed people than I think that it needs editing I'd believe it. So many Christmas presents though... :)
comment by Paul Crowley (ciphergoth) · 2014-10-23T19:07:51.288Z · LW(p) · GW(p)
Steve Fuller decides to throw away the established meaning of the phrase "existential risk" and make up one that better suits his purposes, in Is Existential Risk an Authentic Challenge or the Higher Moral Evasion?. I couldn't finish it.
Replies from: ChristianKl, gjm↑ comment by ChristianKl · 2014-10-25T18:37:25.774Z · LW(p) · GW(p)
I couldn't finish it.
Then why do you post the link to it?
Replies from: satt↑ comment by satt · 2014-10-27T04:48:19.689Z · LW(p) · GW(p)
Brainstorming possible reasons off the top of my head:
- attempting to compensate for bad feelings left by the article, by soliciting sympathy/agreement/commiseration about how the article's crap
- similarly but more broadly, initiating a round of social bonding based on booing the article (and/or Steve Fuller)
- making a conveniently Google-able note of the article for future personal reference
- publicly warning the rest of us of a bad article which might call for a response
- contributing to LW's collective memory (in case e.g. a broader discussion of Steve Fuller's work kicks off here in future)
↑ comment by ChristianKl · 2014-10-27T15:21:49.544Z · LW(p) · GW(p)
Brainstorming possible reasons off the top of my head:
The fact that I can brainstorm possible reasons doesn't imply that I know the reason. Asking people for the reasons for their actions is helpful for rational discourse.
Replies from: satt↑ comment by gjm · 2014-10-23T19:29:43.220Z · LW(p) · GW(p)
Steve Fuller writes a wrongheaded fuzzyminded self-indulgent article full of bloviating wankery. Also in today's news: Thomas Keller cooks a tasty meal, Bill Gates gives some money to charity, and a Republican congressman criticizes Barack Obama.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-26T22:43:26.455Z · LW(p) · GW(p)
Downvoted for abysmally low level of discourse.
Replies from: gjm↑ comment by gjm · 2014-10-27T01:21:04.772Z · LW(p) · GW(p)
I completely agree that my comment was of low quality and its present score of -3 seems pretty reasonable.
I'm worried that one aspect of its intent may have been misunderstood. (It's entirely my fault if so.) Specifically, it could be read as mocking ciphergoth for, I dunno, not appreciating how consistently useless Steve Fuller's writings are or something. I would like to put it on the record that nothing like that was any part of my intention. ciphergoth, if you happen to be reading this and read my earlier comment as hostile, please accept my apologies for inept writing. My intended tone was more like "yeah, I agree, isn't it terrible? But he's always like that" rather than "duh, so what else is new? what kind of an idiot are you for thinking he might be worth bothering with?".
For the avoidance of doubt, this isn't an attempt to argue against the downvotes -- they're deserved, it was a crappy comment, and I'm sorry -- but merely to clear up one particular misunderstanding that, if it's occurred, might have worse consequences than losing a few karma points. (Namely, annoying or even upsetting someone I have no wish to annoy or upset.)
shminux: I'm not sure that "low level of discourse" actually tells me anything -- pretty much every good reason for downvoting a comment comes down to "low level of discourse" in some sense. In this instance I'm pretty confident I grasp all the things that were wrong with what I wrote, but if you were intending to provide useful feedback (rather than, e.g., to say "boo!" a bit louder than a downvote does on its own) then a little more specificity would have gone a long way.
Replies from: shminux↑ comment by Shmi (shminux) · 2014-10-27T15:50:16.007Z · LW(p) · GW(p)
feedback: "wrongheaded fuzzyminded self-indulgent article full of bloviating wankery" is a stream of content-free insults and is out of place on this site. (tumblr would be a better fit.) Your second sentence was not much better.
comment by rkdj · 2014-10-21T09:15:19.533Z · LW(p) · GW(p)
What does it mean to optimize the world, assuming the Many Worlds theory is true?
Replies from: lmm↑ comment by lmm · 2014-10-21T11:49:51.920Z · LW(p) · GW(p)
Increase the probability-weighted average of your utility function over Everett branches.
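Spelled out (my formalization, not anything lmm committed to): with p_i the Born-rule weight of Everett branch w_i downstream of your action, pick the action that maximizes

```latex
\mathbb{E}[U] = \sum_i p_i \, U(w_i)
```

i.e., ordinary expected utility, with branch weights playing the role of probabilities.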
Replies from: philh, Jinoc
comment by wobster109 · 2014-10-20T22:30:54.957Z · LW(p) · GW(p)
This interesting article turned up on Wait But Why: http://waitbutwhy.com/2014/10/religion-for-the-nonreligious.html#comment-264276
A lot of it reads like stuff on here. Here's a quote: "On Step 1, I snap back at the rude cashier, who had the nerve to be a dick to me. On Step 2, the rudeness doesn’t faze me because I know it’s about him, not me, and that I have no idea what his day or life has been like. On Step 3, I see myself as a miraculous arrangement of atoms in vast space that for a split second in endless eternity has come together to form a moment of consciousness that is my life…and I see that cashier as another moment of consciousness that happens to exist on the same speck of time and space that I do. And the only possible emotion I could have for him on Step 3 is love."