Open thread Jan. 5-11, 2015
post by polymathwannabe · 2015-01-05T12:48:41.845Z · LW · GW · Legacy · 152 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments sorted by top scores.
comment by DataPacRat · 2015-01-05T19:46:16.070Z · LW(p) · GW(p)
I've had a thought about a possible replacement for 'hyperbolic discounting' of future gains: what if, instead of using a simple time series, the discount used a metric based on how similar your future self is to your present self? As your future self develops different interests and goals, your present goals would tend to go less fulfilled; the more your future self changed, the less invested you would be in helping that future iteration achieve its goals.
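(A minimal sketch of what such similarity-weighted discounting might look like, in Python; the similarity score, its [0, 1] scale, and the linear weighting are all illustrative assumptions, not part of the original idea:)

    # Hypothetical sketch: discount future value by predicted self-similarity
    # rather than by elapsed time. All names and scales are illustrative.

    def hyperbolic_discount(value: float, t: float, k: float = 1.0) -> float:
        """Standard hyperbolic discounting, shown for comparison."""
        return value / (1.0 + k * t)

    def similarity_discount(value: float, similarity: float) -> float:
        """Discount by predicted self-similarity in [0, 1] instead of by delay.

        similarity = 1.0 -> future self shares all present goals (no discount);
        similarity = 0.0 -> a 'completely different person' (full discount).
        """
        assert 0.0 <= similarity <= 1.0
        return value * similarity

    # If I expect to share only 60% of my present goals in five years, $100 to
    # that future self fulfils about $60 worth of my present goals.
    print(similarity_discount(100.0, 0.6))  # 60.0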
Given a minimal level of identification with 'completely different people', this could even be extended to ems, who can make copies of themselves and edit those copies, to provide a more coherent set of values about which future selves to value more than others.
(I'm going to guess that Robin Hanson has already come up with this idea, and either worked out all its details or thoroughly debunked it, but I haven't come across any references to that. I wonder if I should start reading that draft /before/ I finish my current long-term project...)
Replies from: James_Miller, Unnamed, lmm↑ comment by James_Miller · 2015-01-07T04:23:48.581Z · LW(p) · GW(p)
Consistent with why children care so little about their future.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2015-01-07T04:29:41.837Z · LW(p) · GW(p)
DataPacRat's comment together with your observation strikes me as the most interesting thing I've seen in an open thread in a while. I'm not convinced that the idea is in any sense correct, or even a good one, but the originality is striking. It is easy to come up with obviously wrong ideas that are original; coming up with an original (I think) idea that is this plausible is more impressive, and your observation makes it more striking.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2015-01-07T10:32:13.381Z · LW(p) · GW(p)
This echoes my thoughts.
↑ comment by Unnamed · 2015-01-07T19:30:59.892Z · LW(p) · GW(p)
Shane Frederick had the idea that hyperbolic discounting might occur because people identify less with their future selves. He actually wrote his dissertation on this topic, using Parfit's theory of personal identity (based on psychological continuity & connectedness). He ran a few empirical studies to test it, but I think the results weren't all that consistent with his predictions and he moved on to other research topics.
Replies from: DataPacRat↑ comment by DataPacRat · 2015-01-07T23:01:14.859Z · LW(p) · GW(p)
Thank you /very/ much for that link. The first two sections do a much better job explaining the general background and existing thinking around my idea than I'd be able to write on my own.
I am, however, less confident that the study described in the third section does a very good job of disproving the correlation between amount of selfhood and future discounting. Among other reasons, the paper itself posits that most people likely subscribe to the "simple" theory of identity instead of the "complex" one under discussion.
As a third thought, reading the paper has suggested a new variation of my original thought. Perhaps the correlation exists, but I have causation backwards: future discounting could be, in fact, an expression of how much people consider their future selves to be dissimilar to their present selves. At present, I'm not sure what it would take to figure out which version of this idea comes closer to being true, and that's even assuming that the correlation exists in the first place; but it seems worth further consideration, at least.
Replies from: DataPacRat↑ comment by DataPacRat · 2015-01-07T23:44:30.858Z · LW(p) · GW(p)
An idea for an experiment:
Sub-part one: Ask participants various questions to determine the minimum value of x for them to agree to the former option in, "Would you rather we give $100+x to a perfect stranger, or $100 to you?". (Initial prediction: values will vary widely, from particularly generous people with an x of -$99.99, to particularly selfish people with an x of infinity.)
Sub-part two: Ask participants various questions to determine the minimum value of y for them to agree to the former option in, "Would you rather we give $100+y to you in 5 years, or $100 to you now?".
Initial prediction: x and y will be closely correlated; the more a person is willing to give money to perfect strangers, the more they'll be willing to give money to their future selves.
Possible variation: Change 'perfect stranger' in sub-part one to people with varying levels of closeness to the participant: distant acquaintance, close friend, family member.
Possible variation: Change '5 years' in sub-part two to different time-scales.
If both variations are included: Then, possibly, the data may converge into a simple shape. Possible complications for that shape may arise from how likely the participant feels they are to still be alive in n years, and from how strongly they trust the experimenters to actually distribute the money.
Additional complications: I am completely unaffiliated with any university or other educational institution, have never performed a psychological experiment, and have no budget to perform any such experiment.
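(If the experiment above were ever run, a minimal analysis sketch might look like the following; all data points are invented for illustration:)

    # Hypothetical analysis for the proposed study: does the premium x demanded
    # to favour a stranger correlate with the premium y demanded to favour
    # one's future self? All numbers below are invented.
    from statistics import correlation  # Python 3.10+

    # (x, y) pairs in dollars, one per participant.
    participants = [(5, 10), (50, 80), (0, 5), (200, 150), (20, 40)]

    xs = [x for x, _ in participants]
    ys = [y for _, y in participants]

    # The prediction is a strong positive Pearson correlation.
    print(f"Pearson r = {correlation(xs, ys):.2f}")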
Replies from: Username↑ comment by Username · 2015-01-08T20:13:01.260Z · LW(p) · GW(p)
Interesting to think that empathy for others might be the same mechanism by which we make long-term decisions.
Closely related to this whole discussion is Self-empathy as a source of "willpower".
Replies from: tut
comment by Dahlen · 2015-01-07T06:19:36.854Z · LW(p) · GW(p)
There seem to be two broad categories of discussion topics on LessWrong: topics that are directly and obviously rationality-related (which seems to me to be an ever-shrinking category), and topics that have come to be incidentally associated with LessWrong to the extent that its founders / first or highest-status members chose to use this website to promote them -- artificial intelligence and MIRI's mission along with it, effective altruism, transhumanism, cryonics, utilitarianism -- especially in the form of implausible but difficult dilemmas in utilitarian ethics or game theory, start-up culture and libertarianism, polyamory, ideas originating from Overcoming Bias which, apparently, "is not about" overcoming bias, NRx (a minor if disturbing concern)... I could even say California itself, as a great place to live in.
As a person interested in rationality and little else that this website has to offer, I would like for there to be a way to filter out cognitive improvement discussions from these topics. Because unrelated-and-affiliated memes are given more importance here than related-and-unaffiliated memes, I have since begun to migrate to other websites* for my daily dose of debiasing. Obviously it would be all varieties of rude of me to tell everybody else "stop talking about that stuff! Talk about this stuff instead... while I sit here in the audience and enjoy listening to you speaking", and obviously the best thing I could do to further my purpose of seeing more rationality material on LessWrong would be to post some high-quality rationality material -- which I do plan on doing, but I still feel that my ideas have some maturing and polishing to undergo before they're publishable. So what I intend to do with this post is to poll people for thoughts and opinions on this matter, and perhaps re-raise the old discussions about revamping the Main/Discussion division of LessWrong.
Also, for what it's worth, it seems to me that most of the bad PR LessWrong gets comes from those topics that I've mentioned in the first paragraph being more visible to outsiders than the stated mission of "refining the art of human rationality". People often can't get beyond the peculiarities of Bayland to the actual insights that we value this community most for -- and to be honest, if I hadn't read the Sequences first and instead got hit in the face with persuasions to donate to charity or to believe in x-risk or to get my head frozen upon my first visit to LW, I'd have politely "No-Thank-You"ed the messengers like I do door-to-door salesmen. To outsiders not predisposed to be friendly to transhumanism & co. through their demographics, to conflate the two sides of LessWrong is to devalue the side that champions rationality. Unless, of course, that was the point all along and LessWrong has less intrinsic value for the founders than its purpose as an attractor of smart, concerned young people.
* notably SSC, RibbonFarm, TheLastPsychiatrist, and even highly biased but well-written blogs coming from the opposite side of the political spectrum -- hopefully for our respective biases to cancel out and for me to be left with a more accurate worldview than I started out with. (I don't read political material that I agree with, and to be honest it would be difficult to even come across texts prioritizing the same issues that I care about. I sometimes feel like I'm the first one of my political inclination...) I'm not necessarily endorsing any of these for anyone else (except Scott, read Scott, he's amazing); it's just that that's where I get my food for thought. They raise issues and put a new spin on things that don't usually occur to me.
Replies from: shminux, jkaufman, CBHacking, Capla↑ comment by Shmi (shminux) · 2015-01-10T03:21:57.666Z · LW(p) · GW(p)
With most of the main contributors having left and no new ones emerging (except for an occasional post by Swimmer963 and So8res discussing MIRI research), this forum appears, unfortunately, to have jumped the shark. It is still an OK forum to hang out on, but don't expect great things. Unless you are the one producing them.
Replies from: JoshuaZ↑ comment by jefftk (jkaufman) · 2015-01-07T17:31:51.044Z · LW(p) · GW(p)
As a person interested in rationality and little else that this website has to offer
I'm confused why you categorize SSC as appropriate for debiasing but not LW; doesn't SSC have as much of a mix of non-rationality material as LW? Is it a mix you like better? Do you just enjoy SSC for other reasons?
Replies from: Dahlen↑ comment by Dahlen · 2015-01-08T05:42:38.402Z · LW(p) · GW(p)
Because
1) Scott posts about politics and that's one of the muddiest areas in debate -- and with the moratorium on politics around here, one could use some good insights on controversial issues somewhere else.
2) LessWrong is a collection of people, but Scott is one guy -- and as far as I've seen he's one of the most level-headed and reasonable people in recent history, two traits which I consider indispensable to rationality. And he posts a lot about what constitutes reasonableness and how important it is. LessWrong exhibits this as well, naturally, but there, as opposed to here, there aren't tendencies in the opposite direction that dilute the collective wisdom of the place, so to speak. (Then again, I don't read comments.)
3) I just happen to really like his writing style, jokes, way of thinking, just about everything.
but not LW
No. Never said that. It's just that other sites get updated with material of interest to me more often, whereas the best stuff around here is already old for me.
↑ comment by CBHacking · 2015-01-07T13:35:15.899Z · LW(p) · GW(p)
In theory, the main or promoted posts should be more focused on rationality-for-its-own-sake topics, while Discussion (and especially many of the more open threads therein, the literal Open Threads most of all) will contain a lot of memes of interest to the rationalist community without actually being about rationalism per se.
On the other hand, the rate of release of rationality material here isn't terribly high, and some of it does get mixed in with affiliated-but-unrelated topics.
Replies from: Dahlen↑ comment by Dahlen · 2015-01-09T12:33:45.315Z · LW(p) · GW(p)
The thing is, Main is like that as well. I went back some three pages on Main to check, and there were a few rationality-related articles, some periodic posts (the survey, rationality quotes), and a whole lot more posts relating to organizations of interest to the people on LessWrong who form its central real-life social circle, including reports on recent activity and calls for donations or for paid participation in events.
Besides, effective altruist organizations have recently been included in the Rationality Blogs list in the sidebar. (And there was this comment of Eliezer's on a post which, if I remember correctly, called for help in some matter. He said he's not going to devote time -- five minutes, an hour, I don't remember the interval he gave -- to someone who had donated less than $5000 to charity. To get some people out of their American bubble, as a comparison, that's more than my current yearly income... likely much more. Needless to say, I found it rather unpalatable.)
And there's the higher bar for posting in Main... Unless you write something obviously good enough to at least break even in terms of karma, you get what is technically a punishment of a few tens of negative karma points for having dared to post there. (I think, at least, that that's how the karma multiplier works.) And people are going to respond more positively to common in-group-y topics. So, if anything, non-affiliated topics are more likely to be found in Discussion.
Replies from: CBHacking↑ comment by CBHacking · 2015-01-10T02:39:02.357Z · LW(p) · GW(p)
Eh. Difference between theory and practice, I guess. I too wish there was more actual rationality stuff coming out; the archive is big, but it's hard to engage with people on those topics now and there's always more to cover. I don't mind the side topics so much as you seem to, but I would like to see more of the core topic.
As for the charity thing, that's EY's right to exercise if he so chooses, but if income where you live is so low that $5000 is more than your annual income, or even if it's just temporarily more than that because you're a student or something (I made about that much per year on summer jobs my first two years of university), then I really doubt he would hold you to that if you were to approach him.
On the other hand, EY isn't anywhere near a top contributor to LW at this point in time; I barely see him comment anywhere on the site anymore. That's probably part of the reason for the dearth of good rationality posts, but it also means that his opinions will have less impact on the site as a whole, at least for a while.
↑ comment by Capla · 2015-01-08T19:09:17.509Z · LW(p) · GW(p)
This is something that I've noticed and been concerned with. I think this is worthy of a top level discussion post.
I think part of the problem is that rationalism is harder than weird and interesting ideas like transhumanism: anyone can dream about the future and fiddle with the implications, but it takes significant study and thought to produce new and worthwhile interventions for how to think better.
My feeling is that Main is for rationality stuff and Discussion is for whatever the members of this community find interesting, but since we don't have strong leaders who are doing the work and producing novel content on rationality, Main rarely has a new post, so I at least gravitate to Discussion.
Replies from: Capla↑ comment by Capla · 2015-01-08T19:24:48.873Z · LW(p) · GW(p)
Also, keep in mind that many of these secondary ideas sprang from rationalist origins. Cryonics is presented as an "obvious" rational choice once you don't let your biases get in the way: you have an expressed desire not to die, and this is the only available option for not dying. Polyamory similarly came about as the result of looking at relationships "with fresh eyes." These secondary topics gain prominence because they are (debatably) examples of rationality applied to specific problems. They are the object level; "Rationality" is the meta level. But, like I said, it's a lot easier to think at the object level, because that can be visualized, so most people do.
Replies from: Dahlen, Jiro↑ comment by Dahlen · 2015-01-09T04:25:43.478Z · LW(p) · GW(p)
From the point of view of someone who doesn't buy into them, I think it's only incidental that those specific positions, and not others, are advocated as a logical consequence of more rational thinking. Had the founders not been American programmers, the "natural and obvious" consequences of their rationalism would have looked very different. My point being that these practices are not at all more rational than the alternatives, and very likely less so. But yeah, if these ideas gain rationalist adherents, then obviously some of the advocacy for them is going to take a rationalist-friendly form, with rationalist lingo and emphasized connections to rationalism.
Replies from: Capla↑ comment by Capla · 2015-01-09T05:11:21.195Z · LW(p) · GW(p)
Just curious, are there any positions which you regard as "a logical consequence of more rational thinking"?
Replies from: Dahlen↑ comment by Dahlen · 2015-01-09T12:00:08.644Z · LW(p) · GW(p)
Yes -- atheism. And by extension disbelief in the supernatural. It's the first consequence of acquiring better thinking practices. However, it is not as if atheism in itself forms a good secondary basis for discussion in a rationalist community, since most of the activity would necessarily take the form of "ha ha, look how stupid these people are!". I would know; been there, done that. But it gets very old very quickly, and besides isn't of much use except for novice apostates who need social validation of their new identities. From that point of view I regard atheism as a solved problem and therefore uninteresting.
Nothing else seems to spring to mind, though -- or at least no positive rather than negative positions on ideological questions. "Don't be a fanatic", "don't buy snake oil", "don't join cults", "check the prevailing scientific paradigms before denying things left and right [evolution, moon landing, the Holocaust, global warming etc.]"... critical thinking 101. Most other beliefs and practices that seem to go hand in hand with rationalism are explainable by membership in this particular cluster of Silicon Valley culture.
Replies from: Lumifer↑ comment by Jiro · 2015-01-08T22:30:56.992Z · LW(p) · GW(p)
this is the only available option to not die.
I don't know, I like the option of locking yourself in a vault when you're about to die so that time travellers can come and rescue you without changing history, since nobody can see into the vault.
Okay, I lied, I don't like that option, but it's not worse than cryonics, and does count as another available option.
Replies from: Capla↑ comment by Capla · 2015-01-08T23:22:09.366Z · LW(p) · GW(p)
I want to emphasize that I neither endorse nor oppose the conclusion that polyamory or cryonics are rational; I just point out that they are included in discussion here, in large part, because of how they impinge, or are presumed to impinge, on rationality.
comment by sixes_and_sevens · 2015-01-05T13:54:42.472Z · LW(p) · GW(p)
This is probably like walking into a crack den and asking the patrons how they deal with impulse control, but...
How do you tame your reading lists? Last year I bought more than twice as many books as I read, so I've put a moratorium on buying new books for the first six months of 2015 while I deplete the pile. Do any of you have some sort of rational scheme, incentive structure or social mechanism that mediates your reading or assists in selecting what to read next?
Replies from: FiftyTwo, polymathwannabe, RolfAndreassen, AlexSchell, Alsadius, ilzolende, lmm↑ comment by FiftyTwo · 2015-01-05T16:36:45.457Z · LW(p) · GW(p)
I've managed to partly transmute my "I want to buy that now" impulse into sending a sample to my Kindle. Then if I never get past the first few pages, I haven't actually spent any money; if I reach the end of the sample and still want to continue, I know I'm likely to keep going.
↑ comment by polymathwannabe · 2015-01-05T14:13:37.816Z · LW(p) · GW(p)
Do you mostly buy physical or digital books?
My digital unread list is hundreds of titles long, and it doesn't bother me much. It pales in comparison to my "read later" bookmark list.
My physical unread pile already occupies half a wall of my room, and just for being there it gives me aesthetic pleasure and bragging privileges. I used to think I should be worried, but I decided to embrace the pile. I guess it still hasn't reached the point where I should really be worried.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-01-05T14:37:23.315Z · LW(p) · GW(p)
A mixture of the two, but there's not really any psychological distinction between them. Once I resolve to read something, it has the same amount of "weight" regardless of the medium.
↑ comment by RolfAndreassen · 2015-01-06T03:42:37.279Z · LW(p) · GW(p)
I wish I had your problem; mine is to find books I want to read. I very often re-read ones that are already on my shelves, for lack of anything new. However, this suggests a possible approach to your issue: When buying a book, check whether you are actually and genuinely excited to read it, so that you will open it the minute it arrives from Amazon. If not - put it in a "maybe later" pile. If it's more a case of "sure, sounds good" or, even worse, "I want to signal having read that", then give it a miss.
If you need to read stuff for work or for Serious Social Purposes like a book club, then treat it like work - set aside a certain time of day or week, and during that time, read.
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-01-06T11:02:43.036Z · LW(p) · GW(p)
I'm not particularly excited to read, say, an intermediate textbook on medical statistics. In spite of this, I'm confident that the world will make more sense after I read it, and I'd like that outcome. This describes my attitude to a significant proportion of the books I intend to read.
This and other interactions have caused me to re-evaluate just how ascetic my reading habits are.
↑ comment by AlexSchell · 2015-01-05T15:19:08.218Z · LW(p) · GW(p)
This was more of a side effect of deciding to pare down my possessions than an intervention specifically aimed at buying fewer books, but I rarely buy books anymore just because I want to read them. I get books on LibGen or at the university library. In the rare event that a book turns out to be a really valuable reference, I may then buy it.
↑ comment by Alsadius · 2015-01-06T02:39:11.485Z · LW(p) · GW(p)
- I read really, really fast. Like, I've finished some of the largest books in print (Atlas Shrugged and A Memory of Light come to mind here) in a single day before.
- The longer my reading list is, the less likely I am to add to it, or at least the less likely I am to take my additions seriously (when I have two items it's "I should read that!" When I have 200, it's "Eh, sounds interesting, might get to it someday").
- My to-read list is about 10 Chrome bookmarks (none short), 20 Amazon wishlist items, plus most of the width of a bookshelf that I've bought and not read, so I'm not sure how well I actually do despite the above.
↑ comment by pan · 2015-01-12T01:14:31.724Z · LW(p) · GW(p)
How did you learn to read so fast?
Replies from: Alsadius↑ comment by Alsadius · 2015-01-12T05:21:33.313Z · LW(p) · GW(p)
I read a lot. (I wish I could give you some actionable advice here, but there's nothing I can point to. I suspect it may be innate?)
I think I'm semi-skimming, subconsciously. I've noticed myself missing descriptions of characters before when something relevant comes up later on. That said, I'm still quite fast with reading things like internet essays, where missing words does hurt your comprehension badly, so I don't think that's all of it.
↑ comment by polymathwannabe · 2015-01-06T04:55:28.560Z · LW(p) · GW(p)
Atlas Shrugged in a day? Boy, do I want to see you take Infinite Jest.
Replies from: Alsadius↑ comment by Alsadius · 2015-01-06T05:19:05.708Z · LW(p) · GW(p)
I have to want to read the book for it to work. Looking up Infinite Jest, I'd probably prefer swallowing barbed wire to reading it.
(Atlas was back during my hardline-libertarian phase, so it was a lot more appealing. It also helped that the day in question was the day of the 2003 blackout, so there was nothing to do but read).
↑ comment by ilzolende · 2015-01-06T01:48:06.965Z · LW(p) · GW(p)
My library system lets you request books from anywhere in the county from your home computer. I put the book on hold, and then I get an e-mail when it arrives. Also, when a book isn't in the library system, I'll often buy an ebook edition or wait for it to enter the system.
Replies from: palladias↑ comment by palladias · 2015-01-06T15:09:20.180Z · LW(p) · GW(p)
Yup! Also a put-it-on-hold person. I always wind up reading library books first because they have to be returned, and it makes it easier for me to carve out the reading time I want to carve out, because the library book has a deadline.
↑ comment by lmm · 2015-01-06T20:19:18.017Z · LW(p) · GW(p)
For physical books I hold myself to a "half-read" rule. That still means a growing collection, mind. I prune aggressively, but only from things I've already read, which unfortunately means I have a rump of books that I suspect are mediocre and therefore don't particularly want to read, but don't feel justified in getting rid of -- whereas I get rid of books that are probably better than that if I've read them and don't think they're good enough to keep.
I don't have a solution, but thankfully FiftyTwo's approach works now. Hopefully I can slowly shrink my physical pile to the point where it fits on my bookshelves.
comment by James_Miller · 2015-01-05T18:33:31.691Z · LW(p) · GW(p)
Some people think that the universe is fine-tuned for life perhaps because there exists a huge number of universes with different laws of physics, and only under a tiny set of these laws can sentient life exist. What if our universe is also fine-tuned for the Fermi paradox? Perhaps if you look at the set of laws of physics under which sentient life can exist, in a tiny subset of this set you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring. If the natural course of events for sentient life in non-Fermi-tuned universes is for spacefaring civilizations to expand at nearly the speed of light as soon as they can, consuming all the resources in their path, then most civilizations at our stage of development might exist in Fermi-tuned universes.
Replies from: passive_fist, lmm, Tenoke↑ comment by passive_fist · 2015-01-05T23:07:31.928Z · LW(p) · GW(p)
Well, we can say for sure that in our universe interstellar travel is not hard. It's extremely easy, once you take humans out of the picture. With current technology we have the means to push spacecraft to 60 km/s. This isn't hypothetical tech; it's stuff that's sitting in the shed. At such velocities, craft could traverse the Milky Way galaxy 20 times over during the (current) lifetime of the galaxy (estimated at around 13 billion years). The galaxy is big, but it's not that big, not compared to the time scales involved here.
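(A quick back-of-the-envelope check of that claim, assuming a galactic diameter of roughly 100,000 light-years; the constants are standard approximations:)

    # Rough arithmetic: how long does one galactic crossing take at 60 km/s?
    KM_PER_LY = 9.461e12                    # kilometres per light-year
    diameter_km = 100_000 * KM_PER_LY       # Milky Way is ~100,000 ly across
    seconds_per_year = 3.156e7

    crossing_years = diameter_km / 60.0 / seconds_per_year
    print(f"{crossing_years:.2e} years per crossing")              # ~5.0e8 years
    print(f"{13e9 / crossing_years:.0f} crossings in 13e9 years")  # ~26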
Unfortunately this only makes it much more likely that the second possibility is true: Evolution of AIs that spread outward and colonize the galaxy must be extremely unlikely.
Replies from: faul_sname↑ comment by faul_sname · 2015-01-06T01:32:33.909Z · LW(p) · GW(p)
Can the 60 km/s spacecraft in question slow down again? (I honestly don't know.)
↑ comment by lmm · 2015-01-06T20:22:00.419Z · LW(p) · GW(p)
In that case the vast majority of individuals (considered across all universes) would be members of those large spacefaring civilizations, no? In which case, why aren't we?
Replies from: James_Miller↑ comment by James_Miller · 2015-01-06T21:11:57.275Z · LW(p) · GW(p)
Possibly not, if universes fine-tuned for life but not for the Fermi paradox are dominated by paperclip maximizers, or if the post-singularity lifeforms in these universes turn themselves into something we wouldn't consider "individuals" while also preventing new civilizations from arising.
Replies from: lmm↑ comment by lmm · 2015-01-06T22:47:16.908Z · LW(p) · GW(p)
It only takes a few universes where that doesn't happen to mess with those numbers. Or to put it another way, fine-tuning for the existence of individuals seems like a smaller amount of fine-tuning than fine-tuning for the Fermi paradox.
Replies from: James_Miller↑ comment by James_Miller · 2015-01-06T23:13:29.000Z · LW(p) · GW(p)
In universes not fine-tuned for the Fermi paradox, the more fine-tuned for life the universe is, the sooner some civilization will arise that expands at the maximum possible speed, devouring all the resources in its expansion path, which limits the number of civilizations like ours that can arise in any universe not fine-tuned for the Fermi paradox. Part of being fine-tuned for life might, therefore, be being fine-tuned for the Fermi paradox. (But you are raising excellent counterarguments to an issue I greatly care about, so thanks!)
comment by [deleted] · 2015-01-06T19:29:13.323Z · LW(p) · GW(p)
I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014, a good number of observations threw some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:
Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.
Furthermore, a cold spot extends over a patch of sky that is much larger than expected.
The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.
“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.
... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale than we can observe. ...
“Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting,” says Professor Efstathiou.
http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe
We are also getting a better look at galaxies at greater distances, thinking they would all be young galaxies, and finding they are not:
The finding raises new questions about how these galaxies formed so rapidly and why they stopped forming stars so early. It is an enigma that these galaxies seem to come out of nowhere.
http://carnegiescience.edu/news/some_galaxies_early_universe_grew_quickly
The newly classified galaxies are striking in that they look a lot like those in today's universe, with disks, bars and spiral arms. But theorists predict that these should have taken another 2 billion years to begin to form, so things seem to have been settling down a lot earlier than expected.
B. D. Simmons et al. Galaxy Zoo: CANDELS Barred Disks and Bar Fractions. Monthly Notices of the Royal Astronomical Society, 2014 DOI: 10.1093/mnras/stu1817
http://www.sciencedaily.com/releases/2014/10/141030101241.htm
The findings cast doubt on current models of galaxy formation, which struggle to explain how these remote and young galaxies grew so big so fast.
http://www.nasa.gov/jpl/spitzer/splash-project-dives-deep-for-galaxies/#.VBxS4o938jg
It also seems we don't have to look so far away to find evidence that galaxy formation is inconsistent with the Big Bang timeline.
If the modern galaxy formation theory were right, these dwarf galaxies simply wouldn't exist.
Merritt and study lead Marcel Pawlowski consider themselves part of a small-but-growing group of experts questioning the wisdom of current astronomical models.
"When you have a clear contradiction like this, you ought to focus on it," Merritt said. "This is how progress in science is made."
http://mq.edu.au/newsroom/2014/03/11/granny-galaxies-discovered-in-the-early-universe/ http://arxiv.org/abs/1406.1799
Another observation is that lithium abundances are way too low for the theory in other places, not just here:
A star cluster some 80,000 light-years from Earth looks mysteriously deficient in the element lithium, just like nearby stars, astronomers reported on Wednesday.
That curious deficiency suggests that astrophysicists either don't fully understand the big bang, they suggest, or else don't fully understand the way that stars work.
http://news.nationalgeographic.com/news/2014/09/140910-space-lithium-m54-star-cluster-science/
It also seems that structure is continually being discovered at scales larger than the Big Bang is thought to account for:
"The first odd thing we noticed was that some of the quasars' rotation axes were aligned with each other -- despite the fact that these quasars are separated by billions of light-years," said Hutsemékers. The team then went further and looked to see if the rotation axes were linked, not just to each other, but also to the structure of the Universe on large scales at that time.
"The alignments in the new data, on scales even bigger than current predictions from simulations, may be a hint that there is a missing ingredient in our current models of the cosmos," concludes Dominique Sluse.
http://www.sciencedaily.com/releases/2014/11/141119084506.htm
D. Hutsemékers, L. Braibant, V. Pelgrims, D. Sluse. Alignment of quasar polarizations with large-scale structures. Astronomy & Astrophysics, 2014
Dr Clowes said: "While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe. This is hugely exciting -- not least because it runs counter to our current understanding of the scale of the universe.
http://www.sciencedaily.com/releases/2013/01/130111092539.htm
These observations have been made just recently. But it seems that in the 1980s, when I was first introduced to the Big Bang as a child, the experts in the field already knew there were problems with it, and devised inflation as a solution. And today, the validity of that solution is being called into question by those same experts:
In light of these arguments, the oft-cited claim that cosmological data have verified the central predictions of inflationary theory is misleading, at best. What one can say is that data have confirmed predictions of the naive inflationary theory as we understood it before 1983, but this theory is not inflationary cosmology as understood today. The naive theory supposes that inflation leads to a predictable outcome governed by the laws of classical physics. The truth is that quantum physics rules inflation, and anything that can happen will happen. And if inflationary theory makes no firm predictions, what is its point?
http://www.physics.princeton.edu/~steinh/0411036.pdf
What are the odds that 2015 will be more like 2014, when we (again) found larger and older galaxies at greater distances, rather than more like 1983?
Replies from: JStewart↑ comment by JStewart · 2015-01-06T21:14:49.300Z · LW(p) · GW(p)
I think you should post this as its own thread in Discussion.
Replies from: None↑ comment by [deleted] · 2015-01-07T03:18:44.177Z · LW(p) · GW(p)
If that sounds good, please, it'd be great if you could do it. I don't have the status.
The link for the dwarf galaxy article was wrong; it should be:
Thanks.
Replies from: None, JStewart
comment by TrE · 2015-01-05T18:10:41.271Z · LW(p) · GW(p)
Should we have some sort of re-run for the various repositories we have? I mean, there is the Repository repository and it's great for looking things up if you know such a thing exists, but (i) not everyone knows it exists and, more importantly, (ii) while these repositories are great for looking things up, I feel that not much content gets added to them. For example, the last top-level comment in the boring advice repository was created in March 2014.
Since there are 12 repositories linked in the meta repository as of today, I suggest we spend each month of 2015 re-running one of them.
I'm not certain which form these re-runs should take, since IMO, all content should be in one place and I'd like to avoid the trivial inconvenience for visitors clicking on the re-run post and then having to click one more time.
Should there be some sort of re-run of the 12 repositories during 2015, one per month? [pollid:808]
Which form should the re-run have, conditional on there being one? [pollid:809]
Replies from: None, MakoYass↑ comment by mako yass (MakoYass) · 2015-01-12T00:53:41.097Z · LW(p) · GW(p)
By "advice in the comments", you mean new entries to the repositories, right? So you're suggesting that we fragment the repository through a number of separate comment sections, labeled by year, and that is a really awful way of organizing a global repository of timeless articles.
If you're worried about incumbents taking disproportionate precedence in the list (as more salient posts tend to get more attention; more votes; more salience), IIRC, reddits have a comment ordering that's designed to promote posts on merit rather than seniority. If that isn't sufficient to address incumbent bias, then we should probably be talking about building a better one.
Replies from: TrE↑ comment by TrE · 2015-01-12T08:16:58.591Z · LW(p) · GW(p)
I meant, "in the comments of the new article". I'm sorry if that wasn't clear.
The goal was to get some discussion and new advice going, and that's difficult if you just link to the old repository, which means one more click on the way, one trivial inconvenience more.
I had thought about copying all the advice (or the good pieces only) over to the old repository once this one is obsolete, i.e. once the rerun repository for March is posted, and I might do this then, if I find the time.
comment by polymathwannabe · 2015-01-05T21:31:41.255Z · LW(p) · GW(p)
Crazy hypothesis:
If Omega runs a simulation of intelligent agents, presumably Omega is interested in finding out with sufficient accuracy what those agents would do if they were in the real situation. But once we assign a nonzero chance that we're being simulated, and incorporate that possibility into our decision theories, we've corrupted the experiment because we're metagaming: we're no longer behaving as if we were in the real situation. Once we suspect we're being simulated, we're no longer useful as a simulation, which might entail that every simulated civilization that develops simulation theories runs the risk of having its simulation shut down.
Replies from: ike, Vladimir_Nesov, Alsadius, Dagon, MakoYass↑ comment by ike · 2015-01-05T22:10:56.848Z · LW(p) · GW(p)
I suppose the best thing to do is to tell you to shut up now, right?
This (your hypothesis) appears wrong, however. Assuming the simulation is accurate, the fact that we can think about the simulation hypothesis means that whatever is being simulated would also think about it. If there's an accuracy deficiency, it's no more likely to manifest itself around the simulation hypothesis than around any other difference in accuracy.
Although that depends on how we come by the hypothesis. If we come by it the way our world did, which is philosophers and other people making arguments without any evidence, then there's no special reason for us to diverge from the simulated; but if we had evidence (like the kind proposed in http://arxiv.org/abs/1210.1847 or similar proposals), then we would have a reason to believe that we weren't an exact simulation. In that case, we'd also have evidence of the simulation while not having been shut down, so we'd know that your theory is wrong. OTOH, if you're correct, we shouldn't try to test the simulation hypothesis experimentally.
↑ comment by Vladimir_Nesov · 2015-01-05T22:12:27.329Z · LW(p) · GW(p)
PSA: Thinking a thought that might cause you to have never existed might cause you to have never existed. You might think that you are thinking that thought, but that's just how the logically impossible hypothetical of thinking it feels from the inside. Think twice before you hypothetically think it.
(P.S. Noticing that you are certain to be right to worry about it seems to be an example of such a thought, for our world. Like correctly believing anything else that's false in a suitable sense. As far as I know.)
↑ comment by Alsadius · 2015-01-06T02:48:14.517Z · LW(p) · GW(p)
How would you act differently even if we assume that your whole life merely exists inside a simulation? You still have to live the life you've been given - it's not like you can break out of the simulation and go take your real life back. Your actions in the simulation still have their usual effect on the life in the simulation. The only case where it matters is if the simulator wants you to behave in certain ways and will reward you accordingly (either real-you, or by moving you to a nicer simulation), but that's just a different way to talk about religion.
Replies from: ike, g_pepper↑ comment by ike · 2015-01-06T07:11:24.416Z · LW(p) · GW(p)
Imagine that you learn tomorrow that we're in a simulation, because scientists did a test and found a bug in the program. Perhaps you would act differently? Maybe email all your friends about it, head over to lesswrong to discuss it, whatever. These things wouldn't happen in the original.
The main distinction is the way you'd learn about the simulation, like I said in my response.
Replies from: Alsadius↑ comment by Alsadius · 2015-01-06T12:39:33.151Z · LW(p) · GW(p)
Please define the difference between "bug in the simulation" and "previously unknown law of physics".
That said, I do agree in principle. However, simulation theories are sufficiently obvious (at least to creatures that dream/build computers/etc.) that they can't count as corruption - it'd be weirder for a simulated civilization to not have them.
Replies from: ike↑ comment by ike · 2015-01-06T16:47:42.815Z · LW(p) · GW(p)
Please define the difference between "bug in the simulation" and "previously unknown law of physics".
There have been plausible tests proposed that would seem to produce Bayesian evidence of simulation. To give an analogy: if tomorrow you heard a loud voice coming from Mount Sinai reciting the 10 commandments, more of your probability would go to the theory "The Bible is more-or-less true and God's coming back to prove it" than to "there's a law of physics that makes sounds like this one happen at random times". In the same way, there are observations that are strictly more likely to occur if we're in a simulation than if not. There are some proposed in http://arxiv.org/abs/1210.1847 , and other places as well.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T17:37:34.115Z · LW(p) · GW(p)
The same way, there are observations that are strictly more likely to occur if we're in a simulation than if not.
This is not true in general. This is true for some particular kinds of simulations (e.g. your link says "we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization"), but not all of them.
Replies from: ike↑ comment by ike · 2015-01-06T19:27:17.089Z · LW(p) · GW(p)
Let's rephrase: our expectations are different conditioning on simulation than on ~simulation.
The probability distribution of observations over possible simulation types is different from the probability distribution of observations over possible physics laws. If you disagree, then you need to hold that exactly the right kinds of simulations (with opposite effects) have exactly the right kind of probability to cancel out the effects of "particular kinds of simulations". That seems a very strong claim which needs defending. Otherwise, there do exist possible observations which would be Bayesian evidence for simulation.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T19:37:15.883Z · LW(p) · GW(p)
our expectations are different conditioning on simulation than on ~simulation
I don't think mine are.
The probability distribution of observations over possible simulation types is different from the probability distribution of observations over possible physics laws.
That is a content-free statement. You have no idea about either of the distributions, about what "possible simulation types" there might be, or what "possible physics laws" might be.
there do exist possible observations which would be Bayesian evidence for simulation
Well, barring things which actually break the simulation (e.g. an alien teenager appearing in the sky and saying that his parents are making him shut off this sim, so goodbye all y'all), can you give me an example?
Replies from: ike↑ comment by ike · 2015-01-06T19:54:56.450Z · LW(p) · GW(p)
Any of the things proposed in papers with the same aims of the one I linked above. The reason I'm not giving specifics is because I don't know enough of the technical points made to discuss them properly.
I wouldn't be the one making the observations, physicists would, so my observation is "physicists announce a test which shows that we are likely to be living in a simulation" and it gets vetted by people with technical knowledge, replicated with better p-values, all the recent Nobel Physics prize winners look over it and confirm, etc. (Note: I'm explicitly outlawing something which uses philosophy/anthropics/"thinking about physics". Only actual experiments. Although I'd expect only good ones to get past the bar I set, anyway, so that may not be needed.) I couldn't judge myself whether the results mean anything, so I'd rely on consensus of physicists.
Using that observation: are you really telling me that your P(physicists announce finding evidence of simulation| simulation) == P(physicists announce finding evidence of simulation| ~simulation)?
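(To make the Bayesian point concrete, here is a toy update with invented numbers; nothing about the actual probabilities is being claimed:)

    # Toy Bayes update for the simulation question. All numbers are invented.
    prior_sim = 0.01              # prior P(simulation)
    p_e_given_sim = 0.10          # P(vetted announcement | simulation)
    p_e_given_not = 0.001         # P(vetted announcement | ~simulation)

    p_e = p_e_given_sim * prior_sim + p_e_given_not * (1 - prior_sim)
    posterior = p_e_given_sim * prior_sim / p_e
    print(f"P(simulation | announcement) = {posterior:.2f}")  # ~0.50

    # Unless the two likelihoods are exactly equal, the observation moves the
    # posterior, which is all "Bayesian evidence" means here.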
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T19:59:17.604Z · LW(p) · GW(p)
I wouldn't be the one making the observations, physicists would
Ugh, so all you have is an argument from authority? A few centuries ago the scientists had a consensus that God exists. And?
are you really telling me that your P(physicists announce finding evidence of simulation| simulation) == P(physicists announce finding evidence of simulation| ~simulation)?
No, I'm telling you that "evidence of simulation" is an expression which doesn't mean anything to me.
To go back to Alsadius' point, how are you going to distinguish between "this is a feature of the simulation" and "this is how the physical world works"?
Replies from: ike↑ comment by ike · 2015-01-06T20:25:37.669Z · LW(p) · GW(p)
I gave my observation, which is basically deferring to physicists.
"evidence of simulation" may not mean anything to you, but surely "physicists announce finding evidence of simulation" means something to you? Could you give an example of something that could happen where you wouldn't be sure whether it counted as "physicists announce finding evidence of simulation"?
how are you going to distinguish between "this is a feature of the simulation" and "this is how the physical world works"
Right now, as I'm not trained in physics, I'd defer to the consensus of experts. I expect someone who wrote those kinds of papers would have a better answer for you.
Or is your problem of defining "evidence of simulation" something you'd complain about even if real experts used that in a paper?
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T20:39:21.484Z · LW(p) · GW(p)
surely "physicists announce finding evidence of simulation" means something to you?
Yes, it means "somebody wanted publicity" (don't think it would get as far as grants).
is your problem of defining "evidence of simulation" something you'd complain about even if real experts used that in a paper?
Yes, of course. I do not subscribe to the esoteric-knowledge-available-only-to-high-priests view of science.
Replies from: ike↑ comment by ike · 2015-01-06T20:53:26.112Z · LW(p) · GW(p)
Yes, it means "somebody wanted publicity"
Which is why I laid out a bunch of additional steps needed above:
my observation is "physicists announce a test which shows that we are likely to be living in a simulation" and it gets vetted by people with technical knowledge, replicated with better p-values, all the recent Nobel Physics prize winners look over it and confirm, etc.
You seem to be taking parts of my argument out of context.
I do not subscribe to the esoteric-knowledge-available-only-to-high-priests view of science.
Me neither, but I'm trying to use a hypothetical paper as a proxy because I'm not well versed enough to talk about specifics. On some level you have to accept arguments from authority. (Or do you either reject quantum mechanics or have seen evidence yourself?) Imagine that simulation was as well established in physics as quantum mechanics is now. I find it very hard to say that that occurrence is completely orthogonal to the truth of simulation.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T21:18:21.490Z · LW(p) · GW(p)
On some level you have to accept arguments from authority.
The problem is that you offer nothing but an argument from authority.
have seen evidence yourself?
Well, of course I have. The computer I use to type these words relies on QM to work, the dual wave-particle nature of light is quite apparent in digital photography, NMR machines in hospitals do work, etc.
In any case, let me express my position clearly.
I do not believe it possible to prove we're NOT living in a simulation.
The question of whether it's possible to prove we ARE living in a simulation is complex. Part of the complexity involves the meaning of "simulation" in this context. For example, if we assume that there is an omnipotent Creator of the universe, can we call this universe "a simulation"? It might be possible to test whether we are in a specific kind of simulation (see the paper you linked to), but I don't think it's possible to test whether we are in some unspecified, unknown simulation.
Replies from: ike↑ comment by ike · 2015-01-06T21:29:09.803Z · LW(p) · GW(p)
My position is that it is possible for us to get both Bayesian evidence for and against simulation. I was not talking at all about "proof" in the sense you seem to use it.
If it's possible to get evidence for a "specific kind of simulation", then lacking that evidence is weak evidence against simulation. If we test many different possible simulation hypotheses and don't find anything, that's slightly stronger evidence. It's inconsistent to say that we can't prove ~simulation but can prove simulation.
The computer I use to type this words relies on QM to work, the dual wave-particle nature of light is quite apparent in digital photography, NMR machines in hospitals do work, etc.
I'm curious if you understand QM well enough to say that computers wouldn't work without it. Is there no possible design for computers in classical physics that we would recognize as computer? Couldn't QM be false and all these things work differently, and you'd have no way of knowing? Whatever you say, I doubt there are no areas in your life where you just rely on authority without understanding the subject. If not physics, then medicine, or something else.
Replies from: Lumifer↑ comment by Lumifer · 2015-01-06T21:41:11.634Z · LW(p) · GW(p)
Is there no possible design for computers in classical physics that we would recognize as computer?
Of course there is -- from Babbage to the mechanical calculators of the mid-20th century. But I didn't mean computers in general -- I meant the specific computer that I'm typing these words on, the computer that relies on semiconductor microchips.
Replies from: ike↑ comment by g_pepper · 2015-01-06T16:49:34.469Z · LW(p) · GW(p)
Although I can't think of any way that I personally would behave differently based on a belief that I exist in a simulation, Nick Bostrom suggests a pretty interesting reason why an AI might, in chapter 9 of Superintelligence (in Box 8). Specifically, an AI that assigns a non-zero probability to the belief that it might exist in a simulated universe might choose not to "escape from the box" out of a concern that whoever is running the simulation might shut down the simulation if an AI within the simulation escapes from the box or otherwise exhibits undesirable behavior. He suggests that the threat of a possibly non-existent simulator could be effectively exploited to keep an AI "inside of the box".
Replies from: Alsadius↑ comment by Alsadius · 2015-01-06T19:16:51.674Z · LW(p) · GW(p)
Unless there's a flow of information from outside the simulation to inside of it, you have zero evidence of what would cause the simulators to shut down the machine. Trying to guess is futile.
Replies from: g_pepper↑ comment by g_pepper · 2015-01-06T19:39:56.551Z · LW(p) · GW(p)
Bostrom suggested that a simulation containing an AI that is expanding throughout (and beyond) the galaxy and utilizing resources at a galactic level would be more expensive from a computational standpoint than a simulation that did not contain such an AI. Presumably this would be the case because a simulator would take computational shortcuts and simulate regions of the universe that are not being observed at a much coarser granularity than those parts that are being observed. So, the AI might reason that the simulation in which it lives would grow too expensive computationally for the simulator to continue to run. And, since having the simulation shut down would presumably interfere with the AI achieving its goals, the AI would seek to avoid that possibility.
Replies from: Alsadius↑ comment by Alsadius · 2015-01-06T21:23:01.896Z · LW(p) · GW(p)
Observed by what? For this to make sense there'd need to be no life anywhere in the universe but here that could be relevant to the simulation.
Replies from: g_pepper↑ comment by g_pepper · 2015-01-06T21:59:03.685Z · LW(p) · GW(p)
Actually, all it requires is that the universe is somewhat sparsely populated - there is no requirement that there must be no life anywhere but here.
Furthermore, for all we know, maybe there is no life in the universe anywhere but here.
↑ comment by Dagon · 2015-01-06T03:56:46.681Z · LW(p) · GW(p)
There's no reason to limit simulation to one level, nor to privilege "real" as any special thing. All reality is emergent from a set of (highly complex, or maybe not) rules. This is true of n=0 ("reality", or "the natural simulation"), as well as every level n+1 (where a level-n entity simulates something).
It's turtles all the way up.
Put another way, the simulation parent entities wonder if they're being simulated, so it's exactly proper for the simulation target entities to wonder, for exactly the same reasons. I suspect that in every universe, thinking processes that can consider simulation will consider that they might be simulated.
I don't know if they'll reach the conclusion that it doesn't matter - finding the boundaries of the simulation is exactly identical to finding the boundaries of a "natural" universe, and we're gonna try to do so.
Replies from: ike↑ comment by mako yass (MakoYass) · 2015-01-12T00:59:04.779Z · LW(p) · GW(p)
Any being that does not at some point consider the possibility that it is inside a simulation, is not worth simulating.
comment by SilentCal · 2015-01-05T19:20:15.596Z · LW(p) · GW(p)
What are the marginal effects of donating with an employer gift match? The one I have has a per-employee cap and no overall cap, but presumably the utilization rate negatively influences the cap. How much credit should I be giving myself for the gifts I cause my employer to give?
If the notion of 'credit' is too poorly defined, suppose I were deciding between job A which has a gift match and job B which has a higher salary, such that (my personal gift if I take job A) < (my total gift if I take job B) < (my total gift including match if I take job A).
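(A hypothetical numeric instance of that inequality, with all figures invented:)

    # Invented numbers purely to illustrate the job A vs. job B comparison.
    my_gift_a = 5_000       # personal gift affordable at job A (lower salary)
    match_rate = 1.0        # job A offers a 1:1 employer match
    my_gift_b = 8_000       # larger personal gift affordable at job B

    total_a = my_gift_a * (1 + match_rate)  # my gift plus employer match
    total_b = my_gift_b                     # no match at job B

    # (personal gift, job A) < (total gift, job B) < (total incl. match, job A)
    assert my_gift_a < total_b < total_a    # 5,000 < 8,000 < 10,000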
This would depend on the company's response to match utilization, how effective the average donor at the company is (since my effects on the match cap are distributed across all such donors), and the marginal effects of the company having money.
One note is that the company is probably much more reluctant to lower the cap than to reduce a planned increase, so if I expect the cap to increase donating probably cuts into that, but if I expect it to stay the same donating may not have any effect on the cap.
Anything else to think about?
Replies from: Ander, AspiringRationalist↑ comment by Ander · 2015-01-06T00:25:21.246Z · LW(p) · GW(p)
If your employer responds to the number of employees who are giving by modifying the cap, then that means that whether or not you give will not change how much money your employer gives in the long term.
However, even if that is true, you still are choosing where your employer donates your portion of the gift match. Therefore, if you believe that most of the money given to charity is being used at only moderate effectiveness, but that you could choose a good GiveWell charity to donate to and achieve much greater results, then the impact of the employee match is still significant.
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2015-01-06T01:09:51.428Z · LW(p) · GW(p)
Given that the most effective charities are at least an order of magnitude (and probably much more) more effective than average charities, any decrease you cause to other people's matches is probably insignificant compared to the match you get.
comment by Gvaerg · 2015-01-09T22:18:05.843Z · LW(p) · GW(p)
Is there any Egan or Vinge fanfic except EY's crossover Finale of the...?
Replies from: Nornagest↑ comment by Nornagest · 2015-01-09T23:27:49.448Z · LW(p) · GW(p)
Don't know about Egan or Vinge specifically, but fanfic of literary SF not targeted at the YA market is very rare. I'd speculate this is partly due to demographics and partly due to the fact that a lot of trad SF's appeal lies in conceptual stuff that's generally more or less fully explored in its work of origin.
comment by [deleted] · 2015-01-05T14:35:59.977Z · LW(p) · GW(p)
I can code, as in I can do pretty much any calculation I want and have little problem with school assignments. However, I don't know how to get from here to making applications that don't look like they've been drawn in MS Paint. Does anyone know a good resource on getting from "I can write code that'll run on the command line" to "I can make nice-looking stuff my grandmother could use"?
Replies from: None, Emile, sixes_and_sevens, shminux, CBHacking↑ comment by [deleted] · 2015-01-05T18:07:28.200Z · LW(p) · GW(p)
Buy a good textbook on visual design principles. I don't have a recommendation in this area, so you'll have to do some homework to find the right one. Start looking at the work of professional designers in the area you're interested in. I use a blogroll for this, but you can pick your own path. The design section of my RSS currently consists of abduzeedo, design milk, and grain edit. For mock-up tools, I like Inkscape a lot. It's free, and mock-ups are mostly about the text, shape, and pen tools anyways. In the area of raster graphics, I haven't seen any good alternatives to Photoshop; not that I've been looking. You can also look into some user experience stuff, but that strikes me as overrated. After that, which programming tools you pick up will depend on your needs.
↑ comment by Emile · 2015-01-05T15:40:32.690Z · LW(p) · GW(p)
I've used the Bootstrap framework to make web apps that don't look horribly ugly. Learning all the things you'd need to make apps that use it (so a bit of JS, CSS, HTML, etc., as sixes_and_sevens says) would probably be a good start. (It's probably easier than trying to write good-looking CSS from scratch, which is more of a pain.)
Replies from: sixes_and_sevens↑ comment by sixes_and_sevens · 2015-01-05T16:08:24.577Z · LW(p) · GW(p)
Bootstrap is particularly good if you're a design doofus and have minimal knowledge of web standards, accessibility, fluid layouts, etc.
I'm sure ancestor commenters know this, but it's worth mentioning that design is a distinct discipline which doesn't come for free when you learn to code.
↑ comment by sixes_and_sevens · 2015-01-05T14:43:34.532Z · LW(p) · GW(p)
This might be a disappointing answer, but HTML, CSS and JavaScript are extremely valuable skills if you want to throw together accessible GUI applications.
↑ comment by Shmi (shminux) · 2015-01-05T15:31:48.382Z · LW(p) · GW(p)
You can try learning how to create mobile apps, seems like a very useful skill. For example, Android programming: https://developer.android.com/training/index.html
↑ comment by CBHacking · 2015-01-07T13:22:36.388Z · LW(p) · GW(p)
Depending on how much you want to invest in aesthetics vs. simply producing a user-friendly GUI, Visual Studio takes almost all of the tricky work out of producing basic GUIs (whether you're working in Visual Basic or C++). It's an easy go-to solution, especially since it's now free for individuals, even for commercial use (it still requires Windows, though; I don't have a lot of experience writing GUI apps for other desktop OSes). The results will likely look somewhere between utilitarian and just plain ugly until/unless you learn some UI design aesthetics and put in the effort to apply them, but even there, tools such as Blend exist to help out (especially on mobile, though some of that can be applied to PC software too).
comment by [deleted] · 2015-01-11T16:42:24.117Z · LW(p) · GW(p)
Runaway Rationalism and how to escape it by Nydwracu
One of the better rationalist short posts of last year, unfortunately mostly read by non-rationalists so far. Many important concepts are tightly packed and neatly explained, if the reader thinks closely about them.
A favorite quote:
It is noteworthy that the scientific method, the most successful method for discovering reality, only arose once, a few hundred years ago, in an environment where the goddess of war and wisdom demanded it. It is also noteworthy that the goddess of war is the goddess of wisdom: without an incentive-structure that demands accuracy, stump-orators will peddle sham-accuracy, pure speech detached from action.
comment by Dorikka · 2015-01-11T01:54:22.678Z · LW(p) · GW(p)
At one point there was a significant amount of discussion regarding Modafinil - this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or experience).
comment by Jiro · 2015-01-10T01:24:25.843Z · LW(p) · GW(p)
The comic book Magnus Robot Fighter #10, published this month, mentions Roko's Basilisk by name and has an A.I. villain who named himself after Roko's Basilisk. The Basilisk is described as "the proposition that an all-powerful A.I. may retroactively punish those humans who did not actively help its creation... thus inspiring roboticists, subconsciously or unconsciously, to invent that A.I. as a matter of self-preservation". Which is not quite correct because of the "subconsciously", and doesn't mention simulation (although Magnus grew up in a simulation), but otherwise is roughly the right idea.
comment by Capla · 2015-01-08T18:57:03.883Z · LW(p) · GW(p)
I'm not sure where an appropriate place to ask this is. Tell me if there's a place where this goes.
I'm coming to the Bay Area for the CFAR workshop from the 16th to the 19th. I have a way to get back home, but I think I might want to stay a few extra days in San Francisco. That screws up my travel arrangements, so I'm seeing if there's a workaround. Are there any aspiring rationalists (or rationalist sympathizers) in northern California who might want to drive with me down to Phoenix (AZ) between the 23rd and the 25th, more or less for the hell of it? I'm unusually engaging company (according to friends and acquaintances).
Other than that, anyone have any clever ideas about how to get from the Bay Area to Phoenix? I want to arrive on Saturday the 25th or Sunday the 26th.
comment by [deleted] · 2015-01-08T17:11:33.116Z · LW(p) · GW(p)
Is there a better way to search LW other than Google?
comment by Gram_Stone · 2015-01-09T16:42:48.336Z · LW(p) · GW(p)
Below, gjm was being a self-acknowledged pedant and I didn't like it at first and I pedanted right back at him and then I realized I enjoyed it and that pedantry is a terminal human value and that I wouldn't have it any other way and that I didn't really care that he was being a pedant anymore and that it was actually a weird accidental celebration of our humanity and that I probably won't care about future pedantry as long as it isn't harmful. This is an auspicious day.
comment by Punoxysm · 2015-01-08T04:53:59.911Z · LW(p) · GW(p)
I think a good principle for critical people - that is, people who put a lot of mental effort into criticism - to practice is that of even-handedness. This is the flip-side of steelmanning, and probably more natural to most. Instead of trying to see the good in ideas or people or systems that frankly don't have much good in them, seek to criticize the alternatives that you haven't put under your critical gaze.
Quotes like [the slight misquote] "Democracy is the worst form of government except for all the others that have been tried from time to time" epitomize this, and indeed politics is a great domain to apply this to. If you find some set of ideas wretched, it's probably easier to see the wretchedness in your own cherished ones than to find a positive view of the ones you despise.
It's a good way to channel that cutting, critical impulse many of us have into humility.
comment by polymathwannabe · 2015-01-05T14:22:42.774Z · LW(p) · GW(p)
Somebody has thought of selling plush basilisks already.
Replies from: g_pepper, NancyLebovitz, MathiasZaman↑ comment by g_pepper · 2015-01-05T15:23:58.768Z · LW(p) · GW(p)
I don't think that the basilisk you linked to is the specific basilisk of LW notoriety; basilisks were creatures of legend all the way back to Pliny the Elder's time; they were mentioned in Pliny's Natural History, written around 79 AD.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2015-01-05T18:09:34.546Z · LW(p) · GW(p)
Who knew acausality could reach that far back?
↑ comment by NancyLebovitz · 2015-01-05T22:37:11.818Z · LW(p) · GW(p)
In case anyone is wondering what needle felting is....
I didn't have a visualization, but when I saw that, I knew the basilisk can't possibly be that cute. I've settled on shifting shapes in unpleasant colors which sometimes coalesce into something that vaguely resembles Tchernobog from Night on Bald Mountain.
↑ comment by MathiasZaman · 2015-01-05T20:00:26.102Z · LW(p) · GW(p)
As pointed out: Not the same basilisk. I wonder how people visualize Roko's variant.
Replies from: IlyaShpitser↑ comment by IlyaShpitser · 2015-01-05T21:33:40.342Z · LW(p) · GW(p)
Klein bottle uroboros.
Replies from: DanielLC
comment by sediment · 2015-01-11T22:35:30.322Z · LW(p) · GW(p)
I'm vegetarian and currently ordering some dietary supplements to help, erm, supplement any possible deficits in my diet. For now, I'm getting B12, iron, and creatine. Two questions:
- Are there any important ones that I've missed? (Other things I've heard mentioned but of whose importance and effectiveness I'm not sure: zinc, taurine, carnitine, carnosine.)
- Of the ones I've mentioned, how much should I be taking? In particular, all the information I could find on creatine was for bodybuilders trying to develop muscle mass. I did manage to find that the average daily turnover/usage of creatine for an adult male (which I happen to be) is ~2 grams/day - is this how much I should be taking?
(Edit: reposted this in the new open thread; please respond there, not here!)
comment by Capla · 2015-01-10T23:28:56.824Z · LW(p) · GW(p)
I know a very intelligent, philosophically sophisticated (those are probably part of the problem) creationist. A full-blown, earth-is-6000-years-old creationist.
If he is willing to read some books with me, which ones should I suggest? Something that lays out the evidence in a way that the layman can understand, and conveys the sheer weight of that evidence.
Replies from: iarwain1↑ comment by iarwain1 · 2015-01-11T01:08:24.575Z · LW(p) · GW(p)
This isn't a direct response to your request as it's not a book suggestion, but ...
I've had plenty of conversations with very thoughtful, intelligent creationists. Virtually all of my friends and family are creationists in some sense or another. So far I've never discussed it with anyone (at least anyone who's thoughtful and intelligent) who has disagreed with the following argument:
1) The world clearly looks old. For example:
- Light from distant stars takes far more than several thousand years to reach us.
- There are many more than several thousand layers of annual ice in Antarctica.
- The Colorado River is wearing away at the bottom of the Grand Canyon as we speak. The rest of the Grand Canyon is exactly the sort of thing we'd expect if we extrapolate backwards a few million years.
- In general, there are innumerable geological features all over the world that are exactly what we'd expect if we extrapolate backwards millions or billions of years based on processes that are happening right now. (Any good geology textbook should demonstrate this pretty clearly. As I tell people: "Read a basic geology textbook, go to a national park and read the signs explaining the local geology, and then come back and tell me it doesn't look old.")
- [You can also mention some of the principles of geological layering - e.g., that we consistently find the same types of fossils in the same types of layers. But I've found that this sort of thing is a little too complicated to explain quickly.]
2) One could perhaps respond with something like, "maybe God used some alternative unknown form of physics in the six days of creation" or, "maybe Noah's Flood caused geology to go haywire in unknown ways". However, the key point is that to say that it doesn't even look old is simply false. Saying that the Flood or some alternative physics caused it means that e.g. the bottom 30-40 feet of the Grand Canyon (the part that's been eroded in the past 5000-6000 years) was caused by normal everyday processes, but anything above that point - which is indistinguishable from the bottom 30-40 feet - was caused by the Flood / alternative physics which just so happened to work out in such a way that it looks exactly as if it was caused by normal erosion.
3) If so, then we have only three choices: (a) The world really is that old; (b) God created the world 6000 years ago but (for whatever reason) he intentionally made it look old; or (c) God created the world 6000 years ago and for various reasons it accidentally ended up looking old. Option (c) is unacceptable according to every religious conception of God that I've ever heard, which leaves (a) or (b). If someone is willing to accept (b) theologically then I can't really prove to them otherwise, but if for theological or philosophical reasons they're unwilling to accept (b) then that leaves only (a), that the world really is that old.
4) Moreover, if you say that it was just created looking old then you'll need to say that God created it with a real whopper of a backstory. For example, the geologic record preserves detailed evidence of fights between dinosaurs, as well as dinosaurs with the food from their last meal still in their stomachs. Astronomers regularly observe supernovae, which are records of explosions of stars that, according to young-earthers, never existed. [Some of these examples might be moot depending on which version of young earth creationism the other person subscribes to.]
Again, the key point here is that I don't try to prove to people that the world really is that old, just that they must agree that it certainly looks old, in which case either it is that old or God deliberately created it in such a way that it looks that old.
[It could be that some versions of flood geology might have responses to some of the above, but I personally don't know anybody who goes for flood geology so I've never had to respond to those types of arguments.]
comment by Gram_Stone · 2015-01-09T11:07:49.967Z · LW(p) · GW(p)
I suspect that I'm gonna keep sharing quotes as I read Superintelligence over the next few weeks, in large part because Professor Bostrom has a better sense of humor than I thought he would when I saw him on YouTube.
I've known for a long time that intelligences with faster cognitive processes would experience a sort of time dilation, but I've never seen it described in such an evocative and amusing way:
To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshly friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s gray matter and from thence out into his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.
Replies from: gjm
↑ comment by gjm · 2015-01-09T12:56:53.341Z · LW(p) · GW(p)
If you drop a teacup from a height of 2m then s = 1/2 at^2 says t ~= 0.6 seconds which at 10000x becomes 6000 seconds or 1h40m. If the figure for Nick Bostrom is "several hours" then he must be, I dunno, attending tea parties on stilts or something.
(Of course this is mere pedantry. But it seems like the sort of thing it should have been easy to get right.)
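For anyone who wants to redo the arithmetic, a quick Python check, assuming a 2 m drop and neglecting air resistance:
import math
g = 9.8                       # gravitational acceleration, m/s^2
t = math.sqrt(2 * 2.0 / g)    # fall time from 2 m: about 0.64 s
print(t * 10000 / 3600)       # at a 10,000x speedup: about 1.8 subjective hours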
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-01-09T13:18:07.343Z · LW(p) · GW(p)
You reminded me of something he wrote in the acknowledgements:
The membrane that has surrounded the writing process has been fairly permeable. Many concepts and ideas generated while working on the book have been allowed to seep out and have become part of a wider conversation; and, of course, numerous insights originating from the outside while the book was underway have been incorporated into the text. I have tried to be somewhat diligent with the citation apparatus, but the influences are too many to fully document.
Citations are one of the most important aspects of any non-fiction book, and even in this regard he acknowledges that he could not be exhaustive. Taking the time to confirm the physical calculations implicit in a descriptive passage would almost certainly have been suboptimal. All this to say: I am happy that Professor Bostrom is the one writing the Superintelligences of the world, and not the pedants.
Replies from: gjm↑ comment by gjm · 2015-01-09T13:56:54.271Z · LW(p) · GW(p)
He could have made the descriptive passage correct simply by not pretending to be so quantitative. "Suppose your mind ran tens of thousands of times faster than normal."
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-01-09T14:54:03.545Z · LW(p) · GW(p)
Fussing over extraneous yet harmless quantitativeness would have been similarly suboptimal. The pseudo-quantitativeness also has a positive effect upon the tone of the passage, for everyone but the one in a million who notices its extremely technical inaccuracy. This is not math, but writing.
comment by Omid · 2015-01-07T17:31:37.074Z · LW(p) · GW(p)
Is there a Chrome extension or something that will adjust displayed prices of online merchants to take into account rewards benefits? For example, if my credit card has 1% cashback, the extension could reduce the displayed price to be 1% cheaper.
comment by Omid · 2015-01-05T16:47:59.348Z · LW(p) · GW(p)
So I signed up for a password manager, and even got a complex password. But how do I remember the password? It's a random combination of upper and lower case letters plus numbers. I suppose I could use spaced repetition software to memorize it, but wouldn't that be insecure?
Replies from: robot-dreams, polymathwannabe, Douglas_Knight, Strilanc, DanArmak, Dahlen, Alsadius, ilzolende, polymathwannabe↑ comment by robot-dreams · 2015-01-05T18:37:06.991Z · LW(p) · GW(p)
I learned a few interesting memory tricks from the movie Memento. One thing you can try is to tattoo important information on yourself, so that you don't forget it.
I can think of a few security caveats for sensitive information though:
- It's probably better if you choose a location that's not easily visible (e.g. chest, part of your arm that's covered by a shirt), though you should probably choose a location that's still somewhat accessible (i.e. not your lower back)
- If you absolutely have to use a more visible location, like your forehead, make sure you get the sensitive information tattooed BACKWARDS, so that only you can read it (and only when you're looking in a mirror)
On a more serious note, I find it much easier to remember random alphanumeric characters "kinesthetically" (i.e. by developing muscle memory for the act of actually typing the password), as suggested by polymathwannabe. The only downside to this approach is that it's extremely difficult for me to enter such a password on a cell phone.
Replies from: None↑ comment by [deleted] · 2015-01-06T13:15:50.771Z · LW(p) · GW(p)
I endorse the serious note - I have a key layout I use for throwaway passwords based on taking an initial character from the website name, which is quick and easy to type on keyboards (but admittedly hard on an iPhone). E.g. I went back to confused.com (an insurance comparison site) recently after a year and got in with a couple of guesses.
Emphasise throwaway passwords, though - I use the XKCD method for anything that gives control over other stuff (Gmail especially), but it takes some cognitive load off the unimportant stuff while still protecting against password leaks.
↑ comment by polymathwannabe · 2015-01-05T17:20:03.061Z · LW(p) · GW(p)
Despite my other comment, there are cases when we simply can't choose. My university gave me an alphanumeric sequence that I am able to remember because I'm a trained typist. So I didn't memorize the letters and numbers; I memorized the finger movements.
↑ comment by Douglas_Knight · 2015-01-05T21:10:38.388Z · LW(p) · GW(p)
Just write it down. Eventually, you'll memorize it. It will be faster if you challenge yourself each time: see how many characters you can type before having to look.
It's important to keep in mind threat models. The biggest threat is that someone attacks one website you use and uses that password to take control of your account on another website. The password manager solves this problem. (It also gives you strong passwords, which is overkill.) People physically close to you who might steal the piece of paper with the password aren't much of a threat, and even if they were, they probably wouldn't figure out the meaning of it. But you can destroy it after memorization.
↑ comment by DanArmak · 2015-01-05T21:11:20.903Z · LW(p) · GW(p)
I use a passphrase, which has higher entropy than a short password and is easier to remember at the same time.
Take a dictionary of 50k words and choose a sequence of 6 words at random. (Use software for this; opening a printed dictionary "at random" won't produce really random results). This provides log2(50000^6) = 94 bits of entropy. This is a similar amount to choosing 15 characters from an 80-character set (lowercase and uppercase letters, numbers, and 18 other characters) which would produce log2(80^15) = 95 bits.
It's much easier to remember 6 random words than 15 random characters. You can generate some passphrases here to estimate how difficult they might be to remember. (Of course you wouldn't generate your real passphrase using an online tool :-)
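For the wary, a minimal offline sketch in Python; it assumes a word list at /usr/share/dict/words (any list of ~50k words will do):
import math, random
rng = random.SystemRandom()   # draws from the OS entropy pool rather than a seeded PRNG
words = open('/usr/share/dict/words').read().split()
print(' '.join(rng.choice(words) for _ in range(6)))   # the passphrase
print(math.log2(len(words)) * 6)                       # bits of entropy: ~94 for 50k words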
Replies from: fubarobfusco, polymathwannabe↑ comment by fubarobfusco · 2015-01-05T22:42:18.970Z · LW(p) · GW(p)
If you often need to generate XKCD-compliant passwords on Linux machines, you may find this command line handy:
egrep -x '[a-z]{3,9}' /usr/share/dict/words | shuf -n4 | xargs
(It will work on a Mac if you install coreutils and change shuf to gshuf.)
Replies from: DanArmak↑ comment by DanArmak · 2015-01-06T09:20:31.828Z · LW(p) · GW(p)
On my Ubuntu install, /usr/share/dict/words is symlinked to /usr/share/dict/american-english, which has about 100k words. log2(100000^6)=100, which surprised me by being not that much bigger than log2(50000^6) = 94. Bad math intuition on my part.
↑ comment by polymathwannabe · 2015-01-05T21:18:54.221Z · LW(p) · GW(p)
How is a computer more random than flipping pages?
Replies from: faul_sname, ike, DanArmak↑ comment by faul_sname · 2015-01-06T04:32:31.693Z · LW(p) · GW(p)
The word "set" in my dictionary has a definition spanning an entire page. Most other pages have between 20 and 50 words on them. This implies that the word "set" will be chosen about 1 in 1000 times, giving only 10 bits of entropy, whereas choosing completely at random, each word would have about a 1 in 50,000 chance of being chosen, giving about 15 bits of entropy.
In practice, picking 5 random pages of a 1000-page dictionary, then picking your favorite word on each page, would still give 50 bits of entropy, which beats the correcthorsebatterystaple standard, and probably gives a more memorable passphrase.
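A quick check of those figures, under the stated assumptions of a 1000-page dictionary with 50,000 words:
import math
print(math.log2(1000) * 5)    # ~49.8 bits from choosing 5 random pages
print(math.log2(50000) * 5)   # ~78 bits if the 5 words themselves were chosen at random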
↑ comment by ike · 2015-01-06T03:17:53.589Z · LW(p) · GW(p)
Take a 100-page book, get 100 random numbers from it, then analyze the numbers.
First of all, how do you decide right page or left? Likely by generating randomness in your head, which may not be so good. The first few pages and the last few are unlikely to be picked, and there are probably other biases too; for one, words with longer definitions are more likely to be chosen, depending on the exact method.
I don't think using a computer is a very secure solution once you're going to that level anyway. Try using dice.
↑ comment by DanArmak · 2015-01-06T09:23:18.470Z · LW(p) · GW(p)
It's well known in the security industry / compsci that humans are very bad at generating, and recognizing, random numbers. I can't recall if there's a name for this bias; there's the clustering illusion, but that's about recognizing random numbers, not trying to generate them.
This paper tries to analyze why this is hard for humans to do.
↑ comment by Dahlen · 2015-01-07T04:50:16.420Z · LW(p) · GW(p)
You'll get used to it. All my passwords are long (~20-character) strings of random alphanumeric characters. Initially, when I started using this system, I had doubts that I would be able to memorize them all, but after a while it got easy.
If you're really in need of some outside help, write it somewhere in rot13; since it's random, nobody can guess through the pattern of the letters that the rot13 version is not the actual password; a random string of letters and its rot13'd version are much the same for all practical purposes. And if you want some extra security and you're not worried about getting tangled in all your weird personalized decoding rules, write it backwards; write every number as ten minus that number; make all capitals lowercase letters and vice versa; add known short strings of characters at the beginning and/or at the end, etc. But I really don't recommend going down that route.
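A quick illustration in Python, with a made-up password (note that rot13 only touches letters, so digits pass through unchanged):
import codecs
stored = codecs.encode('mXq7Rw2ZpLk9', 'rot13')   # hypothetical password, as you'd write it down
print(stored)                                     # zKd7Ej2McYx9 (digits untouched)
print(codecs.encode(stored, 'rot13'))             # applying rot13 again recovers the original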
↑ comment by polymathwannabe · 2015-01-05T16:55:04.592Z · LW(p) · GW(p)
Alphanumeric passwords are overrated.
Replies from: Nornagest, Izeinwinter, DanielLC↑ comment by Nornagest · 2015-01-05T18:10:08.964Z · LW(p) · GW(p)
That comic makes a good argument against the kinds of alphanumeric passwords most people naively come up with to match password policies, but the randomized ones that a password manager will give you are far stronger. Assuming 6 bits of entropy per character (equivalent to a choice of 64 characters) and a good source of randomness, a random 8-character password is stronger than "correct horse battery staple" (48 bits of entropy vs. ~44), and 10 characters (for 60 bits of entropy) blows it out of the water.
Of course, since you typically won't be able to remember eight base64 characters for each of the fifty sites you need a password for, that makes the security of the entire system depend on that of the password manager or wherever else you're storing your passwords. A mix of systems might work best in practice, and I'd recommend using two-factor authentication where it's offered on anything you really need secured.
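For reference, generating such a password takes a few lines of Python; the 64-symbol alphabet below is one arbitrary choice among many:
import random, string
rng = random.SystemRandom()                              # OS-level randomness, suitable for secrets
alphabet = string.ascii_letters + string.digits + '+/'   # 64 symbols, so 6 bits per character
print(''.join(rng.choice(alphabet) for _ in range(10)))  # 10 characters: ~60 bits of entropy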
↑ comment by Izeinwinter · 2015-01-05T17:17:36.229Z · LW(p) · GW(p)
That comic got me to change all my passwords. I now have a stack of virtual movie posters in my head using that principle. Nothing written down anywhere, I haven't forgotten one yet, and it's far more secure. Works fantastically well for any password function where you are permitted long passwords. I start swearing at places that impose limits, now.
Replies from: DanielLC