Open Thread, April 1-15, 2013
post by Vaniver · 2013-04-01T15:00:03.560Z · LW · GW · Legacy · 261 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
comment by Vaniver · 2013-04-01T15:06:08.269Z · LW(p) · GW(p)
Remember, today is "Base Rate Neglect Day," also known as April Fool's.
Replies from: AlexSchell, Multiheaded
↑ comment by AlexSchell · 2013-04-01T21:35:05.489Z · LW(p) · GW(p)
I propose "Confusion Awareness Day".
↑ comment by Multiheaded · 2013-04-01T17:43:52.200Z · LW(p) · GW(p)
Oh, Wikipedia gave me a feel today
Did you know... ...that Walter Baxter wrote about the gay Batman–Kent relationship?
(snicker)
(click)
The officer and the batman are captured after a Japanese attack. Kent initially divulges nothing more than his name, rank and serial number, but after being shown how his fellow soldiers had been tortured, and he is threatened with the same, he reveals to his Japanese interrogator all of the information he knows. After an air raid, Anson is able to escape, also saving Kent. Their relationship resumes, and Kent kills a soldier who has figured this out and tried to blackmail him. After the company has arrived in India, Kent attempts suicide, but when unsuccessful, finds himself happy to be alive, and to be with Anson.
...
LATER LIFE: The trials were disheartening for Baxter, and he would not write another book. Christopher Isherwood wrote in 1961 that Baxter "... has become a rather tragic self-pitying drunken figure with a philosophy of failure."
Poignant.
comment by Morendil · 2013-04-01T15:21:58.419Z · LW(p) · GW(p)
Received word a few days ago that (unofficially, pending several unresolved questions) my GJP performance is on track to make me eligible for "super forecaster" status (last year these were picked from the top 2%).
ETA, May 9th: received the official invitation.
Replies from: RolfAndreassen, dvasya
↑ comment by RolfAndreassen · 2013-04-01T16:51:07.330Z · LW(p) · GW(p)
I'm glad to report that I am one of those who make this achievement possible by occupying the other 98%. Indeed I believe I am supporting the high ranking of a good 50% of the forecasters.
More seriously, congratulations. :)
↑ comment by dvasya · 2013-05-09T17:27:07.689Z · LW(p) · GW(p)
Congratulations! I also received it (thanks not least to your posts). I wonder how many other LWers participate, and who else (if anybody) got their invitations.
Replies from: ITakeBets, Morendil, gwern
↑ comment by ITakeBets · 2013-05-09T17:45:08.833Z · LW(p) · GW(p)
I participate, and during the first season was invited to be a super-forecaster in the second. It is kind of a lot of work and I have been very busy, so I quit doing anything about it pretty early on, but I have mysteriously been invited to participate again in the third season.
↑ comment by gwern · 2013-05-09T17:41:39.329Z · LW(p) · GW(p)
I participate (http://www.gwern.net/Prediction%20markets#iarpa-the-good-judgment-project) and haven't been invited. (While I stopped trying in season 2, my season 1 scores were merely great & not stellar enough to make it plausible that I could have made it.)
comment by gwern · 2013-04-03T20:34:28.553Z · LW(p) · GW(p)
For kicks, and reminded by all my recent searching to dig up long-forgotten launch and shutdown dates for Google properties, I've compiled a partial list of times I've posted searches & results on LW:
- http://lesswrong.com/lw/h4e/differential_reproduction_for_men_and_women/8ovg
- http://lesswrong.com/lw/h2h/i_hate_preparing_food_my_solution/8o19
- http://lesswrong.com/lw/h1t/link_the_power_of_fiction_for_moral_instruction/8nkc
- http://lesswrong.com/lw/gnk/link_scott_and_scurvy_a_reminder_of_the_messiness/8gfc
- http://lesswrong.com/lw/g75/psa_please_list_your_references_dont_just_link/87ds
- http://lesswrong.com/lw/f8x/rationality_quotes_november_2012/7tdc
- http://lesswrong.com/lw/3dq/medieval_ballistics_and_experiment/7i2k
- http://lesswrong.com/lw/e26/who_wants_to_start_an_important_startup/7adg
- http://lesswrong.com/lw/if/your_strength_as_a_rationalist/768t
- http://lesswrong.com/lw/dx7/link_holistic_learning_ebook/74yj
- http://lesswrong.com/lw/c3g/seq_rerun_quantum_nonrealism/6h7e
- http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/6g25?context=1#6g25
- http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/68ps
- http://lesswrong.com/lw/5qm/living_forever_is_hard_or_the_gompertz_curve/63ak
- http://lesswrong.com/lw/7rh/cognitive_style_tends_to_predict_religious/6sfx?context=1#6sfx
- http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/6xcu
- http://lesswrong.com/lw/aw6/global_warming_is_a_better_test_of_irrationality/61me?context=1#61me
- http://lesswrong.com/lw/td/magical_categories/52uq?context=1#52uq
- http://lesswrong.com/lw/ub/competent_elites/nkg?context=1#nkg
- http://lesswrong.com/lw/7gy/case_study_reading_edges_financial_filings/
- http://lesswrong.com/lw/m3/politics_and_awful_art/3moz?context=1#3moz
- http://lesswrong.com/lw/3am/honours_dissertation/35lk?context=1#35lk
- http://lesswrong.com/lw/5r/crowley_on_religious_experience/34wj?context=1#34wj
- http://lesswrong.com/lw/2t0/rationality_quotes_october_2010/2qxe?context=1#2qxe
- http://lesswrong.com/lw/2kl/open_thread_august_2010_part_2/2fsb
- http://lesswrong.com/lw/2ab/harry_potter_and_the_methods_of_rationality/2cmp
- http://lesswrong.com/lw/1lx/reference_class_of_the_unclassreferenceable/1fg3
- http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/1dfm
- http://lesswrong.com/lw/hbp/using_evolution_for_marriage_or_sex/8xn2
- http://lesswrong.com/lw/hb6/links_passing_through_apiviglinkcom/8v0g
Can't help but get the impression that even people here aren't very good at Googling. Maybe they should be taking Google's little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.
Replies from: Douglas_Knight, gwern, lukeprog, gwern
↑ comment by Douglas_Knight · 2013-04-04T02:08:52.897Z · LW(p) · GW(p)
Can't help but get the impression that even people here aren't very good at Googling. Maybe they should be taking Google's little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.
It appears to me that in half of these examples people hadn't tried to google at all. It doesn't seem particularly likely to me that the class would develop such a habit. Not that I have a better idea.
Replies from: gwern
↑ comment by gwern · 2013-04-04T03:43:27.443Z · LW(p) · GW(p)
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
↑ comment by gwern · 2013-12-16T04:26:16.642Z · LW(p) · GW(p)
Since April 2013:
- http://lesswrong.com/lw/hbp/using_evolution_for_marriage_or_sex/8xn2
- http://lesswrong.com/lw/hb6/links_passing_through_apiviglinkcom/8v0g
- http://lesswrong.com/lw/4n8/rationality_quotes_march_2011/8uoo
- http://lesswrong.com/lw/h7r/open_thread_april_1530_2013/8tca
- http://lesswrong.com/lw/jb9/some_notes_on_existential_risk_from_nuclear_war/a6k2
- http://lesswrong.com/lw/j24/to_like_or_not_to_like/a1pj
- http://lesswrong.com/lw/ing/open_thread_september_1622_2013/9ren
- http://lesswrong.com/lw/ikg/yet_more_stupid_questions/9qgc
- http://lesswrong.com/lw/i7t/rationality_quotes_august_2013/9ocd
- http://lesswrong.com/lw/ic0/where_ive_changed_my_mind_on_my_approach_to/9m2o
- http://lesswrong.com/lw/hva/open_thread_july_115_2013/9by7
- http://lesswrong.com/lw/hsd/start_under_the_streetlight_then_push_into_the/98ur
- http://lesswrong.com/lw/hgm/open_thread_may_1731_2013/932n
- http://lesswrong.com/r/discussion/lw/jc8/harry_potter_and_the_methods_of_rationality/a7fo
↑ comment by gwern · 2014-09-14T21:25:10.723Z · LW(p) · GW(p)
Since December 2013:
- http://lesswrong.com/lw/kid/this_is_why_we_cant_have_social_science/b40h
- http://lesswrong.com/lw/kge/question_lesswrong_web_traffic_data/b2wq
- http://lesswrong.com/lw/kfb/open_thread_30_june_2014_6_july_2014/b1xd
- http://lesswrong.com/lw/k6r/open_thread_may_5_11_2014/avtl
- http://lesswrong.com/lw/jys/what_colleges_look_for_in_extracurricular/aqn6
- http://lesswrong.com/lw/jr8/open_thread_february_25_march_3/an4q
comment by [deleted] · 2013-04-03T14:31:21.206Z · LW(p) · GW(p)
Iain (sometimes M.) Banks is dying of terminal gall bladder cancer.
Of more interest is the discussion thread on Hacker News regarding cryonics. There are a lot of cached responses and a lot of misinformation going around on both sides.
Replies from: SilasBarta, Multiheaded, Kawoomba
↑ comment by SilasBarta · 2013-04-13T14:01:43.870Z · LW(p) · GW(p)
Great point I saw in the discussion:
Look at it like a cryptographer: Is putting a brain in liquid nitrogen a secure erasure method against all future attacks from a determined opponent with lots of resources? Would you trust your financial data to such a method of data erasure?
↑ comment by Multiheaded · 2013-04-03T16:17:00.876Z · LW(p) · GW(p)
It's really, really saddening that he of all people has been an outspoken deathist and now it's depriving him of any chance whatsoever. (Well, except for hypothetical ultra-remote reconstruction by FAI or something.)
Replies from: FiftyTwo
↑ comment by FiftyTwo · 2013-04-04T00:58:10.211Z · LW(p) · GW(p)
Where has he been an outspoken deathist?
Replies from: gwern, None
↑ comment by gwern · 2013-04-04T01:36:02.101Z · LW(p) · GW(p)
In the Culture novels, he has all humans just sorta choosing to die after a millennium of life, despite there being absolutely no reason for humans to die since available resources are unlimited, almost all other problems solved, aging irrelevant, and clear upgrade paths available (like growing into a Mind).
Replies from: FiftyTwo
↑ comment by FiftyTwo · 2013-04-04T12:18:37.079Z · LW(p) · GW(p)
It's not entirely clear-cut. He has had characters from outside the Culture describe it as a 'fashion' and a sign of the Culture's decadence. And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.
Either way, thinking a thousand years in the Culture is enough doesn't mean he thinks 70 years on Earth is enough. Has he ever made a direct comment about cryonics? I can't find any. So it's still possible he would be open to it given up-to-date information.
Replies from: gwern
↑ comment by gwern · 2013-04-04T13:44:08.362Z · LW(p) · GW(p)
And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.
Stories would tend to focus on characters who are interested or involved in traumatically interesting events, so I'm not sure how much one could infer from that.
Either way, thinking a thousand years in the culture is enough doesn't mean he thinks 70 years on earth is enough.
A thousand years instead of 70 is just deathism with a slightly different n.
Replies from: ArisKatsaris, None
↑ comment by ArisKatsaris · 2013-04-06T11:49:50.389Z · LW(p) · GW(p)
A thousand years instead of 70 is just deathism with a slightly different n.
Eh, I kinda agree with you in a sense, but I'd say there's still a qualitative difference if one has successfully moved away from the deathist assumption that the current status quo for life-span durations is also roughly the optimal life-span duration.
↑ comment by [deleted] · 2013-04-04T14:38:55.095Z · LW(p) · GW(p)
A thousand years instead of 70 is just deathism with a slightly different n.
Then some form of deathism may be the truth anyway.
On the other hand, I can't remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don't think the latter is incompatible with anti-deathism -- is Lazarus Long a deathist, after all?
EDIT: On the gripping hand, there's also a substantial bit of business in the Culture about subliming.
Instead of arguing on in this vein, I know that he's made comments in the past about how he believes death is a natural part of life. I just can't find the right interview now that "Iain Banks death" and variants are nearly-meaningless search terms.
Replies from: Douglas_Knight, gwern
↑ comment by Douglas_Knight · 2013-04-04T19:40:01.593Z · LW(p) · GW(p)
now that "Iain Banks death" and variants are nearly-meaningless search terms
If you want to search the past, go to google, search, click "Search tools," "Any time," "Custom range..." and fill in the "To" field with a date, such as "2008."
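The same filter can also be expressed directly in the search URL; an illustrative example below (the tbs/cdr parameters are Google's unofficial query-string encoding of the custom-range tool, so treat the exact format as an assumption that may change):

https://www.google.com/search?q=%22Iain+Banks%22+death&tbs=cdr:1,cd_min:1/1/2000,cd_max:12/31/2008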
↑ comment by gwern · 2013-04-04T15:05:32.417Z · LW(p) · GW(p)
On the other hand, I can't remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don't think the latter is incompatible with anti-deathism -- is Lazarus Long a deathist, after all?
I don't recall seeing any people who are supposed to be older than a thousand years without mechanics like cryostorage/scanning; if you present a world in which pretty much everyone does want to die after a trivial time period, you're presenting a deathist world and you may well hold deathist views.
EDIT: On the gripping hand, there's also a substantial bit of business in the Culture about subliming.
About not subliming, specifically.
Replies from: RolfAndreassen, None
↑ comment by RolfAndreassen · 2013-04-04T20:29:03.878Z · LW(p) · GW(p)
I don't recall seeing any people who are supposed to be older than a thousand years without mechanics like cryostorage/scanning
Such a character appears in the latest Culture novel, "The Hydrogen Sonata". But he is stated to be extremely unusual.
↑ comment by [deleted] · 2013-04-04T19:46:17.242Z · LW(p) · GW(p)
Philosophy, again; death is regarded as part of life, and nothing, including the universe, lasts forever. It is seen as bad manners to try and pretend that death is somehow not natural; instead death is seen as giving shape to life.
While burial, cremation and other - to us - conventional forms of body disposal are not unknown in the Culture, the most common form of funeral involves the deceased - usually surrounded by friends - being visited by a Displacement Drone, which - using the technique of near-instantaneous transmission of a remotely induced singularity via hyperspace - removes the corpse from its last resting place and deposits it in the core of the relevant system's sun, from where the component particles of the cadaver start a million-year migration to the star's surface, to shine - possibly - long after the Culture itself is history.
None of this, of course, is compulsory (nothing in the Culture is compulsory). Some people choose biological immortality; others have their personality transcribed into AIs and die happy feeling they continue to exist elsewhere; others again go into Storage, to be woken in more (or less) interesting times, or only every decade, or century, or aeon, or over exponentially increasing intervals, or only when it looks like something really different is happening....
I'm on the fence as to whether or not this really constitutes full-blown deathism or just a belief that sentient beings should be permitted to cause their own death.
Replies from: TheOtherDave, Nornagest
↑ comment by TheOtherDave · 2013-04-05T03:10:25.408Z · LW(p) · GW(p)
I suspect that any cultural norm inconsistent with treating the death of important life forms as an event to be eradicated from the world is at least an enabler to "deathism" as defined locally.
↑ comment by Nornagest · 2013-04-05T03:19:01.762Z · LW(p) · GW(p)
There seems to be some appeal to nature floating around in it, at the very least.
Sure, death is natural. So is Ophiocordyceps, but that doesn't mean I want parasitic mind-altering fungi in my life.
comment by Kaj_Sotala · 2013-04-01T16:04:44.921Z · LW(p) · GW(p)
I've been writing blog articles on the potential of educational games, which may be of interest to some people here:
- Why I'm considering a career in educational games
- Videogames will revolutionize school (not necessarily the way you think)
I'd be curious to hear any comments.
Replies from: RolfAndreassen, Viliam_Bur, Armok_GoB, latanius, John_Maxwell_IV, Qiaochu_Yuan
↑ comment by RolfAndreassen · 2013-04-01T16:58:50.183Z · LW(p) · GW(p)
I realise it's a constructed example, but a videogame that would be even remotely accurate in modelling the causes of the fall of the Roman Empire strikes me as unrealistically ambitious. I would at any rate start out with Easter Island, which at least is a relatively small and closed system.
Another point is that, if you gave the player the same levers that the actual emperors had, it's not completely clear that the fall could be prevented; but I suppose you could give points for doing better than the historical outcome.
Replies from: latanius, OrphanWilde
↑ comment by latanius · 2013-04-01T21:38:13.569Z · LW(p) · GW(p)
Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?
Sure, games about physics should be able to present a reasonably accurate model so that if you understand their model, you end up knowing something about physics... but with history:
actually, what's the goal of studying history?
- if the goal is to do well on tests, we already have a nice model for that, under the name of Anki. Of course, this doesn't make things really fun, but still.
- if we want to make students remember what happened and approximately why (that is, "should be able to write an essay about it"), we can make up an arbitrary, dumb and scripted thing, not even close to a real model, but exhibiting some mechanics that cover the actual reasons. (e.g. if one of the causes was "not enough well-trained soldiers", then make "Level 8 Advanced Phalanx" the thing to build if you want to survive the next wave of attacks.)
- if we'd like to see students discover general ideas throughout history, maybe build a game with the same mechanics across multiple levels? (and they also don't need to be really accurate or realistic.)
- and finally, if we want to train historians who could come up with new theories, or replacement emperors to be sent back in time to fix Rome... well, for that we would need a much better model indeed. Which we are unlikely to end up with. But do we need this level in most of the cases?
TL;DR: by creating games with wildly unrealistic but textbook-accurate mechanics we are unlikely to train good emperors, but at least students would understand textbook material much better than the current "study, exam, forget" cycle allows.
Replies from: Vaniver, RolfAndreassen
↑ comment by Vaniver · 2013-04-01T22:50:55.077Z · LW(p) · GW(p)
Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?
If what they learned about "evolution" comes from Pokemon, then yes.
Replies from: lfghjkl
↑ comment by lfghjkl · 2013-04-04T09:51:49.881Z · LW(p) · GW(p)
When did Pokemon become an educational game about evolution?
Replies from: Vaniver
↑ comment by Vaniver · 2013-04-05T20:51:31.982Z · LW(p) · GW(p)
Pokemon is an example of what an educational game which doesn't care about realism could look like. People should be expected to learn the game, not the reality, and that will especially be the case when the game diverges from reality to make it more fun/interesting/memorable. If you decide that the most interesting way to get people to play an interactive version of Charles Darwin collecting specimens is to make him a trainer that battles those specimens, then it's likely they will remember the battles best, because those are the most interesting part.
One of the research projects I got to see up close was an educational game about the Chesapeake; if I remember correctly, children got to play as a fish that swam around and ate other fish (and all were species that actually lived in the Chesapeake). If you ate enough other fish, you changed species upwards; if you got eaten, you changed species downwards. In the testing they did afterwards, they discovered that many of the children had incorporated that into their model of how the Chesapeake worked: if a trout eats enough, it becomes a shark.
Replies from: gwern
↑ comment by RolfAndreassen · 2013-04-02T00:21:24.576Z · LW(p) · GW(p)
It's true that you don't need a model that lets you form new theories of the downfall of the Empire; but my point is that even the accepted textbook causes would be very hard to model in a way that combines fun, challenge, and even the faintest hint of realism. Take the theory that Rome was brought down partly by climate change; what's the Emperor supposed to do about it? Impose a carbon tax on goats? Or the theory that it was plagues what did it. Again, what's the lever that the player can pull here? Or civil wars; what exactly is the player going to do to maintain the loyalty of generals in far-off provinces? At least in this case we begin to approach something you can model in a game. For example, you can have a dynastic system and make family members more loyal; then you have a tradeoff between the more limited recruiting pool of your family, which presumably has fewer military geniuses, versus the larger but less loyal pool of the general population. (I observe in passing that Crusader Kings II does have a loyalty-modelling subsystem of this sort, and it works quite well for its purposes. Actually I would propose that as a history-teaching game you could do a lot worse than CKII. Kaj, you may want to look into it.) Again, suppose the issue was the decline of the smallholder class as a result of the vast slaveholding plantations; to even engage with this you need a whole system for modelling politics, so that you can model the resistance to reform among the upper classes who both benefit by slavery and run most of your empire. Actually this sounds like it could make a good game, but easy to code it ain't.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-04-03T05:18:08.169Z · LW(p) · GW(p)
It gets even more complicated when these causes interact. A large part of the reason for the decline of smallholders and the rise of vast manors using serfs (slavery was in decline during that period) was the fact that farmers had to turn to the lords for protection from barbarians and roving bandits. The reason there were a lot of marauding bandits is that the armies were too busy fighting over who the next emperor was going to be to do their job of protecting the populace.
↑ comment by OrphanWilde · 2013-04-02T19:54:04.642Z · LW(p) · GW(p)
Dynasty Warriors and Romance of the Three Kingdoms, while heavily stylized and quite frequently diverging from actual history, nevertheless do a pretty good job of conveying the basics of the time period and region.
↑ comment by Viliam_Bur · 2013-04-02T09:45:59.142Z · LW(p) · GW(p)
A big part of education today is memorization. Perhaps that is wrong, but it is going to stay with us for a while anyway. And at least partially it is necessary; how else would one learn, e.g., the vocabulary of a foreign language?
So while it is great to invent games that teach principles instead of memorization, let's not forget that there is a ton of low-hanging fruit in making the memorization more pleasant. If we could just take all the memorization of elementary and high school and turn it into one big cool game, it would probably make the world a much better place. How many resources (especially human resources) do we spend today on forcing kids to learn things they try to avoid learning? Instead we could just give them a computer game, and leave teachers only with the task of explaining things. Everyone could get today's high-school-level education without most of the frustration.
Recently I started using Anki for memorization, and it seems to work great. But I still need a minimum of willpower to start it every day. For me that is easy, because with my small amount of data I usually get 10-20 questions a day; if I tried to use it for all of high-school knowledge, that would be much more. Also, today I know exactly why I am learning, but for a small child it is an externally imposed duty, with uncertain rewards in the very far future. So some additional rewards would be nice.
It could be interesting to make a school where in the morning the students would play some gamified Anki system, and in the afternoon they would work in groups or discuss topics with teachers.
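The scheduling core such a gamified system would need is tiny. A minimal Python sketch of an SM-2-style interval update (the algorithm family Anki's scheduler descends from; the constants are the conventional SM-2 ones, and the function name is made up):

# Minimal SM-2-style scheduler sketch (illustrative, not Anki's exact code).
# q is the self-reported recall grade, 0 (blackout) to 5 (perfect).

def sm2_update(q, repetitions, interval_days, easiness):
    """Return updated (repetitions, interval_days, easiness) after a review."""
    if q < 3:               # failed recall: start the card over
        return 0, 1, easiness
    # Update the easiness factor; 1.3 is SM-2's conventional floor.
    easiness = max(1.3, easiness + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
    if repetitions == 0:
        interval_days = 1
    elif repetitions == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * easiness)
    return repetitions + 1, interval_days, easiness

# Example: a card reviewed three times with grade 4 each time.
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_update(4, *state)
print(state)  # intervals grow 1 -> 6 -> 15 days

The game layer would then just be a reward loop on top of this update rule.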
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-04-02T13:23:17.896Z · LW(p) · GW(p)
Sure, that's big too. I just didn't talk about it as much because everyone else seems to be talking about it already.
↑ comment by Armok_GoB · 2013-04-07T01:13:12.552Z · LW(p) · GW(p)
These games already exist for many things, good enough that watching Let's Plays of them is probably more efficient than most deliberately educational videos. The tricky part is finding them and realizing that that's what they are.
Games relevant to this discussion include Rome: Total War and Kerbal Space Program. Look them up.
Replies from: drethelin, Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-04-07T11:11:14.716Z · LW(p) · GW(p)
Yes, these are good examples.
↑ comment by latanius · 2013-04-01T17:18:37.224Z · LW(p) · GW(p)
Have you played the Portal games? They include lots of the things you mention... they introduce how to use the portal gun, for example, not by explaining stuff but by giving you a simplified version first... then the full feature set... and then there are all the other things with different physical properties. I can definitely imagine some Portal Advanced game where you'll actually have to use equations to calculate trajectories.
Nevertheless... I'd really like to be persuaded otherwise, but the ability to read Very Confusing Stuff, without any working model, and make sense of it can't really be avoided after a while. We can't really build a game out of every scientific paper, given the amount of time required to write a game vs. a page of text... (even though I'd love to play games instead of reading papers. And it definitely sounds doable with CS papers. What about a conference accepting games as submissions?)
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-04-03T15:50:50.697Z · LW(p) · GW(p)
I've played the first Portal game for a bit, and I liked it, but haven't finished it because puzzle games aren't that strongly my thing. I wonder whether not liking them much is a benefit or a disadvantage for an edugame designer. :-)
the ability to read Very Confusing Stuff, without any working model, and make sense of it can't really be avoided after a while
True enough. But I don't think that very much of education consists of trying to teach this skill in the first place (though one could certainly argue that it should be taught more), and having a solid background in other stuff should make it easier when you do get to that point.
Replies from: Bobertron
↑ comment by Bobertron · 2013-04-03T17:11:45.013Z · LW(p) · GW(p)
What I found fascinating about Portal is the effort they made in testing the game on players. There is a play-mode with developer commentary (though perhaps it's only available after the first play-through) in which they comment on all the details they changed to make sure that the players learned the relevant concepts, that they didn't forget them, and that they have enough hints to solve the puzzle (for example, it's difficult to make a player look up). It'd be awesome if educational material (not necessarily just edugames) or even whole courses were designed and tested that well.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-04-06T09:28:47.109Z · LW(p) · GW(p)
Thanks, I saw the developer commentary option but didn't try it out. Now that you've told me what it consists of, I'll have to check it out.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-04-07T01:10:04.718Z · LW(p) · GW(p)
One point is that while memorizing the specific causes of the fall of the Roman Empire may not be especially useful, acquiring the self-discipline necessary to do so without a game to motivate you might be very useful.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-04-07T11:15:48.333Z · LW(p) · GW(p)
Perhaps, but if the task doesn't also feel interesting and worthwhile in itself, then we're effectively teaching kids that much of learning is dull, pointless and tedious, detached from anything of real-world significance, and something you only do because the people in power force you to. That's one of the most harmful attitudes anyone can pick up. Let's associate learning with something fun and interesting first, and then channel that interest into the ability to motivate yourself even without a game later on.
↑ comment by Qiaochu_Yuan · 2013-04-01T18:31:10.245Z · LW(p) · GW(p)
There are many people having thoughts along these lines, I think. Before forging ahead too much on your own it would be worth poking around to see what's already being done (e.g. Valve started some kind of initiative to get Portal played in schools as part of physics classes or something).
comment by niceguyanon · 2013-04-05T18:15:28.498Z · LW(p) · GW(p)
I mentioned that I was attending a Landmark seminar. Here is my review of their free introductory class that hopefully adds to the conversation for those who want to know:
Coaches - They are the people who lead the class, and I found them to be genuine in their belief in the benefits of taking the courses. These coaches were unpaid volunteers. I found their motives for coaching to be self-improvement and, to some degree, altruism. In short, it helped them, and they really want to share it.
Material - The intro course consists of informative ideas rather than exercises. These ideas are also trademarked phrases, which makes them gimmicky and gives them more importance than an idea really warrants. We were not told these ideas were evidence-based: lots of information on how to improve one's life was thrown around, but no research or empirical evidence was given. Not once were the words "cognitive science" or "rationality" used. I speculate that the value the course gives its students comes not from the informative ideas, but from the exercises and the motivation one gets from being actively pushed by the coaches to pursue goals.
Final thoughts - If you are rationality-minded then this is not for you. I am no worse for going, and I do not believe that anyone who is rationality-minded and attends will be worse off either; however, I do believe that attending is most likely damaging to the rationality of a person who is naive about rationality to begin with. I have never attended CFAR, but just from browsing their website I can tell that Landmark is very far from what CFAR does. I think people in general would benefit more from attending CFAR than Landmark.
comment by Qiaochu_Yuan · 2013-04-05T04:25:21.286Z · LW(p) · GW(p)
I am generally still very bad at steelmanning, but I think I am now capable of a very specific form of it. Namely, when people say something that sounds like a universal statement ("foos are bar") I have learned to at least occasionally give them the benefit of the doubt instead of assuming that they literally mean "all foos are bar" and subsequently feeling smug when I point out a single counterexample to their statement. I have seen several people do this on LW lately and I am happy to report that I am now more annoyed at the strawmanners than the strawmanned in this scenario.
Replies from: ThrustVectoring, John_Maxwell_IV
↑ comment by ThrustVectoring · 2013-04-05T04:32:25.349Z · LW(p) · GW(p)
It sounds to me like it has a lot in common with the noncentral fallacy. There's a general tendency to think of groups in terms of their central members and not their noncentral ones. This both makes sneaking in connotations by noncentral labels possible, and makes "all central foos are bar" feel like the same thing as "all foos are bar".
Replies from: Thomas
↑ comment by Thomas · 2013-04-05T09:57:30.603Z · LW(p) · GW(p)
Even more so with "No foo is a bar". Those statements are most probably either near-definitional, like "no mammal is a bird", and therefore not very informative, or else improbable, like "no man can live more than X minutes without oxygen, ever".
In the latter case, even if the X is huge, we can assume that maybe it can be done under some (as yet unseen) circumstances.
In other words, don't be too hasty with universal negations!
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-04-07T01:24:14.327Z · LW(p) · GW(p)
Why can't people say "some foos are bar" or "foos tend to be bar"? My default interpretation of "foos are bar" is "all foos are bar". I tend to classify confident assertions that "foos are bar" with clear counterexamples as blustering. We already know from Philip Tetlock's work that hedgehogs who make predictions based on simple models tend to be more confident, more widely quoted in the media, and more wrong than foxes who make equivocal, better-calibrated predictions based on more complicated models.
I think there may actually be a bit of a group coordination problem here--hedgehogs gain status from appearing confident and getting quoted in the media, but they're spreading low-quality info. So it's a case of personal gain at the group's expense. I'm inclined to call people out for hedgehog-style behavior as a way of dealing with this coordination problem. (In case it's not obvious, I frequently see hedgehog-style predictions from LW-affiliated people and find them annoying and unconvincing.)
Replies from: Qiaochu_Yuan, elharo
↑ comment by Qiaochu_Yuan · 2013-04-07T01:51:08.265Z · LW(p) · GW(p)
Why can't people say "some foos are bar" or "foos tend to be bar"?
I mean, of course they can, but sometimes they won't. People aren't careful with their language and it's uncharitable to assume that people mean what you think their words should have meant instead of what they most likely actually meant.
I also think you have a different prototypical case in mind than I do. I'm thinking the kind of nitpicking where someone says something like "fire is hot" and someone responds "nuh-uh, there's a special type of fire you can make that is actually cool to the touch" or something like that.
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-04-07T02:33:59.294Z · LW(p) · GW(p)
I also think you have a different prototypical case in mind than I do.
Fair.
↑ comment by elharo · 2013-04-07T21:19:33.043Z · LW(p) · GW(p)
People can't/don't say that "some foos are bar" or "foos tend to be bar" because it is often less accurate than "all foos are bar" or better yet "foos are bar". This is because truth is fuzzy, not binary or digital. For example, "some humans have two arms" gives you very little information. Do 10 out of seven billion humans have two arms? 6.99999 billion out of 7 billion? Maybe half of humans?
By contrast the statement, "humans each have two arms" or even "all humans have two arms" is mostly true, probably better than 99% true, despite the existence of rare counter examples. You can make useful plans based on the knowledge that "all humans have two arms".
If we see truth as binary, and allow a mostly true statement to be invalidated by a single rare counterexample, we have lost a lot of real information. If I know that 100% of humans have two arms, I have a more complete and accurate, though imperfect, view of the world than if I know only that "some humans have two arms".
Best of all of course is if I know that 99.9834% +/- 0.0026% of humans have two arms. However absent such precise information, the statement "humans have two arms" is a pretty accurate and useful representation of reality.
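Intervals of that shape are easy to produce, for what it's worth. A minimal Python sketch of a normal-approximation (Wald) binomial confidence interval, using made-up survey counts (the 99.9834% figure above is itself illustrative, not a real measurement):

import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Made-up sample: 99,983 two-armed people out of 100,000 surveyed.
low, high = wald_interval(99_983, 100_000)
print(f"{low:.6f} to {high:.6f}")  # roughly 0.999749 to 0.999911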
comment by SilasBarta · 2013-04-01T22:51:54.029Z · LW(p) · GW(p)
Looks like Scott Adams has given Metamed a mention. (lotta m's there...)
I find it particularly interesting because a while back he himself was a great example of a patient independently discovering, against official advice, that their rare, debilitating illness could be cured -- specifically, that of losing his voice due to a neurological condition. He doesn't mention it in the blog post though.
(At least, I think this is a better example to use than the woman who found out how to regenerate her pinky.)
Replies from: FiftyTwo, NancyLebovitz
↑ comment by FiftyTwo · 2013-04-01T23:26:41.183Z · LW(p) · GW(p)
[Aside] I'm not sure how I feel about Scott Adams in general. I enjoyed his work a lot when I was younger, but he seems very prone to being contrarian for its own sake and over-estimating his competence in unrelated domains.
Replies from: pragmatist, SilasBarta
↑ comment by pragmatist · 2013-04-02T16:03:10.312Z · LW(p) · GW(p)
I was a big Dilbert fan in my mid-teens and bought all his books. In one of them (The Dilbert Future, I think), he has this self-confessedly serious chapter about questioning received assumptions and thinking creatively. As an example, he suggests an alternate explanation for gravity, which he claims is empirically indistinguishable from the standard theory (prima facie, at least). His bold new theory: everything in the universe is just getting bigger all the time. So when we jump in the air the Earth and our bodies get bigger so they come back into contact. Seriously. Even as a fourteen-year-old, it took me only a few minutes to think of about five reasons this could not be true.
I read that book in the late 90s, and I've read very little by Scott Adams since then. In recent years, I've heard a few people cite him as a generally smart and thoughtful guy, and I have a very hard time reconciling that description with the author of that monumentally stupid chapter.
Replies from: NancyLebovitz, maia
↑ comment by NancyLebovitz · 2013-04-02T16:08:57.173Z · LW(p) · GW(p)
It's conceivable that he focuses down on things that are important to him, and is quite content to do more or less humorous BS the rest of the time.
↑ comment by maia · 2013-04-02T17:27:44.145Z · LW(p) · GW(p)
IIRC, in that chapter, he also discussed how quantum mechanics (specifically the double slit experiment) meant that information could travel backwards in time...
Replies from: pragmatist, SilasBarta
↑ comment by pragmatist · 2013-04-03T04:35:23.393Z · LW(p) · GW(p)
I don't remember that specifically, but it would be one of the less crazy things he says. There are sound theoretical motivations for a retro-causal account of quantum mechanics, although a successful retro-causal model of the theory is yet to be constructed (John Cramer's transactional interpretation comes close).
However, I do remember Adams endorsing something like The Secret in the chapter, where you can change the world to your benefit merely by wanting it enough. I don't entirely recall whether he sees this as a consequence of quantum retro-causality, but I think he does, and if that's the case then yeah, the quantum stuff is batshit too.
Replies from: maia
↑ comment by maia · 2013-04-03T12:46:31.994Z · LW(p) · GW(p)
Yes, he does. It's not necessarily "wanting it enough," though; he specifically instructs that you have to pick a sentence that describes what you want, such as "I want to get rich in the stock market" - specific, but not too specific - and write it, by hand, in a notebook designated for this purpose, at least 10 times each night. He claims that by doing this, he did in fact make a lot of money in the stock market, and became the most popular cartoonist in the world by a metric he specified (some index, I don't remember which).
Not really connected to the quantum stuff, and possibly not as crazy. I think he mentions some possibility that all it actually does is force you to focus on your goals, which subconsciously makes you more responsive to opportunities, or something.
↑ comment by SilasBarta · 2013-04-07T20:35:23.836Z · LW(p) · GW(p)
Confession: I was taken in by that section too for a while ... a long while. In fact, when Eliezer's quantum physics series started, my initial reaction was, "oh, I wonder how he's going to handle the backwards-in-time stuff!"
↑ comment by SilasBarta · 2013-04-02T00:12:03.366Z · LW(p) · GW(p)
I agree in a lot of respects. But if you can cure such a major disorder when professionals, who are supposed to know this stuff, think it's impossible, and do it by your own research ... well, you have credibility on that issue.
↑ comment by NancyLebovitz · 2013-04-01T23:15:33.546Z · LW(p) · GW(p)
I'm a little surprised he didn't try Alexander Technique, an efficient movement method which was developed by F.M. Alexander to cure his serious problems with speaking-- problems which sound a good bit like vocal dystonia.
The problem may be that F.M. Alexander was an actor, and his technique has remained best known in the theater arts community.
In other news, Too Loud, Too Bright, Too Fast, Too Tight is about people whose range of sensory comfort is mismatched to what's generally expected. It's a problem that doesn't just happen to people on the autistic spectrum.
There's some help for it-- what was in the book was putting people in a non-stressful environment and gradually introducing difficult stimuli-- but working with this problem is cleverly concealed under occupational therapy, where no one is likely to find it.
comment by gwern · 2013-04-03T15:05:09.858Z · LW(p) · GW(p)
I'm working on an analysis of Google service/product shutdowns, inspired by http://www.guardian.co.uk/technology/2013/mar/22/google-keep-services-closed
The idea is to collate as many shuttered Google services/products as possible, and still live services/products, with their start and end dates. I'm also collecting a few covariates: number of Google hits, type (program/service/physical object/other), and maybe Alexa rank of the home page & whether source code was released.
This turns out to be much more difficult than it looks, because many shutdowns are not prominently advertised, and many start dates are lost to our ongoing digital dark age (for example, when did the famous & popular Google Translate open? After an hour of applying my excellent research skills, plus help from #lesswrong and no fewer than 5 people on Google+, the best we can say is that it opened some time between 02 and 08 March 2001). Regardless, I'm up to 274 entries.
The idea is to graph the data, look for trends, and do a survival analysis with the covariates to extrapolate how much longer random Google things have to live.
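A minimal sketch of what such an analysis could look like in Python, using the lifelines library (the file and column names here are placeholders for whatever the final dataset uses, with still-live products treated as right-censored observations):

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# One row per Google product: lifespan in days, whether the shutdown was
# observed (dead=1) or the product is still alive (dead=0, i.e. censored),
# plus covariates such as log Google hits and a product-type indicator.
df = pd.read_csv("google_products.csv")  # hypothetical file

# Kaplan-Meier estimate: probability a product survives past t days.
kmf = KaplanMeierFitter()
kmf.fit(df["days"], event_observed=df["dead"])
print(kmf.median_survival_time_)

# Cox proportional-hazards regression to see how covariates shift risk.
cph = CoxPHFitter()
cph.fit(df[["days", "dead", "log_hits", "is_service"]],
        duration_col="days", event_col="dead")
cph.print_summary()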
Does anyone have suggestions as to additional predictive variables which could be found with a reasonable amount of effort for >274 Google things?
Replies from: TimS
↑ comment by TimS · 2013-04-03T16:37:19.814Z · LW(p) · GW(p)
Do you have registered user numbers for those services where this is meaningful?
Also, I assume you've seen this and this, but just in case you haven't . . .
Replies from: gwern
↑ comment by gwern · 2013-04-03T16:44:39.217Z · LW(p) · GW(p)
Do you have registered user numbers for those services where this is meaningful?
Hm, no. I figured that I would be able to get such numbers for a handful of services at best, and it wasn't worth the effort. (I mean, Google didn't even release user count for Reader as far as I know. So I'd be able to get random user counts for like YouTube and Gmail and that's it. If even those.)
Yeah, I've seen those.
Replies from: TimS
↑ comment by TimS · 2013-04-03T16:50:11.369Z · LW(p) · GW(p)
Might the possibility of registering be an interesting variable? Or the possibility of paying money for the service?
Confession: I'm interested in this type of study because the second article I referenced mentioned that Google Voice seemed on a doomed trajectory, and I use Google Voice all the time.
Replies from: gwern, gwern
↑ comment by gwern · 2013-05-03T20:07:19.449Z · LW(p) · GW(p)
Update: I've finished, and I'm afraid it doesn't look good for Voice.
Replies from: TimS
↑ comment by TimS · 2013-05-04T18:31:42.421Z · LW(p) · GW(p)
I appreciate it - I suspected as much based on the Slate article. Even before reading it, I was constantly surprised that Google hadn't announced that Voice would cost money. And Voice is hardly central to Google's apparent mission, the way you correctly note Calendar seems to be.
I'm trying to estimate how much longer it will last - i.e., when I should start looking for a different service. Given the low likelihood of five-year survival, and that the product is already about five years old, I should probably get a move on.
↑ comment by gwern · 2013-04-03T16:53:10.031Z · LW(p) · GW(p)
Ah, that's a good suggestion: 'is someone paying Google for this?' This subsumes the advertising covariate I was musing about (I didn't bring it up because, looking at a bunch of the dead services, I didn't see how I could possibly check whether advertising was involved). I'll add that.
And yes, I would be worried about Voice's long-term prospects, but my impression is that it won't be going away within the next 5 years, say. They've sunsetted the Blackberry app, and neglected the service, but that still leaves the main service and 3 other apps/services according to my current list.
Replies from: TimS
comment by SilasBarta · 2013-04-03T01:18:43.053Z · LW(p) · GW(p)
It was recently brought to my attention that Eliezer Yudkowsky regards the monetary theories of Scott Sumner (short overview) as (what we might call) a "correct contrarian cluster", or an island of sanity where most experts (though apparently a decreasing number) believe the opposite.
I would be interested in knowing why. To me, Sumner's views are a combination of:
a) Goodhart's folly ("Historically, an economic metric [that nobody cared about until he started talking about] has been correlated with economic goodness; if we only targeted this metric with policy, we would get that goodness. Here are some plausible mechanisms why ..." -- my paraphrase, of course)
b) Belief that "hoarded" money is pure waste with no upside. (For how long? A day? A month?)
If you are likewise surprised by Eliezer's high regard for these theories, please join me in encouraging him to explain his reasoning.
Replies from: bogus, Ante
↑ comment by bogus · 2013-04-03T08:04:34.982Z · LW(p) · GW(p)
To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well. They include most obviously Iceland (despite a severe financial crisis), and some less obvious instances like Australia, Poland and Israel. One could point to the UK as the clearest counterexample, but just about everyone agrees that they have severe structural problems, which NGDPLT is not intended to address. And even then, monetary easing has allowed the conservative government to implement fiscal austerity without crashing the economy - this was widely expected to happen, and there was a lot of public concern (compare the situation in the US wrt the "fiscal cliff" and "sequestration" scares; here too, the Fed offset the negative fiscal effect by printing money).
As for (b), nobody argues that money hoarding is a bad thing per se. But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations. Inflation targeting is a very rough way of doing this, but it's just not good enough (see George Selgin's book Less than Zero for an argument to this effect). ISTM that this is not well understood in the mainstream ("NK") macro literature, where supply shocks are confusingly modeled as "markup shocks". I have seen cutting-edge papers pointing out that these make inflation targeting unsound (sorry for not having a ref here).
Replies from: SilasBarta
↑ comment by SilasBarta · 2013-04-04T19:54:28.753Z · LW(p) · GW(p)
To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well.
None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence. Rather, they had policies which, despite not specifically intending to, were followed by rising NGDP. The purported similarity to NGDPLT is typically justified on the grounds that the policy caused something related to happen, but there is a very big difference between that and directly targeting NGDP. And hence why it can't demonstrate why targeting a metric (that, again, no one even cared about until Sumner started blogging about it) will have the causal power that is claimed of it.
As for (b), nobody argues that money hoarding is a bad thing per se.
I disagree: I have yet to see any anti-hoarders mention anything positive whatsoever about hoarding; they take it as a given that hoarding is bad. Landsburg says it better than I can: the very people promoting anti-hoarding policies lack any framework in which you can compare the benefits of hoarding to the hoarders against its costs, and thus know whether it's bad on net. The best answer he gets is essentially, "well, it's obvious that there's a shortfall that needs to be rectified" -- in other words, it's just assumed.
To find an example of anyone saying anything positive about hoarding, you have to go to fringe Austrian economists, like in this article.
But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations.
But until you've quantified (or at least acknowledged the existence of) the benefits of hoarding, you can't know if these supposed misallocations are worse than the benefits given by the hoarding. You can't even know if they are misallocations, properly understood.
For once you accept that there's a benefit to hoarding, the changes in prices induced by it are actually vital market signals, just like any other price. Which would mean that you can't eliminate the price change without also destroying information that the market uses to improve resource use. I mean, oil shocks cause widespread price changes, but any attempt to stop those price changes is going to worsen the misallocation problem.
Toy example to illustrate the benefits, and important signal sent by, hoarding: let's say we have a class of typical investors, with no special non-public knowledge about specific companies. So when they invest, they invest in the economy as a whole. (Let's say they won't even consider using this part of their money for consumption.) But! 70% of the economy's investment venues are unsustainable and are actually destroying value in a way not currently obvious. In that case, it would be much better for these potential investors to hoard, rather than further advance this malinvestment. Sure, they'll starve the good 30% of projects of funds, but they'll also pull back on the bad 70%.
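(A back-of-the-envelope version of this toy example in Python, with assumed returns -- the 20% loss and 10% gain below are made-up numbers, chosen only to show the sign of the effect:)

# Toy numbers, purely illustrative: an undiscriminating investor spreads
# money evenly across all venues, 70% of which quietly destroy value.
p_bad, r_bad = 0.70, -0.20    # assumed: bad venues lose 20%
p_good, r_good = 0.30, 0.10   # assumed: good venues gain 10%

expected_return = p_bad * r_bad + p_good * r_good
print(expected_return)  # -0.11: investing broadly loses ~11%; hoarding loses 0%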
So I have yet to see any actual recognition of the benefits of hoarding among this group, which puts them in a ridiculous position. If holding money is bad, then the optimal situation is for any money received to be instantly spent on something else (whether consumption or investment). But this requires that you know what you're going to spend the money on before you earn it -- which just takes us back to barter! Thus, we see the benefit of hoarding/holding money: retaining the option value when you lack certainty about what you will spend it on. It thus signals consumers' uncertainty that they will be able to enter sustainable patterns of trade, and cannot be costlessly squashed (just as another school of economics once thought of interest -- that it could be zeroed without negative consequence).
Replies from: bogus
↑ comment by bogus · 2013-04-05T07:40:22.306Z · LW(p) · GW(p)
None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence.
I think my examples do constitute supporting evidence of some kind. Yes, it would be good to have examples of countries specifically targeting NGDP, to prevent spurious correlations or Lucas critique problems. But even so, Iceland and to a lesser extent, Poland - and, to be fair, the UK - specifically accepted a rise in inflation in order to sustain demand - it wasn't a simple case of exogenously strong RGDP growth. (I think this might also apply to Australia, actually. Their institutional framework would certainly allow for that.) This makes the evidence quite credible, although it's not perfect by any means.
Also, Sumner was not at all the first economist to care about NGDP as a possible target. He is a prominent popularizer, but James Meade and Bennett McCallum had proposed it first.
Your example of the "benefits of hoarding" doesn't address the very specific problems with hoarding the unit of account for all prices in the economy, when prices are hard to adjust. Yes, money has a real option value, so money hoarding might signal some kind of uncertainty. However, you have not made the case that this "signaling" has any positive effects, especially when the operation of the price system is clearly impaired. By analogy, if peanuts were the unit of account and medium of exchange, then widespread hoarding of peanuts might signal uncertainty about the next harvest. But it would still cause a recession, and it wouldn't actually cause the relative price of peanuts to rise (or rise much at any rate), which is what might incent additional supply.
Moreover, in practice, an uncertain agent can attain most (if not all) of the benefit of hoarding money by holding some other kind of asset, such as low-risk bonds, gold or whatever the case may be. It's not at all clear that hoarding money specifically provides any additional benefit, or that such incremental benefits could be sustained without inflicting greater costs on other agents.
↑ comment by Ante · 2013-04-03T05:27:02.489Z · LW(p) · GW(p)
Yes!
The comment is from the Hacker News thread about Bitcoin hitting $100. It would be cool to have him also expand more on Bitcoin itself, which he seems to regard as destructive but not necessarily doomed to fail. Here he entertains the idea of combining NGDP level targeting (which I don't understand) with the best parts of Bitcoin. This all sounds very interesting.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-04-03T12:31:43.389Z · LW(p) · GW(p)
Bitcoin hitting $100
I downloaded a Bitcoin client a couple weeks ago and was going to buy a few bitcoins, but the inconvenience of having to get a Mt. Gox account or something made me keep putting it off. Whoops. Hopefully this'll teach me to be less of a procrastinator.
Replies from: Kaj_Sotala, Kawoomba, None
↑ comment by Kaj_Sotala · 2013-04-03T15:47:21.015Z · LW(p) · GW(p)
You think that's bad? I considered buying a ~hundred bitcoins after the last crash, when they were going for less than $1, but could never be bothered. :-)
↑ comment by Kawoomba · 2013-04-03T13:10:48.681Z · LW(p) · GW(p)
Bitcoin is a form of currency that's supposed to be used. Now that so many people are jumping on the speculation train, and the rest mostly hold onto their bitcoins in the hope the price keeps rising, the practical viability of bitcoins can be called into question. I saw a stick of RAM for sale for the equivalent of over a thousand dollars. For a currency to rise so fast is terribly disruptive, and the rise (lack of supply in relation to demand) itself creates a vicious circle, since the faster it rises, the more bitcoin holders are tempted to keep their bitcoins out of circulation.
If the price is then set by only the very small portion that's up for sale (with the rest being held, often for speculative purposes) - compared to a lot of prospective buyers like you who want to join the gold rush - what do you think could easily happen once some people start cashing in? The price drops; seeing the price dropping, more people want to cash in; and suddenly there's a very large portion up for sale, in conjunction with a loss of interested buyers (most don't buy into a falling market, tautologically).
My advice, which I may even follow myself: buy at 100, sell at 150, never look back.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-04-03T16:24:26.649Z · LW(p) · GW(p)
Well... what I expected to happen when I downloaded the client was for the value of bitcoins to stay about the same (as it had done in the last couple months of 2012) or to rise by 10%-ish per week (as it had done in the first couple months of 2013). If I had bought some when they were at 50-ish, I would definitely be selling most of them now. And right now I don't feel like buying something that costs 20% more than it did literally yesterday.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-04-06T11:11:10.541Z · LW(p) · GW(p)
And right now I don't feel like buying something that costs 20% more than it did literally yesterday.
Forgive me for stating the obvious: this sounds like the sunk cost fallacy. There's a cost in that you did not buy coins when they were cheaper, and though this does affect how you feel about the issue, it shouldn't (instrumental-rationally) affect your choices.
I did buy coins when they were at ~40$, and I was then regretting that I hadn't bought more when two weeks earlier they were at 10$. When they were at 70$ I chose to buy some more -- and I regretted not buying more when they were at 40$. But both my buy at 40$ and my buy at 70$ were good ones.
Now bitcoins are at around 141$ to 143$. Whether to buy or not buy at this point should depend on an estimation of whether the price is going to go up or down from here -- and your estimation of how soon and how far the price of bitcoin is going to rise or crash from this point onwards. There's always a risk and a chance.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-06T19:08:14.442Z · LW(p) · GW(p)
I was going to reply “Actually, I meant it in the sense that given that their price has changed so quickly, I don't trust their price to not fall by 40% while I'm sleeping”, but I'm afraid that that would just be a rationalization. (I might buy some if their price doesn't change so much in the next couple days.)
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-04-06T21:45:31.998Z · LW(p) · GW(p)
Okay, then let me just warn people in general that transferring money from a bank to the usual bitcoin exchanges (mtgox, bitstamp, etc) may by itself take a couple days -- they don't tend to accept some of the faster methods like paypal.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-11T11:20:39.159Z · LW(p) · GW(p)
That's what prevented me from owning bitcoins yesterday afternoon while their price halved.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-04-11T19:23:48.331Z · LW(p) · GW(p)
This delay has both accidentally helped and hindered me in the past -- it helped when it prevented me from buying in at 30 before the first crash in 2011; it hindered me now, when I couldn't buy in at 90 as I had wanted before the price rose to 140.
My bitcoin transactions during the last couple months have gotten me perhaps a 5000$ gain on the whole. It's sad to think that I could have gained four times as much if I had sold one day earlier than I did; still, I came out of this round benefitting, as I did back in 2011 (back then I had perhaps gotten a 2000 or 3000$ gain).
Now, I'm debating with myself whether to reinvest the money I made on bitcoin, or whether the price is going to drop further... (it's currently around 70$ on the exchanges which are still open, like bitstamp.net)
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-12T18:26:24.469Z · LW(p) · GW(p)
You see that nice near-vertical drop around 15:00 UTC yesterday? That was while I was on the bus on my way home, when I had about 0.42 bitcoins on bitcoin-24.com (I'm not crazy enough to play with more money than that at the moment). #$%&. By this morning, I had somehow managed to get back all of the value through sheer luck by repeatedly selling and buying at the right times. Now bitcoin-24.com is down, and I don't know whether that happened before or after my offer to sell most of the bitcoins I had left at €80/BTC was accepted. (From this graph I guess I was lucky.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-13T20:29:47.921Z · LW(p) · GW(p)
By this morning, I had somehow managed to get back all of the value through sheer luck by repeatedly selling and buying at the right times.
Sounds like maybe it wasn't just timing. :-|
↑ comment by [deleted] · 2013-04-04T05:11:47.644Z · LW(p) · GW(p)
So do you have a to-do list that you wrote down "buy a few bitcoins" in? If not, maybe you didn't actually procrastinate; maybe you just forgot.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-04T11:18:55.396Z · LW(p) · GW(p)
So do you have a to-do list that you wrote down "buy a few bitcoins" in?
I don't.
If not, maybe you didn't actually procrastinate; maybe you just forgot.
More like, first the former, then the latter. ;-)
comment by wedrifid · 2013-04-04T05:14:22.954Z · LW(p) · GW(p)
The free will page is obnoxious. There have been several times in recent months when I have needed to link to a description of the relationship between choice, determinism and prediction, but the wiki still goes out of its way to obfuscate that knowledge.
One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.
That's a nice thought. But it turns out that many LessWrong participants don't try to solve it on their own. They just stay confused.
There have been some other discussions of the subject here (and countless elsewhere). Can someone suggest the best reference available that I could link to?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-04T07:46:11.385Z · LW(p) · GW(p)
Can someone suggest the best reference available that I could link to?
The free will (solution) page?
comment by Shmi (shminux) · 2013-04-08T23:36:20.233Z · LW(p) · GW(p)
You know you spend too much time on LW when someone mentioning paperclips within earshot startles you.
comment by A1987dM (army1987) · 2013-04-02T12:30:18.179Z · LW(p) · GW(p)
How many people will agree with a statement depends on what typeface it's written in.
Replies from: GLaDOS, SilasBarta, itaibn0↑ comment by GLaDOS · 2013-04-04T13:03:39.675Z · LW(p) · GW(p)
From this day forward all speculation and armchair theorizing on LessWrong should be written in Comic Sans.
Replies from: army1987, David_Gerard↑ comment by A1987dM (army1987) · 2013-04-04T14:11:48.259Z · LW(p) · GW(p)
For some reason, my mind is picturing that sentence written in Comic Sans. (Similar things often happen to me with auditory imagery, e.g. when I read a sentence about a city I sometimes imagine it spoken in that city's accent, but this is the first time I recall this happening with visual imagery.)
↑ comment by David_Gerard · 2013-04-06T16:01:08.080Z · LW(p) · GW(p)
↑ comment by SilasBarta · 2013-04-03T01:56:33.344Z · LW(p) · GW(p)
Shouldn't it? Isn't epistemic hygiene correlated with font choice in known cases? I mean, if someone posts something in Comic Sans ...
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-03T12:16:10.497Z · LW(p) · GW(p)
I'd expect that to be mostly screened off by e.g. grammar and wording, though. (If I had read that passage about asteroids as existential risk written in Comic Sans, I would probably have assumed that the person who chose the font wasn't the same person who wrote the passage.)
↑ comment by itaibn0 · 2013-04-02T20:39:17.828Z · LW(p) · GW(p)
Eyeballing this, the effect size is tiny. Looking at their own measurements, it is statistically significant, but barely.
ADDED: Hmm... I missed the second page, which has more explanation of the analysis. In particular:
But this analysis gives us a way to quantify the advantage to Baskerville. It’s small, but it’s about a 1% to 2% difference — 1.5% to be exact, which may seem small but to me is rather large... Many online marketers would kill for a 2% advantage either in more clicks or more clicks leading to sales.
Point taken. This is large enough that it might be useful. However, I don't think it is a large enough bias to be important for rationalists.
Replies from: gwern, gwern↑ comment by gwern · 2013-04-03T03:52:10.506Z · LW(p) · GW(p)
Depends. It would certainly be interesting to know for, say, the LW default CSS. I think I'll A/B test this Baskerville claim on gwern.net at some point.
EDIT: in progress: http://www.gwern.net/a-b-testing#fonts
↑ comment by gwern · 2013-06-16T22:30:24.023Z · LW(p) · GW(p)
My A/B test has finished: http://www.gwern.net/a-b-testing#fonts
Baskerville wasn't the top font in the end, but the differences between the fonts were all trivial even with an ungodly large sample size of n=142,983 (split over 4 fonts). I dunno if the NYT result is valid, but if there's any effect, I'm not seeing it in terms of how long people spend reading my website's pages.
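To give a sense of why a null result at that sample size is informative rather than underpowered, here is a sketch with made-up numbers (not the actual test data): with ~36,000 visitors per font, a one-way ANOVA on log time-on-page picks up even a ~5% difference between fonts.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 142_983 // 4  # visitors per font, roughly as in the test above

    # hypothetical log time-on-page; the fourth font is ~5% "better"
    groups = [rng.normal(4.0, 1.0, n) for _ in range(3)]
    groups.append(rng.normal(4.0 + np.log(1.05), 1.0, n))

    f_stat, p_value = stats.f_oneway(*groups)
    print(f_stat, p_value)  # p is tiny: a ~5% effect would not have been missed

So if the fonts really differed by as much as the NYT result suggests, a test of this size would almost certainly have detected it.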
comment by beoShaffer · 2013-04-03T02:56:25.540Z · LW(p) · GW(p)
I’m doing a research project on attraction and various romantic strategies. I’ve made several short online courses organizing different approaches to seduction, and am looking for men 18 and older who are interested in taking them, as well as a short pre- and post-survey designed to gauge the effectiveness of the techniques taught. If you want to sign up, or know anyone who might be interested, you can use this google form to register. If you have any questions, comment or PM me and I’ll get back to you.
ETA: Since someone mentioned publication, I thought I should clarify. This is specifically a student research project, so, unlike a class project, I am aiming for a peer-reviewed publication; however, the odds are much slimmer than if someone more experienced/academically higher-status were running it. Also, even if it doesn't get formally published I will follow the "Gwern model". That is to say, I'll publish my results online along with as much of my materials as I can (the courses are my own work + publicly available texts, but I only have a limited license for the measures I'm using).
Replies from: wedrifid, Adele_L, ChristianKl↑ comment by wedrifid · 2013-04-04T05:04:00.314Z · LW(p) · GW(p)
I’m doing a research project on attraction and various romantic strategies.
Excellent! An area in which there has been far less formal research than the usefulness of the knowledge calls for. I'll be interested in seeing your results once you publish them.
↑ comment by Adele_L · 2013-04-04T04:18:14.319Z · LW(p) · GW(p)
Are you also going to try to gauge how friendly to women each technique is?
Replies from: beoShaffer↑ comment by beoShaffer · 2013-04-04T04:58:37.010Z · LW(p) · GW(p)
That is not something the study is designed to measure; however, it was a major consideration in designing the curricula.
Replies from: Adele_L↑ comment by ChristianKl · 2013-04-07T00:51:47.689Z · LW(p) · GW(p)
Your sign-up form doesn't say anything about the amount of time/effort that you expect students to invest in the course.
Replies from: beoShaffer↑ comment by beoShaffer · 2013-04-07T18:32:12.354Z · LW(p) · GW(p)
Thanks for catching that. I’ve edited the instructions to be clearer. For reference, here is the added text. The basic lesson format is a short reading (a few pages), an assignment applying the reading to your life, and a short follow-up/written reflection. There is some variability, but the assignments tend to be short (in the vicinity of 5 minutes) and/or designed to be worked into normal social interaction. That said, the normal social interaction part does assume that you are frequently around women that you have some interest in flirting with, asking out, etc. If this is not the case, finding suitable women to interact with could take significantly more time.
comment by Panic_Lobster · 2013-04-02T06:41:27.177Z · LW(p) · GW(p)
Here is a blog which asserts that a global conspiracy of transhumanists controls the media and places subliminal messages in pop music such as the Black Eyed Peas music video "Imma Be" in order to persuade people to join the future hive-mind. It is remarkably lucid and articulate given the hysterical nature of the claim, and even includes a somewhat reasonable treatment of transhumanism.
http://vigilantcitizen.com/musicbusiness/transhumanism-psychological-warfare-and-b-e-p-s-imma-be/
Transhumanism is the name of a movement that claims to support the use of all forms of technology to improve human beings. It is far more than just a bunch of harmless and misguided techie nerds, dreaming of sci-fi movies and making robots. It is a highly organized and well financed movement that is extremely focused on subverting and replacing every aspect of what we are as human beings – including our physical biology, the individuality of our minds and purposes of our lives – and the replacement of all existing religious and spiritual beliefs with a new religion of their own – which is actually not new at all.
EDIT: I see this was previously posted back in 2010, but if you haven't witnessed this blog yet it is worth a look.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-04-02T18:10:28.449Z · LW(p) · GW(p)
Good to know that someone's keeping the ol' Illuminati flame burning. Pope Bob would be proud.
The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on "the brotherhood of man" rather than on religious obedience, education for women, and so on. They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria — which suppressed the group promptly when their security failed.
If CFAR becomes at all successful, conspiracists will start referring to it as an Illuminati group. They will not be entirely wrong.
Replies from: Multiheaded, ChristianKl↑ comment by Multiheaded · 2013-04-03T02:27:27.061Z · LW(p) · GW(p)
The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on "the brotherhood of man" rather than on religious obedience, education for women, and so on.
Might I interest you in the theories of Mencius Moldbug?
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-03T12:10:24.747Z · LW(p) · GW(p)
Please give the poor sap a link to a summary of them; even “A gentle introduction to Unqualified Reservations” made me go tl;dr a third of the way through Part 1.
(What little I know about reactionary ideas comes from this, but I don't know how accurate that is.)
↑ comment by ChristianKl · 2013-04-07T11:22:19.329Z · LW(p) · GW(p)
They modeled themselves after the Freimaurer (Freemasons) and drew a lot of their membership from them. Being a member of the Illuminati required a pledge of obedience. I would be very surprised if CFAR introduced that kind of behavior. You don't need pledges of obedience to advocate secular humanism.
Like the Freemasons, the Illuminati also performed secret rituals.
They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria
That's not really true. Karl Theodor, who banned them, was a proponent of the Enlightenment. He didn't want secret groups that pledged obedience to get political power, and he didn't want his government to be overturned. A lot of French people died in the French Revolution.
comment by NancyLebovitz · 2013-04-02T11:16:26.736Z · LW(p) · GW(p)
Offhand, I haven't seen any LWers write about having chemical addictions, which seems a little surprising considering the number of people here. Have I missed some, or is it too embarrassing to mention, or is it just that people who are attracted to LW are very unlikely to have chemical addictions?
Replies from: wedrifid, drethelin, NancyLebovitz↑ comment by wedrifid · 2013-04-02T11:40:39.576Z · LW(p) · GW(p)
or is it just that people who are attracted to LW are very unlikely to have chemical addictions?
Too busy with the internet addictions?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-02T11:49:20.824Z · LW(p) · GW(p)
Could be, but it seems worth finding out.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-04-02T13:58:36.853Z · LW(p) · GW(p)
Add a poll to your top-level comment. Suggested options: no chemical addiction, had one in the past, have one today.
↑ comment by NancyLebovitz · 2013-04-02T16:23:24.356Z · LW(p) · GW(p)
Have you had a chemical addiction? [pollid:422]
Unfortunately, the poll options don't seem to include ticky-boxes, so I don't see an elegant way to ask about which chemicals.
Replies from: gwern, CronoDAS, OrphanWilde, evand, Desrtopa, Decius, coffeespoons, army1987↑ comment by gwern · 2013-04-02T16:56:32.148Z · LW(p) · GW(p)
As usual, caffeine addiction is so common that it needs to either be explicitly excluded or else its inclusion pointed out so readers know how meaningless the results may be for what they think of as 'chemical addiction'.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-02T17:11:10.442Z · LW(p) · GW(p)
My original thought was to phrase it as "chemical addiction generally considered destructive", but that's problematic, too. What about sugar?
Replies from: elharo↑ comment by elharo · 2013-04-02T20:24:10.194Z · LW(p) · GW(p)
Sugar is incredibly destructive. It is a major, perhaps the major, cause of diabetes, heart disease, obesity and other diseases of civilization.
Replies from: SilasBarta↑ comment by SilasBarta · 2013-04-03T01:57:33.109Z · LW(p) · GW(p)
*wants to change answer now*
↑ comment by CronoDAS · 2013-04-02T20:36:32.802Z · LW(p) · GW(p)
I get withdrawal symptoms if I miss too many antidepressant pills. Does that count?
Replies from: FiftyTwo, NancyLebovitz↑ comment by NancyLebovitz · 2013-04-03T00:13:29.119Z · LW(p) · GW(p)
I wouldn't say so. The definition of addiction is foggy enough that some discussion first would be a good idea if I want to do a more substantial poll.
↑ comment by OrphanWilde · 2013-04-03T17:36:00.016Z · LW(p) · GW(p)
Nicotine, caffeine, simple carbohydrates. (Didn't even realize the last one until I started getting hit with withdrawal - I've never been addicted to sugar before. But since I've cut it out of my diet this last time, which I've done many times before without issue, I've started getting splitting headaches that are rapidly remedied by eating an orange.)
I have alcohol cravings from time to time, but I'm not addicted, since drinking is actually infrequent for me, and not doing so doesn't cause me any issue. That's another recent development which is making me consider clearing out the liquor cabinet. (I did have alcohol cravings once before, after my grandfather died. And my grandmother just died after a few years of progressive decline - she had a form of dementia, possibly Alzheimer's - so it may be depression. I don't -feel- depressed, but I didn't feel depressed last time I was, either, and it was only obvious in retrospect.)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-03T18:16:03.235Z · LW(p) · GW(p)
Thanks for writing that up. I probably should have realized that cravings can vary a lot for individuals, but I hadn't thought about it. I've also never heard of a sugar craving which manifests as headaches-- my impression is that typical sugar cravings manifest as obsessive desire without more obviously physical symptoms.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-04-03T19:25:44.680Z · LW(p) · GW(p)
I've actually never had a desire for sugar. Not even when I was a child - we kept a bowl full of candy and chocolate which I almost never touched. (I preferred, odd as it may sound, things like Brussels sprouts, although I've stopped having any desire for -those- after getting moldy ones once too often)
I crave spicy foods the way most people crave sweet foods. My favorite is spicy pickled asparagus, which is impossible to find. (Spicy pickled okra is easier, and almost as good, though.) That may actually count as an addiction as well, come to think of it. (Apparently spicy foods induce endorphin and dopamine production?)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-06T14:19:50.308Z · LW(p) · GW(p)
So you've got a strong withdrawal reaction to sugar without having a desire for it?
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-07T11:54:09.278Z · LW(p) · GW(p)
Sometimes something similar happens to me with food in general -- if I have eaten very little in the past dozen hours, sometimes I start feeling dizzy, lazy, and sad but not unusually hungry. (I haven't tested whether different food groups have different effects.)
(For example, I woke up at noon this morning and now it's almost 2 p.m., but I don't feel particularly motivated to get out of bed; but I know that if I got up and went to eat something I'd feel much more energetic.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-04-08T03:36:16.654Z · LW(p) · GW(p)
Sometimes something similar happens to me with food in general -- if I have eaten very little in the past dozen hours, sometimes I start feeling dizzy, lazy, and sad but not unusually hungry.
This is starting to remind me of the dihydrogen monoxide joke.
Replies from: Decius↑ comment by evand · 2013-04-03T04:09:49.403Z · LW(p) · GW(p)
Does my caffeine addiction count? If I stop drinking coffee, I anticipate mild withdrawal symptoms. I periodically do this when I find myself drinking lots of coffee; a few days without increases the effectiveness of the caffeine later.
I take prescription adderall, and am decidedly less functional without it. I sometimes skip a day on the weekends. I anticipate no withdrawal symptoms, but would be far less willing to stop taking it than the caffeine.
One evening a number of years ago, I smoked a couple cigarettes at a party. For almost two weeks afterwards, I reacted to seeing or smelling cigarettes by wanting one. I didn't have any more, and those thoughts went away.
Which of those would you count as addictions? I can imagine plenty of obvious cases either way, but the boundary seems awkward to define, and very common in the case of things like caffeine and sucrose. (I answered yes in the poll, because of the caffeine.)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-03T16:41:18.210Z · LW(p) · GW(p)
For what it's worth, what I was interested in was getting deep enough into the obvious life-wreckers that it was urgent to stop using them. Even that's vague, of course. Alcohol has short-term emotional/cognitive effects which cause much more damage faster than cigarettes can.
↑ comment by Desrtopa · 2013-04-08T21:24:03.648Z · LW(p) · GW(p)
It may be worth creating another poll which clarifies whether or not to count socially accepted addictions such as caffeine; some people seem to have answered on the assumption that it doesn't count, while others have answered on the assumption that it does.
↑ comment by coffeespoons · 2013-04-08T20:04:27.273Z · LW(p) · GW(p)
I used to be a smoker, and I went through a phase of drinking too much alcohol when I was younger (this was especially worrying as there are many alcoholics in my family). I managed to give up smoking and my alcohol consumption is much healthier now.
I've also noticed that I haven't seen many people on LW worrying about how to cut down on/give up drinking, smoking or drugs. My impression is that LWers are not all that likely to do things that are self destructive in that way.
↑ comment by A1987dM (army1987) · 2013-04-03T18:22:30.991Z · LW(p) · GW(p)
I answered “No”, but one might quibble about whether I actually qualify as not addicted to caffeine. (I'm operationalizing “addiction” as ‘my performance when I don't use X for a couple days is substantially worse than the baseline level before I started regularly using X in the first place, or when I stop using X for several months’. I am a bit less wakeful after going without caffeine for a couple days than at the level I revert to when I go without it for months, but not terribly much so, and there are all sorts of confounds anyway.)
comment by Michelle_Z · 2013-04-02T06:06:14.541Z · LW(p) · GW(p)
I started a blog about a month or two ago. I use it as a "people might read this so I better do what I'm committing to do!" tool.
Link: Am I There Yet?
Feel free to read/comment.
comment by CAE_Jones · 2013-04-04T23:04:42.505Z · LW(p) · GW(p)
I get the impression that there is something extremely broken in my social skills system (or lack thereof). Something subtle, since professionals have been unable to point this out to me.
I find that my interests rarely overlap with anyone else's enough to sustain (or start, really) conversation. I don't feel motivated to force myself to look at whatever everyone else is talking about in order to participate in a conversation about it.
But it feels like there's something beyond that. I was given the custom title of "the confusenator" on one forum. I was straight-up told I was boring when I interjected in a round of bickering that interrupted a debate (also on an internet forum). I find myself being ignored in many places, even those specifically narrow enough in focus to increase interest overlap. (No, I don't post enough at LW for me to count it at this point in time.)
In real life, I physically can't do the all-important eye contact thing, and I'm too self-conscious/anxious/whatever to use a great deal of volume when speaking. And I can't see lots of things that convey important information about whether someone is available for talking to / nonverbal cues / etc. So real life, I kinda understand.
But none of those apply to the internet, and I still wind up stuck in my own little world there.
Surely I'm missing something?
Replies from: niceguyanon, ChristianKl, NancyLebovitz↑ comment by niceguyanon · 2013-04-05T21:03:03.190Z · LW(p) · GW(p)
Surely I'm missing something?
Perhaps more practice?
↑ comment by ChristianKl · 2013-04-07T16:49:33.571Z · LW(p) · GW(p)
Your writing isn't very clear.
http://lesswrong.com/lw/ou/if_you_demand_magic_magic_wont_help/8o31 is a good example. To me it isn't clear what point you want to make with that post.
I get the impression that you try to list a few facts that you consider true instead of trying to make a point. It might help to edit your writing and remove words that don't contribute to the point you want to make.
When it comes to real life conversations, lack of interest overlap is rarely the main problem. Even if you know nothing about a topic, you can have a conversation where the other person explains something about the topic to you.
The problem is more emotional. If you are anxious, it's hard for a conversation to flow.
*For disclosure, my own writing isn't the clearest either. It's still a lot better than it was in the past.
↑ comment by NancyLebovitz · 2013-04-06T14:27:33.690Z · LW(p) · GW(p)
If you supply a sample or two of your writing in context from other forums, perhaps it will be easier for someone here to see a pattern of what you're doing.
comment by [deleted] · 2013-04-02T22:09:06.323Z · LW(p) · GW(p)
If I stay up ~4 hours past my normal waking period, I get into a flow state and it becomes really easy to read heavy literature. It's like the part of my brain that usually wants to shift attention to something low effort is silenced. I've had a similar, but less intense increase in concentration after sex / masturbation.
Anyone else had that experience?
Replies from: Douglas_Knight, OrphanWilde↑ comment by Douglas_Knight · 2013-04-04T00:54:11.244Z · LW(p) · GW(p)
A very common phenomenon is that people are inhibited from doing work because they don't like the quality of what they produce. If they are a little sleep-deprived or drunk, they can avoid this inhibition. I think you're talking about something else, though.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-04-05T07:10:32.567Z · LW(p) · GW(p)
This seems like a super important insight for creativity. Is there a way to practice caring less about initial quality? I'm thinking of the obvious: brainstorming and stream-of-consciousness writing with as little filter as possible.
Replies from: NancyLebovitz, Qiaochu_Yuan↑ comment by NancyLebovitz · 2013-04-06T14:22:23.741Z · LW(p) · GW(p)
How about meditation? Or the cognitive approach of reminding yourself that the path to excellence requires both mistakes and messing around?
↑ comment by Qiaochu_Yuan · 2013-04-05T07:38:26.788Z · LW(p) · GW(p)
Or the even more obvious of just getting drunk?
Replies from: RomeoStevens↑ comment by RomeoStevens · 2013-04-05T08:04:20.348Z · LW(p) · GW(p)
Yes, I meant cultivating it in a non-impaired state.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-05T08:54:29.996Z · LW(p) · GW(p)
You don't think practice while drunk would transfer to non-drunk? I guess there's the issue of state-dependent memory, but I think a plausible strategy is to start your creative sessions drunk and then gradually decrease the amount of alcohol involved over time.
Replies from: Zaine↑ comment by Zaine · 2013-04-09T06:44:25.119Z · LW(p) · GW(p)
Alcohol is a depressant - it binds to pre-synaptic receptors for the brain's major inhibitory neurotransmitter, gamma aminobutyric acid (GABA). The delta-subunit-containing GABA receptor, to which the ethanol has now bound, allows for influx of negatively charged chloride into the pre-synaptic GABAergic (GABA-transmitting) cell; the cell's charge is lowered, which inhibits further action potentials. Cells that transmit GABA will inhibit other cells; hyperpolarising (making the cell's net charge negative) the inhibitory pre-synaptic GABAergic cell dis-inhibits the post-synaptic cell, which may be excitatory or inhibitory. In the general case of the post-synaptic cell being excitatory, one's brain will become less inhibited - which is not a good thing for cognitive computation.
Due to physics I confess I do not presently comprehend, an entirely uninhibited brain will fire in synchrony. Synchrony of action potential frequency has been observed and mathematically measured to result in decreased cognitive performance: asynchronous brain activity is high-performance brain activity (beta waves). I understand it from a reactivity perspective - in order to respond quickly to a stimulus, one needs to inhibit one's current action and respond to that stimulus; GABAergic neurones are critical to that inhibition.
In sum, while a buzzed person may feel very happy and jumpy, their reduced cognitive ability to inhibit active firing patterns hinders cognitive performance (they are jumpy because motor neurones are being dis-inhibited, too).
With sufficient ethanol saturation, voltage-gated sodium channels become less able to detect changes in the charge of their proximity; non-polar, lipid-like ethanol does not conduct electricity. Impaired ability to respond to environmental changes around the cell fetters neurone firing, leading to a drunkard's depressed, or rather retarded, behaviour.
From a speculative standpoint, perhaps the increased excitability and decreased potential for inhibition conduce to fewer cognitive interruptions along the lines of, "Hey, listen! To experience an instant reward go to Hyrule!" One's thoughts, literally, cannot be stopped enough to have that thought.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-04-09T06:56:09.303Z · LW(p) · GW(p)
While ethanol is neurodepressant overall, its effects can initially mirror those of a stimulant ('biphasic').
Replies from: Zaine↑ comment by Zaine · 2013-04-09T23:36:21.584Z · LW(p) · GW(p)
It's still depressing neurones; the neurones it's depressing are inhibitory neurones, which dis-inhibits excitatory neurones. Your comment prompted me to do a research-check, and it turns out I was completely wrong (don't theorise beyond your nose, eh?). The above comment now reflects reality.
↑ comment by OrphanWilde · 2013-04-03T01:54:49.256Z · LW(p) · GW(p)
~5 hours after I usually go to bed is an incredibly productive period of time for me. So the timing doesn't correspond, but the "part of my brain that usually wants to shift attention" does.
comment by therufs · 2013-04-04T16:36:08.065Z · LW(p) · GW(p)
Is there any particular protocol on reviving previously-recurring threads that are now dormant? I had some things to put in a Group Rationality Diary entry, but there hasn't been a post since early January. I sent cata a message a few days ago; haven't heard back.
Replies from: TimS, Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-05T04:30:51.637Z · LW(p) · GW(p)
Start a new Group Rationality Diary post.
comment by lukeprog · 2013-04-04T02:19:19.407Z · LW(p) · GW(p)
Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"
Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.
In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.
Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were very close. Now it's 2013 and they totally work, they're just not legal yet.
Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
Replies from: gwern↑ comment by gwern · 2013-04-04T03:40:49.131Z · LW(p) · GW(p)
you probably need fundamental mathematical insights, and it's damn hard to predict those.
We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"
What is the productivity of Science? Can we measure an evolution of the production of mathematicians over history? Can we predict the waiting time till the proof of a challenging conjecture such as the P-versus-NP problem? Motivated by these questions, we revisit a suggestion published recently and debated in the "New Scientist" that the historical distribution of time-to-proof's, i.e., of waiting times between formulation of a mathematical conjecture and its proof, can be quantified and gives meaningful insights in the future development of still open conjectures. We find however evidence that the mathematical process of creation is too much non-stationary, with too little data and constraints, to allow for a meaningful conclusion. In particular, the approximate unsteady exponential growth of human population, and arguably that of mathematicians, essentially hides the true distribution. Another issue is the incompleteness of the dataset available. In conclusion we cannot really reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset that we have studied, translating into an average waiting time to proof of 100 years. We hope that the presented methodology, combining the mathematics of recurrent processes, linking proved and still open conjectures, with different empirical constraints, will be useful for other similar investigations probing the productivity associated with mankind growth and creativity.
They took the 144 from the Wikipedia list of conjectures; their population covariate is just an exponential equation they borrowed from somewhere. Regardless, they turn in the result one would basically expect: a constant chance of solving a problem in each time period. (In turn, this and the correlation with population suggests to me that solving conjectures is more parallel than serial: delays are related more to how much mathematical effort is being devoted to each problem.)
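Their headline model is easy to sanity-check with a quick simulation (a sketch that assumes nothing beyond the paper's stated 0.01/year rate; the printed numbers are simulation output, not new data):

    import numpy as np

    rng = np.random.default_rng(0)
    rate = 0.01  # proofs per year, the rate the paper cannot reject
    waits = rng.exponential(scale=1 / rate, size=100_000)

    print(waits.mean())          # ~100 years: the average waiting time to proof
    print(np.median(waits))      # ~69 years (100 * ln 2): half are proved sooner
    print((waits > 100).mean())  # ~0.37: fraction still open after a century

The memorylessness of the exponential is the operative point: under this model, a conjecture that has resisted proof for 50 years is no "closer" to being proved than a fresh one.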
Replies from: lukeprog
comment by FiftyTwo · 2013-04-01T21:56:07.608Z · LW(p) · GW(p)
I've been looking at postgraduate programmes in the philosophy of Artificial intelligence, (primarily but not necessarily in the UK). Does anyone have any advice or suggestions?
Replies from: Jayson_Virissimo, Manfred↑ comment by Jayson_Virissimo · 2013-04-02T20:49:20.628Z · LW(p) · GW(p)
Why so narrow as to exclude good computer science, cognitive science, philosophy of mind, etc. programs from consideration?
Replies from: FiftyTwo↑ comment by FiftyTwo · 2013-04-03T11:09:32.757Z · LW(p) · GW(p)
No particular reason. I am looking at general Philosophy programmes and cognitive science as well.
I ask specifically about AI programmes because it's a very specialised field and it is difficult to distinguish which programmes are worth doing (as certain institutions have started up 'AI' programmes that are little more than pre-existing modules rearranged to make money). I figure there are enough people involved in the field here that they would have relevant expertise.
comment by jamesf · 2013-04-01T16:53:37.475Z · LW(p) · GW(p)
I'm going to Hacker School this summer, and I need a place to stay in NYC between approximately June 1 and August 23. Does anyone want an intrepid 20-year-old rationalist and aspiring hacker splitting the rent with them?
Also, applications for this batch of Hacker School are still open, if you're looking for something great to do this summer.
Replies from: Nisan
comment by [deleted] · 2013-04-10T14:50:12.076Z · LW(p) · GW(p)
After rereading the metaethics sequence, a possible reason occurred to me for why people can enjoy the artistic genre of tragedy. I think there's an argument to be made along the lines of "watching tragedy is about not feeling guilty when you can't predict the future well enough to see what right is."
comment by Kindly · 2013-04-09T00:11:31.129Z · LW(p) · GW(p)
Grading is the bane of my existence. Every time I have to grade homework assignments, I employ various tricks to keep myself working.
My normal approach is to grade 5 homework papers, take a short break, then grade 5 more. It occurred to me just now that this is similar to the "pomodoro" technique so many people here like, except work-based instead of time-based. Is the time-based method better? Should I switch?
Anyway, back to grading 5 more homework papers.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-09T00:24:57.207Z · LW(p) · GW(p)
I think using Pomodoros is more fun because you can do things like record how many assignments you grade per Pomodoro. Now you can keep track of your "high score" and try to break it. Competition is fun and worth leveraging for motivation, even if it's with your past selves.
Replies from: jooyous↑ comment by jooyous · 2013-05-09T18:52:11.167Z · LW(p) · GW(p)
But doesn't that make you inclined to not read as carefully or grade as thoroughly or not leave as many comments? "Oh whatever, that was mostly right. Yay, high score!"
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-09T19:15:04.012Z · LW(p) · GW(p)
If you're at the point where you need to employ tricks to finish the grading at all, then I think this is unfortunately a secondary concern. Once you can consistently finish the grading, then I think you can start worrying about its quality.
Replies from: jooyous↑ comment by jooyous · 2013-05-09T20:03:20.873Z · LW(p) · GW(p)
See, I always worry that the easiest way to get through grading is to just give everyone A's regardless of what they turned in. So I feel like you somehow have to factor in a reward for quality or that's what your system will collapse into?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-05-10T04:43:58.429Z · LW(p) · GW(p)
I would never be tempted to do that, but that comes from a strong desire to tell people when they're wrong which is not necessarily a good thing overall.
comment by Paul Crowley (ciphergoth) · 2013-04-08T19:24:39.112Z · LW(p) · GW(p)
I've known for a while that for every user there's an RSS feed of their comments, but for some reason it's taken me a while to get in the habit of adding interesting people in Google Reader. I'm glad I have.
(Effort in adding them now isn't wasted, since when I move from Google Reader I'll use some sort of tool to move all my subscriptions across at once to whatever I move to)
Replies from: drethelin
comment by Shmi (shminux) · 2013-04-05T22:19:49.154Z · LW(p) · GW(p)
Trying to get a handle on the concept of agency. EY tends to mean something extreme, like "heroic responsibility", where all the non-heroic rest of us are NPCs. Luke's description is slightly less ambitious: an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs. Wikipedia defines it as a "capacity to act", which is not overly useful (do ants have agency?). The LW wiki defines it as the ability to take actions which one's beliefs indicate would lead to the accomplishment of one's goals. This is also rather vague.
Assuming that agency is not all-or-nothing, one should be able to measure the degree/amount/strength of agency. Is this different from, say, intelligence as an "ability to reason, plan, solve problems"? Are there examples of intelligent non-agents or non-intelligent agents? Assuming the two are correlated but not identical, how does one separate them? Is there a way to orthogonalize the two?
Replies from: Qiaochu_Yuan, ArisKatsaris↑ comment by Qiaochu_Yuan · 2013-04-08T08:25:35.412Z · LW(p) · GW(p)
CFAR's notion of agency is roughly "the opposite of sphexishness," a concept named after the behavior of a particular kind of wasp:
Some Sphex wasps drop a paralyzed insect near the opening of the nest. Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. During the inspection, an experimenter can move the prey a few inches away from the opening. When the Sphex emerges from the nest ready to drag in the prey, it finds the prey missing. The Sphex quickly locates the moved prey, but now its behavioral "program" has been reset. After dragging the prey back to the opening of the nest, once again the Sphex is compelled to inspect the nest, so the prey is again dropped and left outside during another stereotypical inspection of the nest. This iteration can be repeated again and again, with the Sphex never seeming to notice what is going on, never able to escape from its programmed sequence of behaviors. Dennett's argument quotes an account of Sphex behavior from Dean Wooldridge's Machinery of the Brain (1963). Douglas Hofstadter and Daniel Dennett have used this mechanistic behavior as an example of how seemingly thoughtful behavior can actually be quite mindless, the opposite of free will (or, as Hofstadter described it, sphexishness).
So ants don't have agency. The difference between intelligence and agency seems to me to vanish for sufficiently intelligent minds but is relevant to humans. Like ArisKatsaris I think that for humans, intelligence is the ability to solve problems but agency is the ability to prioritize which problems to solve. It seems to me to be much easier to test for intelligence than for agency; I thought for a little bit a while ago about how to test my own agency (and in particular to see how it varies with time of day, hunger level, etc.) but didn't come up with any good ideas.
One sign of sphexishness in humans is chasing after lost purposes.
↑ comment by ArisKatsaris · 2013-04-06T09:55:13.795Z · LW(p) · GW(p)
How about "agency" as the extent to which people are moved to action by deliberate thought and by preferences they're aware of -- as opposed to by habit, instinct, social expectations or various unconscious drives.
That's pretty much similar to Luke's definition I guess.
Is this different from, say, intelligence as an "ability to reason, plan, solve problems"?
It's different in that it also chooses which problems to seek to solve, in accordance with one's own self-aware preferences.
Are there examples of intelligent non-agents or non-intelligent agents?
Lots of intelligent non-agents -- a pocket calculator for example.
comment by DanielLC · 2013-04-04T20:57:09.492Z · LW(p) · GW(p)
In HP:MoR, Harry mentioned that breaking conservation of energy allows for faster-than-light signalling. Can someone explain how?
Replies from: shminux, ThrustVectoring↑ comment by Shmi (shminux) · 2013-04-04T23:58:17.681Z · LW(p) · GW(p)
Do you mind pointing out exactly where he says that?
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-04-05T00:28:16.288Z · LW(p) · GW(p)
Chapter 2: "You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! "
Eliezer discussed this point in a reddit thread about a month ago - but I'm not qualified to judge how good his physics on this point are.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-05T05:08:20.561Z · LW(p) · GW(p)
It's not very good. Energy changing in time does not violate unitarity, so you cannot destroy pieces of the wave function, and so you don't get FTL in regular quantum mechanics. You do get FTL this way in general relativity, but this is outside Harry's knowledge (because it's outside Eliezer's). To actually kill a part of the wave function, you need this subsystem to have complex energy. I cannot comment on the quantum computing part of it; it's not my area.
Edit: I'll have to look closer at his partial cancellation argument and see if it can work.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-05T19:59:16.696Z · LW(p) · GW(p)
After some more thinking, I'm still having trouble following this logic:
Let's say I have a quantum search operator on a quantum computer and it turns out that 0000 are not the bits I'm looking for. Within that branch, 0000 splits again into an up-branch that doesn't change into a cat, and a down-branch that changes into a cat, rotates a bit faster or slower, and then changes back out of a cat. Now we have two amplitudes in opposite phase so the whole quantum branch has deliberately decided to cancel itself out.
Specifically, I am not sure in what sense he uses the word "branches". If this is an MWI concept, then different branches do not cancel, since they do not interact. Maybe it means different additive terms in the wave function of some subsystem? But those correspond to different orthogonal eigenstates and so they don't cancel out, either. Maybe it is meant in terms of constructive/destructive interference, only with the destructive part in one place not being compensated by the constructive part elsewhere? This interpretation at least makes sense if you associate branches with propagation paths, but I still have no clue how to use time-dependent energy states to terminate rather than displace (in a perfectly sub-light way) interference maxima and minima.
Maybe someone else can speculate more successfully.
↑ comment by ThrustVectoring · 2013-04-04T21:59:02.677Z · LW(p) · GW(p)
As far as I know, it's because breaking conservation of energy means that relativity is borked.
Let me explain. Conservation of energy is a logical consequence of the fact that experiments performed in different places or at different speeds turn out the same way. In other words, "how fast you are going doesn't matter" -> "conservation of energy". Equivalently, "no conservation of energy" -> "how fast you are going can change things".
We believe relativity is true in large part because of how speed and position are invariant in physics (iirc, this is the insight used to generate the theory of relativity in the first place). Once the reasons to believe in relativity go out the window, so does its baggage - specifically, the injunction against FTL.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-04T23:54:27.783Z · LW(p) · GW(p)
As a trained (though non-practicing) physicist, I would like to point out that your comment varies between wrong and meaningless. Conservation of energy is a consequence of the time symmetry, i.e. the time variable not being explicitly present in the Lagrangian or Hamiltonian description of the system under consideration (see Noether's theorem). There are perfectly good relativistic Lagrangians where energy is not conserved (usually because the system is not closed).
Conservation of energy is a logical consequence of the fact that experiments performed in different places or at different speeds turn out the same way. In other words
That describes conservation of momentum, if anything.
Also note that "global" energy is most emphatically not conserved in our expanding universe, and not even well-defined. All that is defined and (locally) conserved is the stress-energy tensor field.
There is also no injunction against FTL in either special or general relativity, though for different reasons. In SR FTL leads to time travel, while in GR it leads to the initial value problem being not well-posed, a rather technical point.
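For readers who want the Noether point spelled out, the standard textbook derivation for a closed one-dimensional system takes two lines (generic material, nothing specific to this thread). If the Lagrangian $L(q, \dot{q})$ has no explicit time dependence, then along a solution of the Euler–Lagrange equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q}$ we have

$$\frac{dL}{dt} = \frac{\partial L}{\partial q}\,\dot{q} + \frac{\partial L}{\partial \dot{q}}\,\ddot{q} = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right)\dot{q} + \frac{\partial L}{\partial \dot{q}}\,\ddot{q} = \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\,\dot{q}\right),$$

so the energy $E = \frac{\partial L}{\partial \dot{q}}\,\dot{q} - L$ is conserved. If $L$ does depend explicitly on $t$ (an open system, as above), the same computation gives $dE/dt = -\partial L/\partial t \neq 0$: energy conservation fails exactly when time symmetry does.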
comment by jooyous · 2013-04-02T22:59:28.246Z · LW(p) · GW(p)
I had some students complaining about test-taking anxiety! One guy came in and solved the last midterm problem 5 minutes after he had turned in the exam, so I think this is a real thing. One girl said that calling it something that's not "exam" made her perform better. However, it seems like none of them had ever really confronted the problem? They just sort of take tests and go "Oh yeah, I should have gotten that. I'm bad at taking tests."
Have any of you guys experienced this? If so, have you tried to tackle it head-on? It seems like there should be a handy tool-box of things to do when experiencing anxiety during a test. I personally don't have this problem, so I have no idea. (I get a little nervous and take a minute to breathe and I'm fine. And avoid drinking coffee on exam days!)
Replies from: Qiaochu_Yuan, RomeoStevens, OrphanWilde, latanius↑ comment by Qiaochu_Yuan · 2013-04-03T05:44:02.577Z · LW(p) · GW(p)
so I think this is a real thing
Is this meant to imply that you didn't previously think this is a real thing or that you hadn't heard of it until now? It's apparently a well-studied phenomenon, I think I know people who experience it, and it's completely consistent with my current model of human psychology.
Replies from: jooyous↑ comment by jooyous · 2013-04-03T06:36:40.549Z · LW(p) · GW(p)
Nono, I believed it. I just didn't want people commenting "your students are just complaining to weasel out a better grade from you," because I had some people telling me that students sometimes try to befriend TA's and suck up to them. Though I guess it's not that relevant that these particular students had it. I was just surprised at how bad it was. It's almost like as soon as the test is over, you can think again? I sorta figured people would seek treatment for something that serious.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-03T06:52:07.328Z · LW(p) · GW(p)
I think there's a double typical mind fallacy here. You were surprised because your mind doesn't work their way, and it doesn't occur to them to do anything about it because they just think that's what tests feel like. Also, an anxiety disorder is tantamount to a mild mental illness, and people still have a lot of hangups about seeking mental health services in general.
Replies from: jooyous↑ comment by jooyous · 2013-04-03T07:03:04.055Z · LW(p) · GW(p)
Yeah, I think you're right, because when people say they get nervous before tests, I think, "Oh sure, I get nervous too!" But not to the point where I spend half the time sitting there, unable to write anything down.
I'm a bit concerned that a lot of the treatment options on that page are drugs. Is it really safe to drug people before their brain is supposed to do mathy things? Is it cheating? Do any of the people you know have any handy CBT-style rituals that help calm them down? I think from now on I'm also going to persuade professors to call exams "quizzes" or something.
Replies from: wedrifid, Qiaochu_Yuan↑ comment by wedrifid · 2013-04-03T08:02:48.505Z · LW(p) · GW(p)
I'm a bit concerned that a lot of the treatment options on that page are drugs. Is it really safe to drug people before their brain is supposed to do mathy things?
Probably not more unsafe than drugging them at other times. As for performance... most anxiolytic substances impair mental function somewhat. It's what they are notorious for (e.g. Valium and ethanol). Still, the effects aren't strong enough that crippling anxiety wouldn't be worse. On the other hand, a few things like phenibut and aniracetam could lead to somewhat increased performance even aside from their anxiolytic effects.
Is it cheating?
No. There isn't (usually) a rule against it so it isn't cheating. (Sometimes there are laws against prescription substances, but that is different. That makes you a criminal not a cheater!)
Replies from: jooyous↑ comment by jooyous · 2013-04-04T19:52:43.127Z · LW(p) · GW(p)
I guess I understand using drugs for other mental disorders (the persistent ones that interfere with more areas of life) but it weirds me out that we create this bizarre social construct called "tests" that give people crippling anxiety ... and then we solve the problem with drugs. Instead of developing alternative models for testing people. (Although there are probably correlations and people with test anxiety might get it for other things as well?)
↑ comment by Qiaochu_Yuan · 2013-04-03T07:13:51.748Z · LW(p) · GW(p)
I got nothin'. Have you tried making an anonymous survey and surveying your Facebook friends? That's what I would try.
↑ comment by RomeoStevens · 2013-04-05T07:13:27.453Z · LW(p) · GW(p)
I think this has to do with the difference between work and curiosity mode. In curiosity mode solving problems is much easier, but stress reliably kills it. Once the stress is gone, the answers come pouring out.
↑ comment by OrphanWilde · 2013-04-04T20:23:47.948Z · LW(p) · GW(p)
It's extremely common with certain learning disabilities, like dyslexia and to a lesser extent dyscalculia. For many people, it's the time limit, rather than the seriousness of the task itself, and eliminating the time limit to take the test permits them to finish it without issue (frequently within the time limit!).
↑ comment by latanius · 2013-04-03T14:43:24.649Z · LW(p) · GW(p)
In the class I TA for, the students can go to the professor's office hours after the midterm / final, and if they can solve the problem there, they still get... half of the points? I wonder how that one affects test-taking performance.
Also, this whole thing seems to be annoyingly resistant to Bayesian updates... "Every time I'm anxious I perform bad, and now I'm worried about being too worried for this exam", and, since performing bad is a very valid prediction in this state of mind, worry is there to stay.
Maybe if the tests are called "quizzes" the students end up in the other stable state of "not being worried"?
Replies from: jooyous↑ comment by jooyous · 2013-04-04T20:06:51.003Z · LW(p) · GW(p)
I feel like it's the students' responsibility to calibrate their own personal correct amount of worry that it takes to make them study, regardless of what the thing is called? (Like if I say "This quiz is worth 50% of your grade," they should be able to tell that it's not really a quiz.) But at the same time, it sounds like some brains have this worry horizon where once they start worrying, then it's all they can do. So we need to somehow calibrate the scariness of exams so that only a very small percentage of people fall off the worry horizon, because people who fail from not studying can just start studying. The stable state of not being worried is a good place! ^_^
This kind of reminds me of all of the (non-technical) articles about game addiction and how it's in the designers' best interest to keep everyone hooked but still high-functioning enough that we won't outlaw WoW the way we outlaw harmful, addictive narcotics.
Brains are such a mess. ^_^
comment by iconreforged · 2013-04-04T15:20:24.254Z · LW(p) · GW(p)
I watched an awesome movie, and now I'm coasting in far mode. I really like being in far mode, but is this useful? What if I don't want to lose my awesome-movie high?
Are there some things that far mode is especially good for? Should I be managing finances in this state? Reading a textbook? Is far mode instrumentally valuable in any way? Or should I make the unfortunate transition back to near mode?
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-05T04:32:31.457Z · LW(p) · GW(p)
Based on the description at the LW wiki, it sounds like far mode is a good time to evaluate how risk-averse you've been and whether there are risky opportunities you should be taking that you previously weren't taking because of risk-aversion.
comment by [deleted] · 2013-04-04T05:20:24.769Z · LW(p) · GW(p)
How much do we know about reasoning about subjective concepts? Bayes' law tells you how probable you should consider any given black-and-white no-room-for-interpretation statement, but it doesn't tell you when you should come up with a new subjective concept, nor (I think) what to do once you've got one.
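For reference, the rule in question, in its standard form:

P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)}

It scores given hypotheses against evidence, but is silent on where the hypotheses, or the concepts they are built from, come from, which is the gap being asked about.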
Replies from: lucidian, Qiaochu_Yuan↑ comment by lucidian · 2013-04-12T17:33:51.341Z · LW(p) · GW(p)
You may be interested in the literature on "concept learning", a topic in computational cognitive science. Researchers in this field have sought to formalize the notion of a concept, and to develop methods for learning these concepts from data. (The concepts learned will depend on which specific data the agent encounters, and so this captures some of the subjectivity you are looking for.)
In this literature, concepts are usually treated as probability distributions over objects in the world. If you google "concept learning" you should find some stuff.
↑ comment by Qiaochu_Yuan · 2013-04-05T04:33:53.718Z · LW(p) · GW(p)
"Subjective" seems uselessly broad. Can you give a more specific example?
Replies from: None↑ comment by [deleted] · 2013-04-09T04:53:01.271Z · LW(p) · GW(p)
Well, I guess that by "subjective concepts", I mean every concept that doesn't have a formal mathematical definition. So stuff like "simple", "similar", "beautiful", "alive", "dead", "feline", and so on through the entire dictionary.
The only theory-of-subjective-concepts I've come across is the example of bleggs and rubes. Suppose that, among a class of objects, five binary variables are strongly correlated with each other; then it is useful to postulate a latent variable stating which of two types the object is. This latent variable is the "subjective concept" in this case.
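For concreteness, here is a minimal sketch of that latent-variable model in Python. All the numbers are made up for illustration: a hidden binary type, five observable binary features that are each more likely under one type than the other, and Bayes' rule recovering a posterior over the type from the observations.

# Hypothetical blegg/rube model: the feature probabilities below are
# invented for illustration, not taken from any source.
P_BLEGG = 0.5  # prior probability that an object is a blegg

# P(feature = 1 | type) for five features, e.g. blue, egg-shaped,
# furred, flexible, glows-in-the-dark:
P_FEATURE = {
    "blegg": [0.95, 0.90, 0.85, 0.90, 0.80],
    "rube":  [0.05, 0.10, 0.15, 0.10, 0.20],
}

def posterior_blegg(features):
    """P(blegg | features), where features is a list of five 0/1 values."""
    def likelihood(kind):
        p = 1.0
        for observed, q in zip(features, P_FEATURE[kind]):
            p *= q if observed else (1.0 - q)
        return p
    numerator = likelihood("blegg") * P_BLEGG
    denominator = numerator + likelihood("rube") * (1.0 - P_BLEGG)
    return numerator / denominator

print(posterior_blegg([1, 1, 1, 1, 1]))  # ~0.99997: clearly a blegg
print(posterior_blegg([1, 1, 0, 1, 0]))  # ~0.985: still strongly blegg
print(posterior_blegg([0, 1, 0, 0, 1]))  # ~0.036: mostly rube

The correlations among the five features are induced entirely by the latent type, which is exactly the sense in which postulating the latent variable "explains" them.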
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-04-09T05:15:11.515Z · LW(p) · GW(p)
Think of subjective concepts as heuristics that help you describe models of the world. Evaluate those models based on their predictions. (Grounding everything in terms of predictions is a great way to keep your thinking focused. Otherwise it's too easy to go on and on about beauty or whatever without ever saying anything that actually controls your anticipations.)
Have you read the rest of 37 Ways That Words Can Be Wrong?
comment by NancyLebovitz · 2013-04-11T07:03:15.636Z · LW(p) · GW(p)
Hazards of botched IT: cost overruns are nothing compared to what can go wrong when you actually use the software.
Software which can answer "is this obviously stupid?" would be a step towards FAI.
comment by [deleted] · 2013-04-09T12:05:20.807Z · LW(p) · GW(p)
Toby Ord gave a Google Tech Talk on efficient charity and QALYs this March.
comment by John_Maxwell (John_Maxwell_IV) · 2013-04-07T23:20:21.119Z · LW(p) · GW(p)
Does anyone here have thoughts on the x-risk implications of Bitcoin? Rebalancing (periodically selling some of an asset after it rises and buying after it falls, to restore a target portfolio weight) is a way to make money from high-volatility investments like Bitcoin; the more volatility, the more money rebalancing captures. If lots of people included Bitcoin in their portfolios and started rebalancing them this way, then the price of Bitcoin would also become less volatile as a side effect. (It might even start growing in price at whatever the market rate of return for stocks/bonds/etc. is, though I'd have to think about that.)
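To illustrate the mechanism, here is a toy Monte Carlo sketch in Python. The step sizes are made up, and the model ignores fees, spreads, taxes, and any real drift in the price, so it demonstrates volatility harvesting in the abstract rather than saying anything about Bitcoin specifically.

import random

def simulate(periods=1000, seed=0):
    """One run: a volatile asset vs. a 50/50 asset/cash portfolio
    rebalanced every period."""
    rng = random.Random(seed)
    asset = 1.0       # buy-and-hold value of the volatile asset
    portfolio = 1.0   # value of the rebalanced portfolio
    for _ in range(periods):
        # Up 25% or down 20% with equal probability: the asset's
        # *median* long-run growth is exactly zero.
        step = 1.25 if rng.random() < 0.5 else 0.80
        asset *= step
        # Half rides the asset, half sits in cash, rebalanced back
        # to 50/50 at the start of each period.
        portfolio *= 0.5 * step + 0.5
    return asset, portfolio

trials = [simulate(seed=s) for s in range(200)]
median_asset = sorted(a for a, _ in trials)[100]
median_portfolio = sorted(p for _, p in trials)[100]
print("median buy-and-hold:", round(median_asset, 3))
print("median rebalanced:  ", round(median_portfolio, 3))

In this toy model the asset's median value hovers around 1, while the rebalanced mix compounds at roughly 0.6% per period (0.5·ln 1.125 + 0.5·ln 0.9 ≈ 0.0062), precisely because rebalancing sells after up-moves and buys after down-moves. More rebalancers would mean more such counter-cyclical trading, which is the volatility-dampening side effect.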
So given that I could spread this meme on how you can get paid to decrease Bitcoin's volatility, should I do it?
Replies from: None↑ comment by [deleted] · 2013-04-07T23:51:36.808Z · LW(p) · GW(p)
So given that I could spread this meme on how you can get paid to decrease Bitcoin's volatility, should I do it?
Why wouldn't you?
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-04-08T02:16:04.168Z · LW(p) · GW(p)
It would take time and effort, and having Bitcoin become a legitimate alternative currency might have unforeseen negative consequences, e.g.
Bitcoins weren't just created to sell a little weed on silk road.. they are used for child pornography, human trafficking, murder for hire, domestic and intl terrorism, hard drugs like heroin, and illegal arms sales... If you are for bitcoins, then you must also be OK with all of the above things...have fun supporting your pedophiles and murderers...
comment by CoffeeStain · 2013-04-06T21:18:50.360Z · LW(p) · GW(p)
So I'm running through the Quantum Mechanics sequence, and am about 2/3 of the way through. Wanted to check in here to ask a few questions, and see if there aren't some hidden gotchas from people knowledgeable about the subject who have also read the sequence.
My biggest hangup so far has been understanding when it is that different quantum configurations sum, versus when they don't. All of the experiments from the earlier posts (such as those in Distinct Configurations) seem to indicate that configurations sum when they are in the "same" time and place. Eliezer indicates at some point that "same" is smeared in some sense, perhaps because all particles are smeared in space and time; therefore if two "particles" in different worlds don't arrive at the same place at exactly the same time, the smearing will cause the tail ends of their amplitude distributions to still interact, resulting in a less perfect collision with somewhat partial results compared to what would have happened in the perfect experiment.
The hangup becomes an issue, barring any of my own misunderstanding (which is of course likely), when he starts talking about macroscopic other worlds. He goes so far as to say that when a quantum event is "observed," what really happens is that different versions of the experimenter become decohered with the various potential states of the particle.
Several things don't seem quite right here. First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states. What happens to the conservation of probability volume due to Liouville's Theorem described in Classical Configuration Spaces? Or maybe I misunderstand here, and the probability volumes actually do become sharply concentrated in two positions. But then why is it not possible for probability volumes to become usually or always sharply concentrated in one position, giving us, for all practical purposes, a single world?
Backing up a bit, though. What keeps different worlds from interacting? Eliezer implies in Decoherence that one important reason decohered particles are decohered is a separation in space. What I fail to understand, if there is no specified other axis, is why the claim stands that different but similar worlds (different only along that axis) fail to interact! According to his interpretation (or my interpretation of his interpretation) of quantum entanglement, your observation of a polarized particle at one end of a light-year limits the versions of your friend (who observed the entangled particle) that you are capable of meeting when you compare notes in the middle. But why couldn't you just as easily meet any other version of your friend? What is the invisible axis, besides space and time, that decoheres worlds, if we meet at the same place and time no matter what we observe?
More importantly, what keeps neurons which are at the same space and time from interacting with their other-world counterparts, as if they were as real as their this-world self?
Unless I'm completely off here, couldn't there be many fewer possible worlds than Eliezer suggests? In extremely controlled experiments we observe decoherence at rather macroscopic levels, but isn't "controlled" precisely the point? In most normal quantum interactions, isn't there always going to be interference between worlds? And what if that interference, by the nature of the fundamental laws, just so happens to have some property (maybe a sort of race condition) that usually causes microscopic other worlds to merge? On average, as possible worlds become macroscopic enough, still-real interactions between the worlds would become increasingly likely, and they would no longer be "other worlds" but one actually-interacting same world, to the point where no two differently configured sets of neurons could ever observe differently.
I should stop here before I carry on any early-introduced fallacy to increasingly absurd conclusions. Would be very interested in how to resolve my confusion here.
Replies from: None↑ comment by [deleted] · 2013-04-06T22:25:47.872Z · LW(p) · GW(p)
First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states.
I assume you mean this section:
Your world does not split into exactly two new subprocesses on the exact occasion when you see "ABSORBED" or "TRANSMITTED" on the LCD screen of a photon sensor. We are constantly being superposed and decohered, all the time, sometimes along continuous dimensions—though brains are digital and involve whole neurons firing, and fire/not-fire would be an extremely decoherent state even of a single neuron... There would seem to be room for something unexpected to account for the Born statistics—a better understanding of the anthropic weight of observers, or a better understanding of the brain's superpositions—without new fundamentals.
He's not exactly saying that brains only work digitally -- they don't; neuron activation isn't only about electrical impulses -- he's just talking about one particular process that happens in the brain. At least, as far as I can tell.
Replies from: CoffeeStain↑ comment by CoffeeStain · 2013-04-06T22:56:45.912Z · LW(p) · GW(p)
They certainly don't work only digitally, but the suggestion seems to be that for most brain states at the level of "belief" it is required that at least some neurons have definite states, if only in the sense of "neuron A is firing at some definite analog value."
comment by Viliam_Bur · 2013-04-06T10:25:49.155Z · LW(p) · GW(p)
I don't know anything about quantum computing, so please tell me if this idea makes sense... if you imagine many-worlds, can it help you develop better intuitions about quantum algorithms? Anyone tried that? Any results?
I assume an analogy: In mathematics, proper imagination can get you some results faster, even if you could get the same results by computation. For example, it is easier to imagine a "sphere" than a "set of points with distance at most D from a given center C". You can see that the intersection of a sphere and a plane is a circle faster than you can solve the corresponding equations. Even if computationally the sphere is the same as the given set of points, imagination runs much faster on the visual model.
Analogously, the Copenhagen interpretation and the many-worlds interpretation should give the same results. Yet is it possible that one of them is more imagination-friendly? Would it be possible to immediately "see" results in one model that have to be mathematically calculated in the other? Could one of these models then be a comparative advantage for a quantum programmer?
To avoid misunderstanding: I don't suggest using imagination instead of computation. I only suggest using imagination to guess a result, and then using a proper mathematical proof to confirm it. Just as "the intersection of a sphere and a plane is either nothing, or a point, or a circle" can be translated to equations and verified analytically, but is much easier to remember this way.
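For concreteness, the sphere example in equations (a standard fact, stated with a unit normal for the plane):

S = \{ x : \lVert x - c \rVert = r \}, \qquad P = \{ x : n \cdot x = d \}, \qquad \rho = \lvert n \cdot c - d \rvert

S \cap P = \begin{cases} \emptyset & \text{if } \rho > r \\ \text{a single point} & \text{if } \rho = r \\ \text{a circle of radius } \sqrt{r^2 - \rho^2} & \text{if } \rho < r \end{cases}

The picture gives the trichotomy instantly; the algebra confirms it.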
Replies from: RomeoStevens, Douglas_Knight↑ comment by RomeoStevens · 2013-04-08T07:06:30.305Z · LW(p) · GW(p)
Are you familiar with the Quantum Bomb Tester?
↑ comment by Douglas_Knight · 2013-04-06T19:03:45.392Z · LW(p) · GW(p)
Are you aware that David Deutsch is (1) the loudest proponent of MWI and (2) the inventor* of the quantum computer? Moreover, he claimed that MWI led him there. He also predicted that quantum computers would convince everyone else of MWI. So far, that claim doesn't look very plausible.
I am skeptical of the possibility of many worlds contributing to imagination. I prefer the phrase "no collapse" to the phrase "many worlds" because there are a lot of straw men associated with the latter phrase. But phrasing it as a negative shows that it's really a subset of Copenhagen QM, and thus shouldn't require more or different imagination. You might say that the first incarnation of many worlds is Schrödinger's Cat, which everyone talks about, regardless of interpretation.
There is some discussion of the fruitfulness here; in particular Scott Aaronson says "I think Many-Worlds does a better job than its competitors...at emphasizing the aspect of QM—the exponentiality of Hilbert space—that most deserves emphasizing."
* Manin, Feynman, and maybe other people could claim that title, too, but I think they were all independent. Moreover, I think Deutsch was the first person to produce a quantum algorithm that he could prove was better than a classical algorithm; he exploited QM rather than saying it was hard. It is this exploitation that he attributes to MWI.
Deutsch discusses his predecessors, but he didn't know about Manin. I think Manin's contribution is all in the three-paragraph appendix (p. 25).
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-04-06T20:11:13.385Z · LW(p) · GW(p)
I didn't know about David Deutsch, thanks for the information!
it's really a subset of Copenhagen QM, and thus shouldn't require more or different imagination.
Then perhaps the only advantage is that you don't have to waste your time worrying "what if my proposed solution is already so big that the wavefunction will collapse before it computes the result". But to get this advantage, you don't really have to believe in MWI. It's enough to profess belief in collapse, but ignore the consequences of that belief while designing algorithms, which is something humans excel at.
comment by somervta · 2013-04-17T03:23:46.873Z · LW(p) · GW(p)
Recently, for a philosophy course on (roughly) the implications of AI for society, I wrote an essay on whether we should take fears about AI risks seriously, and I had the thought that it might be worth posting to LW Discussion. Is there/would there be interest in such a thing? TBH, there's not a great deal of original content, but I'd still be interested in the comments of anyone who is interested.
comment by NancyLebovitz · 2013-04-14T03:30:22.697Z · LW(p) · GW(p)
LW Women: Submissions on Misogyny was moved to main, but the article doesn't show up as New, Promoted, or Recent.
comment by Sabiola (bbleeker) · 2013-04-12T15:15:03.982Z · LW(p) · GW(p)
I'm not sure if this is the right place for this, but I've just read a scary article that claims that "The financial system as a whole functions as a hostile AI", and I was wondering what LW thinks of that.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-12T16:39:28.701Z · LW(p) · GW(p)
There have been various threads in the past about whether corporations can be considered AIs. The general consensus seems to be "not in the sense of 'AI' that this community is concerned with."
comment by Oscar_Cunningham · 2013-04-09T12:34:11.359Z · LW(p) · GW(p)
In Anki, LaTeX is rendered too large. Does anyone know an effective fix?
EDIT: I found one. In Anki, LaTeX is rendered to an image and from then on treated as one. Adding
img { zoom: 0.6; }
to a new line of the "Styling" section of the "Card Type" for whatever Note you're using rescales all the images in that card type. So provided you don't use LaTeX and images on the same Note, this fixes all your problems.
comment by FiftyTwo · 2013-04-03T11:06:39.963Z · LW(p) · GW(p)
What can you usefully do with underutilised processing power? (E.g. spare computer and server time).
So far the best I can come up with is running Folding@home. But it seems like there should be a way to sell server space etc.
Replies from: gwern↑ comment by gwern · 2013-04-03T14:15:58.415Z · LW(p) · GW(p)
So far the best I can come up with is running Folding@home.
Remember the power consumption entailed: http://www.gwern.net/Charity%20is%20not%20about%20helping
Replies from: FiftyTwo
comment by OrphanWilde · 2013-04-02T21:16:08.528Z · LW(p) · GW(p)
Site suggestion:
When somebody attempts to post a comment with the words "why", "comment", and "downvoted", it should open a prompt directing them to an FAQ explaining most likely reasons for their downvote, and also warning them prior to actually submitting the comment that it's likely to be unproductive and just lead to more downvotes.
(Personally I think this site needs to have a little more patience with people asking these questions, as they almost always come from new users who are still getting accustomed to the community norms, but that's just me.)
Replies from: itaibn0, Viliam_Bur, None, drethelin↑ comment by itaibn0 · 2013-04-02T23:12:44.579Z · LW(p) · GW(p)
This suggestion conflicts with the advice of the Welcome Thread, which says:
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.)
Replies from: OrphanWilde, Kawoomba
↑ comment by OrphanWilde · 2013-04-03T01:52:40.395Z · LW(p) · GW(p)
And yet I persistently see requests for explanations downvoted. The advice of the welcome thread does not actually correspond to downvoting behavior.
Replies from: itaibn0, Kaj_Sotala↑ comment by Kaj_Sotala · 2013-04-03T08:02:46.848Z · LW(p) · GW(p)
Do the requests remain downvoted? In my experience, they may be downvoted for a while, but then get voted back up.
Replies from: TimS↑ comment by TimS · 2013-04-03T13:54:04.007Z · LW(p) · GW(p)
It depends a bit on why the original post was downvoted. Asking for an explanation when the problem is obvious, or when the topic is forbidden, tends not to get the comment back to neutral.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-04-03T17:28:50.957Z · LW(p) · GW(p)
Obvious to regular users != obvious to new user
Replies from: TimS↑ comment by TimS · 2013-04-03T18:06:42.202Z · LW(p) · GW(p)
Monkeymind knew why he was being downvoted.
Edit: But, I agree with your point that many community norms that will get one downvoted are not accessible to new members.
↑ comment by Viliam_Bur · 2013-04-03T08:10:11.391Z · LW(p) · GW(p)
In my opinion, downvoting is necessary for forum moderation, and people don't downvote enough. It is rather easy to get a lot of karma by simply writing a lot, because the average karma of a comment (this is just my estimate) is around 1. I would prefer if the average was closer to 0.
Asking about downvoting is ok, per se (assuming that the person does not do it with every single damned comment which fell to -1 temporarily). But sometimes it seems to contain a connotation that "you should not downvote my comments unless you explain why". Which I completely disagree with and consider it actively harmful, so I automatically downvote any comment that feels like this. (Yes, there is a chance that I misunderstood the author's intentions. Well, I am not omniscient, and I don't want to get paralyzed by my lack of omniscience.)
↑ comment by drethelin · 2013-04-02T22:50:05.401Z · LW(p) · GW(p)
I just downvote people complaining about downvotes
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-03T00:33:03.243Z · LW(p) · GW(p)
Do you make a distinction between complaining and asking?
Replies from: drethelin↑ comment by drethelin · 2013-04-03T20:49:29.808Z · LW(p) · GW(p)
A little? Most "asking about downvotes" comments are functionally indistinguishable from complaints, even though they're phrased as questions, in the sense of "I don't understand why I'm getting downvoted (implying it doesn't make sense and you are wrong to do so)". If someone posts an in-depth post and it gets downvoted and they ask which specific parts of their giant post were bad, I give that more leeway.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-06T14:20:44.831Z · LW(p) · GW(p)
If I see a question about downvotes that's below 0, I'm going to upvote it.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-04-06T15:23:14.229Z · LW(p) · GW(p)
I don't think I need to ask why that got downvoted.