Open Thread, May 5 - 11, 2014
post by Tenoke · 2014-05-05T10:35:45.563Z · LW · GW · Legacy · 286 comments
You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.
286 comments
Comments sorted by top scores.
comment by Brillyant · 2014-05-07T14:21:50.855Z · LW(p) · GW(p)
Is Less Wrong dying?
Some observations...
- The top level posts are generally well below the quality of early material, including the sequences, in my estimation.
- 'Main' posts are rarely even vaguely interesting to me anymore.
- 'Top Contributors' karma values seem very low compared to what I remember them being ~9-12 months ago.
- 'Discussion' posts are littered with Meetup reminders.
About all I look at on LW anymore is the Open Discussion Thread, Rationality Quotes and the link to Slate Star Codex. I noticed CFAR and MIRI's websites gave me the impression they were getting more traction and perhaps making some money.
Has LW run its course?
Replies from: NancyLebovitz, blacktrance, shminux, Viliam_Bur, moridinamael, Kawoomba, Kaj_Sotala, ChristianKl↑ comment by NancyLebovitz · 2014-05-07T15:28:14.663Z · LW(p) · GW(p)
I think it's a little early to predict the end, but there's less I'm interested in here, and I'm having trouble thinking of things to write about, though I can still find worthwhile links for open threads.
Is LW being hit by some sort of social problem, or have we simply run out of things to say?
↑ comment by blacktrance · 2014-05-07T15:40:37.535Z · LW(p) · GW(p)
I'd add "Metacontrarianism is on the rise" to your list. Many of the top posts now are contrary to at least the spirit of the sequences, if not the letter, or so it feels to me.
↑ comment by Shmi (shminux) · 2014-05-07T17:10:56.825Z · LW(p) · GW(p)
Has LW run its course?
It seems to be a common sentiment, actually. I mentioned this a few times on #lesswrong and the regulars there appear to agree. Whether this is some sort of confirmation bias, I am not sure. Fortunately, there is a way to measure it:
Look at the recent Main entries: http://lesswrong.com/recentposts/
Then look at the entries from about 1 year ago: http://lesswrong.com/recentposts/?count=250&after=t3_gnv
Count interesting articles from each period and compare the numbers.
↑ comment by Viliam_Bur · 2014-05-07T18:13:15.436Z · LW(p) · GW(p)
Maybe it's because the important things have started, and moved to real life, outside of the LW website. There are people writing and publishing papers on Friendly AI, there are people researching and teaching rationality exercises; there are meetups in many countries. -- Although, if this is true, I would expect more reports here about what happens in real life. (Remember the fundamental rule of bureaucracy: If it ain't documented, it didn't happen.)
Anyway, this is only a guess; it would be interesting to really know what's happening...
↑ comment by moridinamael · 2014-05-07T16:28:43.714Z · LW(p) · GW(p)
I would say LW is evolving.
The Sequences are and always were the finger that points at the objective, not the objective unto itself. The project of LW is "refining the art of human rationality." But we don't have the definition of human rationality written on stone tablets, needing only diligence in application to obtain good results. The project of LW is thus a dynamic process of discovery, experimentation, incorporating new data, sometimes backtracking when we update on evidence that isn't as solid as we had thought.
You correctly observe that the style of participation has changed over time. This is probably mostly the result of certain specific high volume contributors moving on to other things. It could also be the result of an aggregated shift in understanding as to what kinds of results can actually be produced by discussing rationality in a vacuum, which may perhaps be why these contributors have moved on. Or maybe they just said all they felt they needed to say, I don't know. I have a 101.1 F fever right now.
↑ comment by Kawoomba · 2014-05-07T16:36:11.620Z · LW(p) · GW(p)
I blame Facebook. Many of the discussions that are had there were of the type that used to invigorate these here boards.
Replies from: Brillyant↑ comment by Brillyant · 2014-05-07T18:42:49.824Z · LW(p) · GW(p)
Hm. I think you have a much higher level of sophistication in your FB friend group. I get a lot of Tea Party quotes and pictures of people's dinners.
Replies from: Kawoomba↑ comment by Kawoomba · 2014-05-07T18:52:00.484Z · LW(p) · GW(p)
It's mostly that Eliezer has taken to disseminating his current work via open Facebook discussions. I can see how that choice makes sense, from his position, but it's still sad for the identity-paranoid and the nostalgic remnants still roaming these forgotten halls. Did I have a purpose once? It's been so long.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-08T00:07:37.791Z · LW(p) · GW(p)
Also, it's much harder (impossible?) to find older discussions on FB.
Replies from: mare-of-night, NancyLebovitz↑ comment by mare-of-night · 2014-05-08T13:50:29.334Z · LW(p) · GW(p)
And perhaps harder to grow, at least through the usual means - the Facebook discussions wouldn't show up on Google searches (or at least not highly ranked, I think), and it's a less convenient format to link someone to for an explanation of a concept.
↑ comment by NancyLebovitz · 2014-05-15T16:12:48.173Z · LW(p) · GW(p)
It turns out that while there may be no good way to use Facebook to find old discussions on Facebook, I used Google and found an old Facebook post.
↑ comment by Kaj_Sotala · 2014-05-13T15:22:20.684Z · LW(p) · GW(p)
I remember people saying things like "Less Wrong is dying" for a long time, from 2010 at least. Which doesn't invalidate the claim that LW's much more quiet than it used to be, of course, but it does challenge the claim that this would be a recent development.
Replies from: Brillyant↑ comment by ChristianKl · 2014-05-07T16:43:41.305Z · LW(p) · GW(p)
The LW census gets more participants every year. If LW were dying, I would expect the opposite.
Replies from: Brillyant↑ comment by Brillyant · 2014-05-07T18:18:09.981Z · LW(p) · GW(p)
I'm not sure total participants is a good metric to use in making that determination. It depends on people's level of participation and engagement, I think.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-08T00:32:19.216Z · LW(p) · GW(p)
When it comes to engagement we do have a bunch of in person meetups that we didn't have a few years ago.
Replies from: Nornagest↑ comment by Nornagest · 2014-05-08T00:48:37.384Z · LW(p) · GW(p)
There do seem to be more meetups globally, but I'd say the SF Bay Area meetup scene -- where MIRI is based and many prominent contributors live or have lived -- is well off its peak. This is perhaps an unreasonable time to be saying so, since the South Bay and East Bay meetups have just gone through major shakeups and haven't yet stabilized; but even ignoring that, we're well down from two or three years ago in terms of engagement with high-karma users, in terms of number of local meetup groups, and probably in terms of people as well.
comment by jackk · 2014-05-08T04:25:54.148Z · LW(p) · GW(p)
As per issue #389, I've just pushed a change to meetups. All future meetup posts will be created in /r/meetups to un-clutter /r/discussion a little bit.
Replies from: Tenoke↑ comment by Tenoke · 2014-05-09T15:21:16.574Z · LW(p) · GW(p)
Hmm, I just noticed that the 'Nearest Meetup' feature is mostly removed (you can still see the field when you refresh, before everything has loaded), so you can't see any notification anywhere for local meetups happening soon unless you are specifically checking /meetups or r/meetups.
I understand why Luke and co wanted this change asap (people have been complaining about the clutter), but I suspect that this change will have a big overall impact on turnout at LW meetups. I'm fairly certain that a lot of non-regulars decide to go to a specific meetup because they are randomly reminded of it in the sidebar or in discussion, and not because they actively check.
Anyway, is there any chance you know why the 'Nearest meetup' area was removed (no mention of the removal in the issues)? I am not sure what the benefit is of having Upcoming Meetups over Nearest Meetups, but the latter at least provides a reminder for people of posted local meetups. Alternatively, is there anything else planned to serve as a reminder?
PS: I would've published this as a comment on the issue itself, but that didn't look very appropriate.
Replies from: philh↑ comment by philh · 2014-05-09T22:33:36.810Z · LW(p) · GW(p)
I currently see 'nearest meetups'.
I've noticed that when I'm at work (but still logged in), it shows me 'upcoming meetups' instead. My first guess, that I've made no attempt to confirm or disconfirm, is that it tries to determine your location from your IP address. If it succeeds it shows you 'nearest meetups', and if it fails it shows you 'upcoming meetups'.
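philh's guess can be sketched as a small decision rule (purely hypothetical -- the actual LW code may work quite differently, and all names below are my own):

```python
# Toy sketch of the guessed sidebar behaviour: geolocate the visitor's IP,
# show "Nearest Meetups" on success, fall back to "Upcoming Meetups" on failure.

def pick_sidebar_meetups(visitor_location, meetups, distance_fn, n=3):
    """meetups: list of dicts with 'location' and 'start' keys."""
    if visitor_location is not None:
        # Geolocation succeeded: order by distance from the visitor.
        key = lambda m: distance_fn(visitor_location, m["location"])
        title = "Nearest Meetups"
    else:
        # Geolocation failed (e.g. behind a corporate proxy): order by date.
        key = lambda m: m["start"]
        title = "Upcoming Meetups"
    return title, sorted(meetups, key=key)[:n]
```

On this model, a workplace connection whose IP can't be geolocated would see 'Upcoming Meetups', matching what philh and Tenoke observed.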
I feel like there should definitely be a link to 'meetups' next to 'main' and 'discussion'. It's so easy to miss things in the sidebar.
I, too, expect this to reduce meetup turnout.
Replies from: Tenoke↑ comment by Tenoke · 2014-05-10T10:49:19.730Z · LW(p) · GW(p)
I've noticed that when I'm at work (but still logged in), it shows me 'upcoming meetups' instead.
Looks like it is the same for me - I posted the above comment from work, however, I see 'Nearest Meetups' now that I am home. Your theory sounds reasonable.
Replies from: jackk↑ comment by jackk · 2014-05-12T00:34:54.790Z · LW(p) · GW(p)
philh is correct, and nothing I pushed should've changed the sidebar behaviour.
For those that are worried about meetup attendance being affected:
- The sidebar should not be changed,
- The list of upcoming meetups is still where it was at http://lesswrong.com/meetups/
How many people discover meetups through /r/discussion as opposed to the sidebar and /meetups? Perhaps I should poll this:
Before this change, how did you discover LW meetups? If none apply, please write in.
[pollid:698]
comment by lukeprog · 2014-05-11T01:48:09.376Z · LW(p) · GW(p)
Below is an edited version of an email I prepared for someone about what CS researchers can do to improve our AGI outcomes in expectation. It was substantive enough I figured I might as well paste it somewhere online, too.
I'm currently building a list of what will eventually be short proposals for several hundred PhD theses / long papers that I think would help clarify our situation with respect to getting good outcomes from AGI, if I could persuade good researchers to research and write them. A couple dozen of these are in computer science broadly: the others are in economics, history, etc. I'll write out a few of the proposals as 3-5 page project summaries, and the rest I'll just leave as two-sentence descriptions until somebody promising contacts me and tells me they want to do it and want more detail. I think of these as "superintelligence strategy" research projects, similar to the kind of work FHI typically does on AGI. Most of these projects wouldn't only be interesting to people interested in superintelligence, e.g. a study building on these results on technological forecasting would be interesting to lots of people, not just those who want to use the results to gain a bit of insight into superintelligence.
Then there's also the question of "How do we design a high assurance AGI which would pass a rigorous certification process ala the one used for autopilot software and other safety-critical software systems?"
There, too, MIRI has lots of ideas for plausibly useful work that could be done today, but of course it's hard to predict this far in advance which particular lines of research will pay off. But then, this is almost always the case for long-time-horizon theoretical research, and e.g. applying HoTT to program verification sure seems more likely to help our chances of positive AGI outcomes than, say, research on genetic algorithms for machine vision.
I'll be fairly inclusive in listing these open problems. Many of the problems below aren't necessarily typical CS work, but they could plausibly be published in some normal CS venues, e.g. surveys of CS people are sometimes published in CS journals or conferences, even if they aren't really "CS research" in the usual sense.
First up are 'superintelligence strategy' aka 'clarify our situation w.r.t. getting good AGI outcomes eventually' projects:
- More and larger expert surveys on AGI timelines, takeoff speed, and likely social impacts, besides the one reported in the first chapter of Superintelligence (which isn't yet published).
- Delphi study of those questions including AI/ML people, AGI people, and AI safety+security people.
- How big is the field of AI currently? How many quality-adjusted researcher years, funding, and available computing resources per year? How many during each previous decade of AI? More here.
- What is the current state of AI safety engineering? What can and can't we do? Summary and comparison of approaches in formal verification in AI, hybrid systems control, etc. Right now there are a bunch of different communities doing AI safety and they barely talk to each other, so it's hard for any one person to figure out what's going on in general. Also would be nice to know which techniques are being used where, especially in proprietary and military systems for which there aren't any papers.
- Surveys of AI subfield experts on “What percentage of the way to human-level performance in your subfield have we come in the last n years”? More here.
- Improved analysis of the concept of general intelligence beyond “efficient cross-domain optimization.” Maybe just more specific: canonical environments, etc. Also see work on formal measures of general intelligence by Legg, by Hernandez-Orallo, etc.
- Continue Katja’s project on past algorithmic improvement. Filter not for ease of data collection but for real-world importance of the algorithm. Interesting to computer scientists in general, but also potentially relevant to arguments about AI takeoff dynamics.
- What software projects does the government tend to monitor? Do they ever “take over” (nationalize) software projects? What kinds of software projects do they invade and destroy?
- Are there examples of narrow AI “takeoff”? Eurisko is maybe the closest thing I can think of, but the details aren't clear because Lenat's descriptions were ambiguous and we don't have the source code.
- Some AI approaches are more and less transparent to human understanding/inspection. How well does each AI approach's transparency to human inspection scale? More here.
- Can computational complexity theory place any bounds on AI takeoff? Daniel Dewey is looking into this; it currently doesn't look promising but maybe somebody else would find something a bit informative.
- To get an AGI to respect the values of multiple humans & groups, we may need significant progress in computational social choice, e.g. fair division theory and voting theory. More here.
Next, high assurance AGI projects that might be publishable in some CS conferences/journals. One way to categorize this stuff is into "bottom-up research" and "top-down research."
Bottom-up research aimed at high assurance AGI simply builds on current AI safety/security approaches, pushing them along to be more powerful, more broadly applicable, more computationally tractable, easier to use, etc. This work isn't necessarily focused on AGI specifically but is plausibly pushing in a more safe-AGI-helpful direction than most AI research is. Examples:
- Extend current techniques in formal verification (overview for AI applications, also see e.g. higher-order program verification and incremental reverification), program synthesis (overview of hybrid system applications: p1, p2, p3, p4), simplex architectures, etc.
- More work following up on Weld & Etzioni's "call to arms" for "Asimovian agents": for a 2014 overview see here.
- More work on how to do principled formal validation (not just verification), see e.g. Rushby on epistemic doubt and especially Cimatti's group on formal validation.
- Apply HoTT to program verification.
- More work on clean-slate hardware/software systems that are built from the ground up for high assurance at every stage, e.g. SAFE and HACMS.
- More verified software libraries and compilers, ala the Verified Software Toolchain.
- More tools to make high assurance methods easier to apply: e.g. better interfaces and training for SPIN.
- More work on making more types of AI systems more transparent, so we understand why they work and what bounds they will operate within, so we can have stronger safety+security guarantees for particular approaches than we have now. Much of this work would probably be in computational learning theory, and in dimensionality reduction techniques. Also see here.
To be continued...
Replies from: lukeprog↑ comment by lukeprog · 2014-05-11T01:48:35.287Z · LW(p) · GW(p)
Continued...
Top-down research aimed at high assurance AGI tries to envision what we'll need a high assurance AGI to do, and starts playing with toy models to see if they can help us build up insights into the general problem, even if we don't know what an actual AGI implementation will look like. Past examples of top-down research of this sort in computer science more generally include:
- Lampson's original paper on the confinement problem (covert channels), which used abstract models to describe a problem that wasn't detected in the wild for ~2 decades after he wrote the paper. Nevertheless, this gave computer security researchers a head start on the problem, and the covert channel communication field is now pretty big and active. Details here.
- Shor's quantum algorithm for integer factorization (1994) showed, several decades before we're likely to get a large-scale quantum computer, that (e.g.) the NSA could be capturing and storing strongly encrypted communications and could later break them with a QC. So if you want to guarantee your current communications will remain private in the future, you'll want to work on post-quantum cryptography and use it.
- Hutter's AIXI is the first fully-specified model of "universal" intelligence. It's incomputable, but there are computable variants, and indeed tractable variants that can play arcade games successfully. The nice thing about AIXI is that you can use it to concretely illustrate certain AGI safety problems we don't yet know how to solve even with infinite computing power, which means we must be very confused indeed. Not all AGI safety problems will be solved by first finding an incomputable solution, but that is one common way to make progress. I say more about this in a forthcoming paper with Bill Hibbard to be published in CACM.
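For readers who haven't seen it, Hutter's expectimax definition of AIXI is usually written along the following lines (my transcription of the standard formulation, from memory; consult Hutter's original for the precise conditions on the universal machine $U$):

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\max_{a_{t+1}} \sum_{o_{t+1} r_{t+1}} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here $q$ ranges over programs for the universal monotone machine $U$, $\ell(q)$ is the program's length, and $m$ is the horizon; the inner sum is the Solomonoff-style prior weight on environments consistent with the history. The incomputability the comment mentions comes directly from that sum over all programs.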
But now, here are some top-down research problems MIRI thinks might pay off later for AGI safety outcomes, some of which are within or on the borders of computer science:
- Naturalized induction: "Build an algorithm for producing accurate generalizations and predictions from data sets, that treats itself, its data inputs, and its hypothesis outputs as reducible to its physical posits. More broadly, design a workable reasoning method that allows the reasoner to treat itself as fully embedded in the world it's reasoning about." (Agents built with the agent-environment framework are effectively Cartesian dualists, which has safety implications.)
- Better AI cooperation: How can we get powerful agents to cooperate with each other where feasible? One line of research on this is called "program equilibrium": in a setup where agents can read each other's source code, they can recognize each other for cooperation more often than would be the case in a standard Prisoner's Dilemma. However, these approaches were brittle, and agents couldn't recognize each other for cooperation if e.g. a variable name was different between them. We got around that problem via provability logic.
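The brittleness point can be made concrete with a toy sketch (my own illustration, not MIRI's formalism): a "clique bot" that cooperates only with a byte-identical copy of its own source.

```python
# Toy "program equilibrium" player: each player is a function from the
# opponent's source string to a move ("C"ooperate or "D"efect), and this
# one cooperates only with an exact syntactic copy of itself.

def make_clique_bot(own_source: str):
    def play(opponent_source: str) -> str:
        return "C" if opponent_source == own_source else "D"
    return play

src_a = "return 'C' if opponent == self else 'D'"
src_b = "return 'C' if opp == self else 'D'"  # same logic, one variable renamed

bot_a = make_clique_bot(src_a)
print(bot_a(src_a))  # C -- recognizes an exact copy of itself
print(bot_a(src_b))  # D -- a mere variable rename breaks cooperation
```

The provability-logic approach replaces exact syntactic matching with a check of what can be *proved* about the opponent's behaviour, which is what makes cooperation robust to such superficial differences.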
- Tiling agents: Like Bolander and others, we study self-reflection in computational agents, though for us it's because we're thinking ahead to the point when we've got AGIs who want to improve their own abilities and we want to make sure they retain their original purposes as they rewrite their own code. We've built some toy models for this, and they run into nicely crisp Gödelian difficulties, and then we throw a bunch of math at those difficulties and in some cases they kind of go away, and we hope this'll lead to insight into the general challenge of self-reflective agents that don't change their goals on self-modification round #412. See also the procrastination paradox and Fallenstein's monster.
- Ontological crises in AI value systems.
These are just a few examples: there are lots more. We aren't happy yet with our descriptions of any of these problems, and we're working with various people to explain ourselves better, and make it easier for people to understand what we're talking about and why we're working on these problems and not others. But nevertheless some people seem to grok what we're doing, e.g. I pointed Nik Weaver to the tiling agents paper stuff and despite not having past familiarity with MIRI he just ran with it.
comment by Kaj_Sotala · 2014-05-06T05:28:48.409Z · LW(p) · GW(p)
Here's a comment that I posted in a discussion on Eliezer's FB wall a few days back but didn't receive much of a response there, maybe it'll prompt more discussion here:
--
So this reminds me, I've been thinking for a while that VNM utility might be a hopelessly flawed framework for thinking about human value, but I've had difficulties putting this intuition in words. I'm also pretty unfamiliar with the existing literature around VNM utility, so maybe there is already a standard answer to the problem that I've been thinking about. If so, I'd appreciate a pointer to it. But the theory described in the linked paper seems (based on a quick skim) like it's roughly in the same direction as my thoughts, so maybe there's something to them.
Here's my stab at trying to describe what I've been thinking: VNM utility implicitly assumes an agent with "self-contained" preferences, and which is trying to maximize the satisfaction of those preferences. By self-contained, I mean that they are not a function of the environment, though they can and do take inputs from the environment. So an agent could certainly have a preference that made him e.g. want to acquire more money if he had less than $5000, and which made him indifferent to money if he had more than that. But this preference would be conceptualized as something internal to the agent, and essentially unchanging.
That doesn't seem to be how human preferences actually work. For example, suppose that John Doe is currently indifferent between whether to study in college A or college B, so he flips a coin to choose. Unbeknownst to him, if he goes to college A he'll end up doing things together with guy A until they fall in love and get monogamously married; if he goes to college B he'll end up doing things with gal B until they fall in love and get monogamously married. It doesn't seem sensible to ask which choice better satisfies his romantic preferences as they are at the time of the coin flip. Rather, the preference for either person develops as a result of their shared life-histories, and both are equally good in terms of intrinsic preference towards someone (though of course one of them could be better or worse at helping John achieve some other set of preferences).
More generally, rather than having stable goal-oriented preferences, it feels like we acquire different goals as a result of being in different environments: these goals may persist for an extended time, or be entirely transient and vanish as soon as we've left the environment.
As an another example, my preference for "what do I want to do with my life" feels like it has changed at least three times today alone: I started the morning with a fiction-writing inspiration that had carried over from the previous day, so I wished that I could spend my life being a fiction writer; then I read some e-mails on a mailing list devoted to educational games and was reminded of how neat such a career might be; and now this post made me think of how interesting and valuable all the FAI philosophy stuff is, and right now I feel like I'd want to just do that. I don't think that I have any stable preference with regard to this question: rather, I could be happy in any career path as long as there were enough influences in my environment that continued to push me towards that career.
It's as Brian Tomasik wrote at http://reducing-suffering.blogspot.fi/2010/04/salience-and-motivation.html :
There are a few basic life activities (eating, sleeping, etc.) that cannot be ignored and have to be maintained to some degree in order to function. Beyond these, however, it's remarkable how much variation is possible in what people care about and spend their time thinking about. Merely reflecting upon my own life, I can see how vastly the kinds of things I find interesting and important have changed. Some topics that used to matter so much to me are now essentially irrelevant except as whimsical amusements, while others that I had never even considered are now my top priorities.
The scary thing is just how easily and imperceptibly these sorts of shifts can happen. I've been amazed to observe how much small, seemingly trivial cues build up to have an enormous impact on the direction of one's concerns. The types of conversations I overhear, blog entries and papers and emails I read, people I interact with, and visual cues I see in my environment tend basically to determine what I think about during the day and, over the long run, what I spend my time and efforts doing. One can maintain a stated claim that "X is what I find overridingly important," but as a practical matter, it's nearly impossible to avoid the subtle influences of minor day-to-day cues that can distract from such ideals.
If this is the case, then it feels like trying to maximize preference satisfaction is an incoherent idea in the first place. If I'm put in environment A, I will have one set of goals; if I'm put in environment B, I will have another set of goals. There might not be any way of constructing a coherent utility function so that we could compare the utility that we obtain from being put in environment A versus environment B, since our goals and preferences can be completely path- and environment-dependent. Extrapolated meta-preferences don't seem to solve this either, because there seems to be no reason to assume that they'd be any more stable or self-contained.
I don't know what we could use in place of VNM utility, though. At the least, it feels like the alternative formalism should include the agent's environment/life history in determining its preferences.
Replies from: Qiaochu_Yuan, Metus, jimmy, Squark↑ comment by Qiaochu_Yuan · 2014-05-07T08:53:05.852Z · LW(p) · GW(p)
I also have lots of objections to using VNM utility to model human preferences. (A comment on your example: if you conceive of an agent as accruing value and making decisions over time, to meaningfully apply the VNM framework you need to think of their preferences as being over world-histories, not over world-states, and of their actions as being plans for the rest of time rather than point actions.) I might write a post about this if there's enough interest.
Replies from: jimmy, Kaj_Sotala↑ comment by Kaj_Sotala · 2014-05-07T09:16:34.060Z · LW(p) · GW(p)
I would be very interested in that.
↑ comment by Metus · 2014-05-06T10:18:46.523Z · LW(p) · GW(p)
Robin Hanson writes about rank linear utility. This formalism asserts that we value options by their rank in a list of options available at any one time, making it impossible to construct a coherent classical utility function.
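A toy version of the idea (my own construction, not Hanson's exact formalism) shows why no menu-independent utility function can reproduce rank-based values:

```python
# Rank-dependent valuation: an option's value depends only on its rank
# within the currently available choice set, via fixed rank weights.

RANK_WEIGHTS = [1.0, 0.5, 0.25]  # value of the 1st-, 2nd-, 3rd-ranked option

def rank_values(choice_set):
    # Each option is a (name, quality) pair; rank by quality, best first.
    ranked = sorted(choice_set, key=lambda o: o[1], reverse=True)
    return {name: RANK_WEIGHTS[i] for i, (name, _) in enumerate(ranked)}

a, b, c = ("A", 10), ("B", 5), ("C", 7)
print(rank_values([a, b]))     # B is ranked 2nd, valued 0.5
print(rank_values([a, b, c]))  # C outranks B, so B drops to 0.25
```

B itself is unchanged between the two menus, yet its value differs, so no single context-free function u(option) can represent these valuations -- which is the sense in which the formalism blocks a coherent classical utility function.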
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-05-06T15:28:29.400Z · LW(p) · GW(p)
Yeah, that was my first link in the comment. :-) Still good that you summarized it, though, since not everyone's going to click on the link.
Replies from: Metus↑ comment by Metus · 2014-05-06T18:21:13.472Z · LW(p) · GW(p)
Oops, I frankly did not see the link. The one time I thought I could contribute ...
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-05-07T09:15:48.170Z · LW(p) · GW(p)
Well, like I said, it was probably a good thing to post and briefly summarize anyway. If you missed the link, others probably did too.
↑ comment by jimmy · 2014-05-07T14:46:03.285Z · LW(p) · GW(p)
I don't think of things like "what I want to do with my life" as terminal preferences - just instrumental preferences that depend on the niche you find yourself in. Terminal stuff is more likely to be simple/human universal stuff (think Maslow's hierarchy of needs)
I think you'll probably find Kevin Simler's essays on personality interesting, and he does a good job explaining and exploring this idea.
http://www.meltingasphalt.com/personality-the-body-in-society/ http://www.meltingasphalt.com/personality-an-ecosystems-perspective/ http://www.meltingasphalt.com/personality-beyond-social-and-beyond-human/
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-05-13T15:17:07.912Z · LW(p) · GW(p)
Thanks, those are good essays. :-)
↑ comment by Squark · 2014-05-09T18:28:38.937Z · LW(p) · GW(p)
What I think is happening is that we're allowed to think of humans as having VNM utility functions (see also my discussion with Stuart Armstrong), but the utility function is not constant over time (since we're not introspective recursively modifying AIs that can keep their utility functions stable).
comment by Richard_Kennaway · 2014-05-09T18:38:14.461Z · LW(p) · GW(p)
I recently saw an advertisement which was such a concentrated piece of antirationality I had to share it here. Imagine a poster showing a man's head and shoulders gazing inspiredly past the viewer into the distance, rendered in posterised red, white, and black with a sort of socialist realism flavour. The words: "No Odds Too Long. No Dream Too Great. The Believer."
If that was all, it would just be a piece of inspirational nonsense. But what was it advertising?
Ladbrokes. A UK chain of betting shops.
Replies from: pragmatist↑ comment by pragmatist · 2014-05-16T14:27:23.216Z · LW(p) · GW(p)
That is a hilariously apposite name for a chain of betting shops.
Replies from: gwern
comment by lukeprog · 2014-05-08T01:08:03.357Z · LW(p) · GW(p)
I can't figure out who runs the Less Wrong Twitter. Does anyone know?
comment by Omid · 2014-05-05T12:08:27.108Z · LW(p) · GW(p)
The United States green card lottery is one of the best lotteries in the world. The payoff is huge (green cards would probably sell for six figures if they were on the market), the cost of entry is minimal ($0 and 30 minutes) and the odds of winning are low, but not astronomically low. If you meet the eligibility criteria and are even a little interested in moving to America, you should enter the lottery this October.
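As a back-of-envelope check of the "best lottery" claim (every number below is my own assumption for illustration, not an official DV-lottery figure; real odds vary by year and region):

```python
# Rough expected value of one diversity-visa lottery entry. The comment only
# says "six figures" payoff and "low but not astronomically low" odds, so
# these specific numbers are assumptions.
payoff = 150_000        # assumed value of a green card, USD
p_win = 1 / 75          # assumed selection odds
time_cost_hours = 0.5   # the "30 minutes" of entry effort
hourly_value = 50       # assumed value of the entrant's time, USD/hour

ev = p_win * payoff - time_cost_hours * hourly_value
print(f"expected value per entry: ${ev:,.0f}")  # expected value per entry: $1,975
```

Even with much more pessimistic odds, the expected value stays comfortably positive, which is what makes this lottery unusual.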
Replies from: Kaj_Sotala, niceguyanon, ChristianKl, army1987, MarkL↑ comment by Kaj_Sotala · 2014-05-05T14:55:08.937Z · LW(p) · GW(p)
Having entered the lottery may make it harder to receive nonimmigrant visas in the future, however.
Replies from: Vulture↑ comment by Vulture · 2014-05-06T16:53:40.812Z · LW(p) · GW(p)
Since this cost and the payoff of the original lottery are in like units, could someone compute whether it's still worth it to enter?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-06T19:06:09.814Z · LW(p) · GW(p)
The cost is a completely qualitative claim, so, no, no one can do this computation.
Replies from: Vulture↑ comment by Vulture · 2014-05-06T19:08:58.159Z · LW(p) · GW(p)
Oh, whoops, misread as "immigrant visa" rather than "nonimmigrant visa". Disregard.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-06T20:11:45.674Z · LW(p) · GW(p)
Well, it's true that they aren't quite the same units, but I was ignoring that. The cost is that the State Department pays attention and applies a penalty in the highly nontransparent visa process. These are qualitative claims. In principle they could be measured by outside observers. In fact, my best measurement is zero: they neither pay attention to nor penalize nonimmigrant visa applications.
Replies from: Vulture↑ comment by Vulture · 2014-05-06T20:23:07.691Z · LW(p) · GW(p)
Ah, okay. That is good to know, and could help people's calculi. Thanks! I only retracted because if my initial understanding had been right, then the tradeoff could be calculated really unambiguously, whereas now it's less clear that looking up the numbers and doing straight comparisons would be as much use.
↑ comment by niceguyanon · 2014-05-05T16:08:03.215Z · LW(p) · GW(p)
The payoff is huge ..., the cost of entry is minimal
This reminds me of another pretty decent lottery that some U.S. residents can take advantage of. Many major cities, including NYC, have affordable housing programs in brand new buildings. The cost to apply is $0, and the payoff is paying 20%-25% of the market rate for housing in that area. No, it's not for poor people; there are other programs for that. The income requirements vary, but in general they are set to qualify the working residents of the city (maybe $50k-$95k).
Some of the most desirable and stunning locations in the city, where rents are $4k for 600 sq ft, can go for $700. Just Google the city you live in to see the specific requirements.
↑ comment by ChristianKl · 2014-05-05T12:36:33.502Z · LW(p) · GW(p)
I don't think you can resell green cards so their open market price should be irrelevant.
Replies from: knb↑ comment by A1987dM (army1987) · 2014-05-06T18:43:40.108Z · LW(p) · GW(p)
The payoff is huge
Moving somewhere that uses Fahrenheit degrees and sixteen-ounce pints doesn't sound that great to me...
↑ comment by MarkL · 2014-05-05T12:16:46.583Z · LW(p) · GW(p)
Anything remotely like this for EU countries?
Replies from: Metus↑ comment by Metus · 2014-05-05T18:34:18.762Z · LW(p) · GW(p)
It is not a lottery like the green card lottery, but if you are of European descent there is a chance you can apply for citizenship. Look out for Italian, Spanish, Hungarian and Irish ancestors in particular.
Edit: This scheme is ridiculously complicated in the EU and I know of no coherent source. If anyone is specifically interested in having the right of abode to work in the EU, contact me with a hint to family history and we can work something out. In the interest of the community I urge you to do this publicly.
comment by Shmi (shminux) · 2014-05-06T22:34:04.528Z · LW(p) · GW(p)
Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet Irving Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either raising the profile of the issue is not associated with EY/MIRI, or he is considered too low status to speak of publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.
Replies from: Kaj_Sotala, Qiaochu_Yuan↑ comment by Kaj_Sotala · 2014-05-07T09:31:39.140Z · LW(p) · GW(p)
See also this old post where Robin Hanson basically predicted that this would happen.
The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.
But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important to the "powers that be" hoping to establish this new area, this standard person will bring more prestige and resources to this new area.
If the standard guy wins the first few such contests, his advantage can quickly snowball into an overwhelming one. People will prefer to cite his publications as they will be in more prestigious journals, even if they were not quite as creative. Reporters will prefer to quote him, students will prefer to study under him, firms will prefer to hire him as a consultant, and journals will prefer to publish him, as he will be affiliated with more prestigious institutions. And of course the contrarian may have a worse reputation as a "team player."
↑ comment by Qiaochu_Yuan · 2014-05-07T08:58:44.679Z · LW(p) · GW(p)
I think this is fine. Convincing people that this is a Real Thing and then specifically making them aware of Eliezer and MIRI should be done separately anyway. Doing the second thing too soon may make the first thing harder, while doing the second thing late makes the first thing easier (because then AI x-risk can be put in a mental category other than "that weird thing that those weird people care about").
comment by Anders_H · 2014-05-05T16:05:06.741Z · LW(p) · GW(p)
There is a lot of interest in prediction markets in the Less Wrong community. However, the prediction markets that we have are currently only available in meatspace, they have very low volume, and the rules are not ideal (you cannot leave positions by selling your shares, and only the column with the final outcome contributes to your score).
I was wondering if there would be interest in a prediction market linked to the Less Wrong account? The idea is that we use essentially the same structure as Intrade / Ipredict. We use play money - this can either be Karma or a new "currency" where everyone is assigned the same starting value. If we use a currency other than Karma, your balance would be publicly linked to your account, as an indicator of your predictive skills.
Perhaps participants would have to reach a specified level of Karma before they are allowed to participate, to avoid users setting up puppet accounts to transfer points to their actual accounts.
I think such a prediction market would act as a tax on bullshit, it would help aggregate information, it would help us identify the best predictors in the community, and it would be a lot of fun.
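To make the proposal concrete, here is a toy sketch of how such a play-money market could price contracts, using Hanson's logarithmic market scoring rule (LMSR) as one illustrative mechanism. Everything here, the class, the method names, and the liquidity parameter, is hypothetical, not an existing Less Wrong feature.

```python
import math

class LMSRMarket:
    """Toy two-outcome prediction market using Hanson's logarithmic
    market scoring rule. All names here are illustrative."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = deeper market, slower-moving prices
        self.shares = [0.0, 0.0]    # outstanding YES / NO shares

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        """Current implied probability of the given outcome."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        """Buy `amount` shares of an outcome; returns the play-money cost."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket()
cost = market.buy(0, 50)            # a trader buys 50 YES shares
print(round(market.price(0), 3))    # implied probability has risen above 0.5
```

A nice property of this rule is that traders can always leave a position by selling shares back to the automated market maker, which is exactly the feature the meatspace markets above lack.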
Replies from: gwern, Stefan_Schubert, ChristianKl, Lumifer↑ comment by gwern · 2014-05-06T02:56:41.751Z · LW(p) · GW(p)
Why would LWers use such a prediction market more than PredictionBook?
Replies from: Jayson_Virissimo, Anders_H↑ comment by Jayson_Virissimo · 2014-05-06T03:20:38.501Z · LW(p) · GW(p)
Because karma?
Replies from: gwern↑ comment by gwern · 2014-05-06T03:33:13.056Z · LW(p) · GW(p)
I don't think karma matters as much as people think it does, but if that's the only reason, LW could be programmed to look on PB.com for a matching username and increase karma based on the scores or something, much more easily than an entire prediction market could be written.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-05-07T03:52:18.883Z · LW(p) · GW(p)
That has the problem that people can inflate their scores by repeatedly predicting that the sun will rise tomorrow.
Replies from: gwern↑ comment by gwern · 2014-05-07T14:42:13.640Z · LW(p) · GW(p)
Karma is even more easily - and invisibly - gameable.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-07T16:06:07.439Z · LW(p) · GW(p)
Up vote.
↑ comment by Anders_H · 2014-05-06T04:25:08.601Z · LW(p) · GW(p)
Good point. I actually didn't know about PredictionBook. Now that it has been pointed out to me, I see that there is already a decent option, so my suggestion would be less valuable. However, I still think it would be useful to have a prediction market that operates with Intrade rules. Whether that is worth writing the code is another matter.
↑ comment by Stefan_Schubert · 2014-05-05T16:45:42.055Z · LW(p) · GW(p)
I think it's a very good idea. I also like the "tax on bs" metaphor. I like the idea of bullshitters getting punished! :)
I think it should be remembered, though, that with respect to many predictions, luck is at least as important as skill/knowledge. Of course, if you have many questions the luck/noise element is reduced and the signal/skill element is strengthened, but it nevertheless is something to consider.
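A quick simulation illustrates the luck-vs-skill point: over a handful of questions a lucky guesser can win, but average Brier scores over many questions reliably separate a calibrated forecaster from a coin-flipper. The model below (a forecaster who always states 0.7 and is right with some fixed frequency) is a toy assumption, not a claim about real predictors.

```python
import random

def brier(prob, outcome):
    """Squared-error score for a single binary prediction (lower is better)."""
    return (prob - outcome) ** 2

def mean_score(hit_rate, n, rng):
    """Average Brier score over n questions for a forecaster who states
    probability 0.7 and is right with frequency `hit_rate` (a toy model)."""
    total = 0.0
    for _ in range(n):
        outcome = 1 if rng.random() < hit_rate else 0
        total += brier(0.7, outcome)
    return total / n

rng = random.Random(0)
# Over only a few questions a lucky guesser can beat the skilled forecaster;
# over 1000 questions the skill difference dominates the noise.
skilled = mean_score(0.7, 1000, rng)    # right 70% of the time, expected score ~0.21
guesser = mean_score(0.5, 1000, rng)    # right 50% of the time, expected score ~0.29
print(skilled < guesser)                # almost surely True at this sample size
```

The gap between the two expected scores is fixed, while the noise in each average shrinks roughly as 1/sqrt(n), which is the formal version of the signal-strengthening point above.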
↑ comment by ChristianKl · 2014-05-07T16:07:00.869Z · LW(p) · GW(p)
I would personally allow free account creation but give people an in-game salary of currency for every day on which they engage in trades.
Developing a good prediction market that people actually want to use is a bigger problem. PredictionBook sort of works, but it could work better than it does at the moment.
PredictionBook already exists and is open source. If you want, you could probably write a plugin that adds prediction market functionality on top of what already exists in PredictionBook.
↑ comment by Lumifer · 2014-05-05T16:08:51.452Z · LW(p) · GW(p)
However, the prediction markets that we have are currently only available in meatspace, they have very low volume, and the rules are not ideal
The global financial markets are basically prediction markets.
If you have a prediction (a "view") on something important, you can often express that view in financial markets.
tax on bullshit
Not with "play money", it won't.
Replies from: Oscar_Cunningham, 9eB1, Anders_H↑ comment by Oscar_Cunningham · 2014-05-05T18:02:31.811Z · LW(p) · GW(p)
If I wanted to know how likely it was that Republicans would win the next election, how could I go about estimating this from the financial markets?
Replies from: gwern↑ comment by gwern · 2014-05-06T02:49:21.124Z · LW(p) · GW(p)
- http://www.fma.org/Porto/Papers/Effects_of_Partisanship_on_Sector_Performance_Paper.pdf
- http://scholar.google.com/scholar?biw=1600&bih=809&um=1&ie=UTF-8&lr=&q=related:_PUN09hrCP0gsM:scholar.google.com/
- http://blogs.cfainstitute.org/investor/2012/08/31/weekend-reading-does-the-us-presidential-election-affect-the-stock-market/
- http://www.kiplinger.com/article/investing/T043-C007-S001-how-the-stock-market-can-predict-who-will-win-the.html
The summary seems to be: look at overall index gains to predict incumbent odds; look at sectors & candidates to predict by party.
↑ comment by 9eB1 · 2014-05-05T17:50:26.151Z · LW(p) · GW(p)
If you have a prediction (a "view") on something important, you can often express that view in financial markets.
This seems false to me. Suppose there is a probability of Russia using a nuclear weapon on Crimea. You have a view on this probability, and other market participants also have a view on this probability, but you don't know what their views are. In order to determine which way you need to invest in Ukrainian and Russian stocks/currencies/etc. to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices. Since you don't have time to become an expert on every aspect of economics remotely tied to every view you have, you generally will not be able to express your views in the financial markets, unless your views all happen to revolve around the broad movements of asset prices.
The number of assets we have to invest in is really quite limited (realistically, you can invest cheaply only in equities, bonds, currencies, commodity futures, and rate futures, and other things tied to these), and in many cases there are "important events" which can have ambiguous impacts on those assets. The US presidential election, which seems to be a favorite of prediction market enthusiasts, has only an ambiguous impact on US equity market prices, and yet many people consider it an important topic. It occurs to me that in particular, events related to and actions by governments will often be ambiguously reflected by private markets.
Replies from: gwern↑ comment by gwern · 2014-05-06T02:54:39.166Z · LW(p) · GW(p)
Suppose there is a probability of Russia using a nuclear weapon on Crimea. You have a view on this probability, and other market participants also have a view on this probability, but you don't know what their views are. In order to determine which way you need to invest in Ukrainian and Russian stocks/currencies/etc. to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices.
Efficient markets. Either you think you have non-public information/superior analyses to current participants or you don't; in the latter, you should not trade at all. In the former situation, then the current prices reflect all publicly-available information about the net future prospects of Russian/Ukrainian-related assets, and you don't need to become an expert on anything except the use of nuclear weapons (which you believe the markets are currently ignorant of) since the prices of those assets are already correctly priced with neither excess gains nor losses expected. You can simply buy/sell as your unique insight tells you to.
(Your real problem is whether you can buy enough to make it worthwhile and lack of diversification & volatility means you may be right, buy appropriately, and lose anyway, but that's why you work for a hedge fund.)
Replies from: 9eB1↑ comment by 9eB1 · 2014-05-06T16:04:28.928Z · LW(p) · GW(p)
The efficient markets criticism works if you have non-public information that clearly points to a greater or lesser risk than what market participants think, but it doesn't work for non-public information that is different from market sentiments by only a degree. If you have private information that the Russian government has a 2% chance of using a nuclear weapon on Crimea (perhaps you know they will roll a 50-sided die and use them on a 1), but you can't tell whether the current market prices imply a 0% to 4% probability, you have no way of using your private information without performing a full analysis of the asset. If there were a prediction market it would be straightforward to do so, however. The same is true of private superior analyses, because they will generally differ in terms of probability by a relatively small amount.
It's really just a question of efficiency. A market for a single asset will be less efficient than the market for that asset and 10 related questions because the information costs are lower for people who have private information that bears on those questions.
Replies from: gwern↑ comment by gwern · 2014-05-17T19:06:04.355Z · LW(p) · GW(p)
but it doesn't work for non-public information that is different from market sentiments by only a degree. If you have private information that the Russian government has a 2% chance of using a nuclear weapon on Crimea (perhaps you know they will roll a 50-sided die and use them on a 1), but you can't tell whether the current market prices imply a 0% to 4% probability, you have no way of using your private information without performing a full analysis of the asset.
I disagree for several reasons:
- your example is extremely unrealistic. When do people interested in geopolitics ever get new information expressed as a precise limiting frequency like your die example? That sort of estimate doesn't exist outside of prediction markets.
- you are setting up a strawman when you say one needs to compare a 2% to a 4% estimate: you don't need to know the market's exact estimate, you merely need to know whether yours is lower. Usually, estimating a bound or inequality is a lot easier than estimating an exact value...
and specifically, by the reasoning I gave before about market efficiency, estimation is easy: when you get private information, you only need to know whether it would increase or decrease probabilities on its own. All other information is already priced in but your new information is not, by definition, and hence will shift the market price in the estimated direction, allowing you to profit.
To make your example more realistic: you learn from an informant that tactical nuclear bombs came up at the latest private discussion of the Russian cabinet; you know that the market prices on Ukrainian/Russian assets implicitly assign some probability to the use of nukes, and hence some smaller probability that the Russian cabinet is discussing their use, but the market does not know that the cabinet actually has discussed the use of nukes; you do. Now, you may have no idea whether the market assigns 0.5 or 50% to the use of nukes, but its assignment is being done in the absence of this information about their discussion. All you have to do is decide: is the Russian cabinet discussing the use of nukes evidence for or against the future use of nukes, in a purely-evidential odds or decibel Bayesian sense, independent of priors or posteriors? If it's 'for', then whatever the market probability is (you may have no idea what it is and no ability to figure it out), it will shift upwards; and since the prices reflect the probability, you have an opportunity to short.
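The direction-only argument can be made precise in odds form: whatever the market's prior, multiplying the prior odds by a likelihood ratio greater than 1 moves the probability upward. A minimal sketch (the likelihood ratio of 3 is purely illustrative):

```python
def update(prior, likelihood_ratio):
    """Posterior probability after seeing evidence with the given likelihood
    ratio P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Whatever the market's prior on the use of nukes -- 0.5% or 50% --
# evidence with likelihood ratio > 1 (e.g. the cabinet discussed it)
# shifts the probability, and hence the price, upward:
for prior in (0.005, 0.5):
    print(prior, "->", round(update(prior, 3.0), 4))
```

This is why knowing only the sign of the evidence suffices: the trader never needs to recover the market's implied prior, only to know which way any prior would move.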
If there were a prediction market it would be straightforward to do so, however.
You can do the same thing with prediction markets, assuming they're big enough that you can treat them as efficient. Did you learn new information which is positive? Buy. Negative? Short. Knowing your own subjective probability is useful mostly when you suspect markets are inefficient and you can make a profit without learning any new information. (Typically, whenever I'm trading on a prediction market, I don't even try to elicit my own subjective probability, I just anchor on the market probability and look for signs of bias or ignorance.)
It's really just a question of efficiency. A market for a single asset will be less efficient than the market for that asset and 10 related questions because the information costs are lower for people who have private information that bears on those questions.
That's not what the question was. The question was whether you were right that "to express your view, you have to become an absolute expert in any of the assets in question so that you can estimate the implied probability of current market prices". Certainly, more targeted assets will make it easier to make money off targeted insights. But there's no possible/impossible distinction where you can make money off your nuke insights on a prediction market but no one can make money off the same nuke insight on equity markets.
Replies from: 9eB1↑ comment by 9eB1 · 2014-05-18T10:12:52.246Z · LW(p) · GW(p)
It seems we agree in many areas (you seem to disagree a great deal with my tone and examples, however), so I will focus on what appears to be the core of the disagreement. You are using a framework that assumes strongly efficient markets with respect to private information, and where most private information is of the sort that has a clear impact with respect to the priors implied on the market. I am using a framework of limited market efficiency, where only information that can be profitably exploited, because it e.g. provides a high enough Sharpe ratio, will be reflected in market prices, and where private information can often have an ambiguous relationship to the current odds implied by market prices. Note that my example was based on information of a probabilistic nature, analogous to using a novel statistical model or the like, whereas your example was based on discrete information. Note also that a novel statistical model can still have elements of discrete private information, as when hedge fund analysts set up cameras to monitor the comings and goings of hotel patrons, but where such information still cashes out in terms of a probability.
As part of my framework, the question of whether you can profitably reveal information to the market and the efficiency of said market are intrinsically linked, and this is not a form of dodging the core issue.
I agree that there is not a bright line separating possible and impossible when it comes to whether information can be profitably raised in the market, but there are clearly things that fall on one side or the other of the fuzzy line. My contention is that there are many matters of importance that one can have views and information on that fall on the "not profitable" side of the line. I will retract that you necessarily have to become an absolute expert on the asset. Technically, you only need enough expertise to correctly estimate the impact of your information, but realistically you will in most cases need to become an expert on the asset (and likely multiple assets, since you will want to long some and short some in order to extract as much value as possible) in order to create those estimates (As an aside, if you didn't have to be an expert, there would be books and seminars about how to build investment strategies around more accurately predicting world events, because there are books and seminars on every conceivable investment strategy that doesn't require one to be an expert. And yet you yourself presumably profitably participate in prediction markets instead of just using those same predictions to invest in capital markets.).
Is it important who becomes the next president of the United States? Many would say that it is, and it is a perennial favorite of prediction markets. Could you build a profitable investment strategy if you ONLY knew who was going to become the president a day ahead of time? You better sit down now and do a lot more work (i.e. invest in a tremendous amount of information cost, i.e. become an expert in the relevant assets), because the impact that will have on the markets is by no means unambiguous.
As a point which hasn't been directly addressed, with respect to Lumifer's original statement:
If you have a prediction (a "view") on something important, you can often express that view in financial markets.
I assume that this can be translated to something like, "If you have views on something important, you can often express that view in the financial markets to achieve a higher risk-adjusted return than you could in absence of those views." A possible problem with estimating probabilities of events which only make up a tiny portion of the expected value of a given asset, is that the expected return is totally swamped out by the volatility. You alluded to this earlier in the parenthetical:
(Your real problem is whether you can buy enough to make it worthwhile and lack of diversification & volatility means you may be right, buy appropriately, and lose anyway, but that's why you work for a hedge fund.)
Suppose that you decide a Democratic president will win with higher probability than is expected by the market, due to either your information model or mine, and after analyzing the markets you determine that the best way to take advantage of this is to go short developed country stocks and bonds, and go long the stocks and bonds of the United States. It could easily be the case that despite having good information and the optimal strategy, the residual volatility between your hedge positions makes this a bad strategy on a risk-reward basis, because it overrides the return differential, particularly after costs. This makes it possible but unprofitable to reflect your views in the financial markets, and is a fairly fundamental issue with Lumifer's idea, since in general we expect our views to be only marginally more accurate than the market, but we observe fairly large volatility in the differences of even correlated assets. For example, correlations between US and European stock markets are on the order of .6, which leaves a substantial amount of residual volatility if you are hedging them. Fundamentally, a perfect prediction market would require two assets which are perfectly anti-correlated EXCEPT for the scenario in question, and the further you get from that ideal, the harder it is to create profitable predictions on a risk-reward basis. And the more assets that are required to build your trade, the more assets you require expertise in, and the more you pay in fees. I would contend that this is generally the case for specific predictions that do not bear directly on the movement of major assets.
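A back-of-the-envelope calculation shows how quickly residual volatility can swamp a modest edge. The numbers below are illustrative assumptions, not market data: two equity markets each with 15% annual volatility, 0.6 correlation, and a view worth a 1% expected return differential if the hedged long/short trade is right.

```python
import math

# Illustrative assumptions, not market data:
vol = 0.15    # annual volatility of each market
corr = 0.6    # correlation between the two markets
edge = 0.01   # expected return differential from the (correct) view

# Volatility of the long-one / short-the-other spread:
residual_vol = math.sqrt(vol**2 + vol**2 - 2 * corr * vol * vol)
sharpe = edge / residual_vol

print(round(residual_vol, 3))  # ~0.134: most of the position's risk survives the hedge
print(round(sharpe, 2))        # ~0.07: the edge is swamped by residual volatility
```

On these toy numbers, even a correct election call buys a Sharpe ratio well under 0.1 before costs, which is the sense in which the view is expressible but not profitably so.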
Replies from: gwern↑ comment by gwern · 2014-08-05T21:56:46.196Z · LW(p) · GW(p)
You are using a framework that assumes strongly efficient markets with respect to private information, and where most private information is of the sort that has a clear impact with respect to the priors implied on the market. I am using a framework of limited market efficiency, where only information that can be profitably exploited, because it e.g. provides a high enough Sharpe ratio, will be reflected in market prices, and where private information can often have an ambiguous relationship to the current odds implied by market prices. Note that my example was based on information of a probabilistic nature, analogous to using a novel statistical model or the like, whereas your example was based on discrete information.
OK. And I think you are wrong in thinking that markets are limited in efficiency and that clearly relevant private information nevertheless has ambiguous implications.
I agree that there is not a bright line separating possible and impossible when it comes to whether information can be profitably raised in the market, but there are clearly things that fall on one side or the other of the fuzzy line.
Transaction costs, available capital, risk, market efficiency, and other factors set a fuzzy line for what kind of information and how large a change in probability will be profitable, yes. We're dealing with real world markets and events, after all.
Technically, you only need enough expertise to correctly estimate the impact of your information, but realistically you will in most cases need to become an expert on the asset (and likely multiple assets, since you will want to long some and short some in order to extract as much value as possible) in order to create those estimates
I think most news is fairly easily evaluated. The Russian atomic bomb example may be too clean an example, but it doesn't seem terribly hard to guess whether Steve Jobs unexpectedly going to a hospital is bad or good for AAPL.
Is it important who becomes the next president of the United States? Many would say that it is, and it is a perennial favorite of prediction markets. Could you build a profitable investment strategy if you ONLY knew who was going to become the president a day ahead of time? You better sit down now and do a lot more work (i.e. invest in a tremendous amount of information cost, i.e. become an expert in the relevant assets), because the impact that will have on the markets is by no means unambiguous.
I've already linked to papers which have done the footwork for one interested in that question. It'd take maybe an hour to read them. Is that 'a lot more work'? And how hard would a finance professional with access to the relevant databases find it to replicate the analysis, even if they had no a priori beliefs about how a Democratic victory might affect various stocks?
It could easily be the case that despite having good information and the optimal strategy, the residual volatility between your hedge positions makes this a bad strategy on a risk-reward basis, because it overrides the return differential, particularly after costs. This makes it possible but unprofitable to reflect your views in the financial markets, and is a fairly fundamental issue with Lumifer's idea, since in general we expect our views to be only marginally more accurate than the market, but we observe fairly large volatility in the differences of even correlated assets. For example, correlations between US and European stock markets are on the order of .6, which leaves a substantial amount of residual volatility if you are hedging them.
I don't follow this point. Why can't I find an appropriate set of hedges? Doesn't that imply inefficiencies?
Fundamentally, a perfect prediction market would require two assets which are perfectly anti-correlated EXCEPT for the scenario in question, and the further you get from that ideal, the harder it is to create profitable predictions on a risk-reward basis.
I assume you mean by 'perfect' a prediction market in which even the slightest bit of new evidence can be profitably exploited because there are no kinds of transaction costs or other friction? That may be true, but I don't think it meaningfully refutes Lumifer's observation that markets allow for expression of views on "something important".
↑ comment by Anders_H · 2014-05-05T16:11:57.372Z · LW(p) · GW(p)
This is true, but I was talking about the prediction markets that we have within the Less Wrong community. We may be interested in predictions other than those that go into financial markets.
From what I understand, prediction markets are available at many rationalist houses and other places Less Wrong members gather. It would be great if we could link these together.
As for the "tax on bullshit", it wouldn't be a tax that is paid in money, but as long as your balance is publicly linked to your Less Wrong account, it would be a tax that is paid in credibility points. I concede that maybe I should have used a better word than "tax".
Replies from: Lumifer↑ comment by Lumifer · 2014-05-05T16:17:25.502Z · LW(p) · GW(p)
prediction markets are available at many rationalist houses and other places Less Wrong members gather.
What does that mean? That people make public bets against each other? I'm not sure that's enough to qualify as a "market".
it would be a tax that is paid in credibility points
First, I don't think LW karma = credibility. Second, karma is, let's say, prone to inflation :-/
Replies from: Anders_H↑ comment by Anders_H · 2014-05-05T16:25:49.110Z · LW(p) · GW(p)
These are public bets that have some features of a prediction market. People assign their probabilities to different possible outcomes, making a bet against the previous participant in the market. Participants receive "Bayes points" based on the accuracy of their predictions, and there is a scoreboard which aggregates the results from closed markets.
These things are not perfect; they lack certain features of a real market. This is why I am proposing to introduce actual prediction markets.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-05T16:33:04.415Z · LW(p) · GW(p)
proposing to introduce actual prediction markets
Markets work because there are strong incentives to be right and it's quite painful to be wrong. This means that you must put at risk valuable things, usually money.
If you want the prediction markets to operate using "Bayes points", these points must be valuable and their supply must be limited. In other words, they must be like money. That's... going to be a problem.
comment by iarwain1 · 2014-05-05T15:54:40.074Z · LW(p) · GW(p)
Recently I've been trying to catch up in math, with a goal of trying to get to calculus as soon as possible. (I want to study Data Science, and calculus / linear algebra seems to be necessary for that kind of study.) I found someone on LW who agreed to provide me with some deadlines, minor incentives, and help if I need it (similar to this proposal), although I'm not sure how well such a setup will end up working.
Originally the plan was that I'd study the Art of Problem Solving Intermediate Algebra book, but I found that many of the concepts were a little advanced for me, so I switched to the middle of the Introduction to Algebra book instead.
The Art of Problem Solving books deliberately make you think a lot, and a lot of the problems are quite difficult. That's great, but I've found that after 2-3 hours of heavy thinking my brain often feels completely shot and that ruins my studying for the rest of the day. It also doesn't help that my available study time usually runs from about 10am-2pm, but I often only start to really wake up around noon. (Yes, I get enough sleep usually. I also use a light box. But I still often only wake up around noon.)
One solution I've been thinking of would be to take the studying slower: I'd study math only from 12-2, and before that I'd study something else, like programming. The only problem with that is that cutting my study time in half means it'll take twice as long to get through the material. At that rate I estimate it'll take approximately a year, perhaps a bit more, before I can even start Calculus. Maybe that's what's needed, but I was hoping to get on with studying data science sooner than that.
Another possible solution would be to try an easier course of study than the AoPS books. I've had some good experiences with MOOCs, so perhaps that might be a good route to take. To that end I've tentatively signed up to this math refresher course, although I don't really know anything about it. Or perhaps I could just CliffNotes my way through Algebra II and Precalculus, and then take a Calculus MOOC. I wouldn't get the material nearly as well, of course, but at least I'd be able to get to Calculus and move on with my data science studies from there. I could even do one of these alternatives while also doing the AoPS books at a slower pace. That way I could get to data science studying as soon as possible, and I'd also eventually get a more thorough familiarity with the material through the AoPS books.
What would you suggest?
Replies from: passive_fist, zedzed↑ comment by passive_fist · 2014-05-05T22:40:01.538Z · LW(p) · GW(p)
Be very very careful of studying beyond the level you think is comfortable. My experience has been that you cannot push yourself to learn difficult things, especially math, faster than a certain pace. Sure, your limit may be 20% higher than what you think it is, but it's not 200% higher. Spending more time on a task when you just don't feel up to it is useless, because instead of thinking you'll just be spending more time staring at the page and having your mind drift off.
I've found that the various methods of 'productivity boosting' (pomodoros, etc) are largely useless and do one of two things: Either decrease your productivity, or momentarily increase it at the expense of a huge decrease later on (anything from 'feeling fuzzy for a couple of days' to 'total burnout for 3 weeks'). Unless you have a mental illness, your brain is already a finely-tuned machine for learning and doing. Don't fool yourself into thinking you can improve it just by some clever schedule rearrangement.
The point to all of this is that you should refrain from 'planning ahead' when it comes to learning. Sure, you should have some general overall sketch of what you want to learn, but at each particular moment in time, the best strategy is to simply pick some topic and try to learn it as best you can, until you get tired. Then rest until you feel you can go at it again. And avoid internet distractions that use up your mental energy but don't cause you to learn anything.
Replies from: raisin↑ comment by raisin · 2014-05-06T17:00:04.744Z · LW(p) · GW(p)
your brain is already a finely-tuned machine for learning and doing.
Does this by extension imply that the type of instrumental rationality training advocated by LW is useless? Why, why not?
Replies from: Risto_Saarelma, lmm, passive_fist↑ comment by Risto_Saarelma · 2014-05-06T17:46:33.906Z · LW(p) · GW(p)
The general rule of thumb for raw intelligence probably applies, you can damage it with unwise actions (like eating lead paint or taking up boxing), but there aren't really any good ways to boost it beyond its natural unimpeded baseline. Good instrumental rationality can help you look out for and avoid self-sabotaging behavior, like overworking your way into burnout.
Replies from: passive_fist↑ comment by passive_fist · 2014-05-06T22:11:21.471Z · LW(p) · GW(p)
Decreasing work-load when you feel tired - the thing you naturally want to do - is also a reliable way to avoid burnout.
↑ comment by lmm · 2014-05-06T20:51:31.817Z · LW(p) · GW(p)
Largely, but not entirely. There are cases where evolution optimises for something different from what you want. And there are cases where the environment has changed faster than evolution can track.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T21:27:08.405Z · LW(p) · GW(p)
There are cases where evolution optimises for something different from what you want.
Evolution always optimizes for the same thing :-/
If you want something different, that's your problem :-D
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-05-07T07:26:38.521Z · LW(p) · GW(p)
Is it time to restart the "Read the Sequences" meme?
Specifically: The Tragedy of Group Selectionism
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-07T16:09:04.808Z · LW(p) · GW(p)
Well, at least read the wiki entry.
↑ comment by passive_fist · 2014-05-06T22:09:05.609Z · LW(p) · GW(p)
If some particular method of learning can be shown, through evidence, to be an improvement long-term, then by all means go for it. But until then, your prior belief has to be that it isn't.
↑ comment by zedzed · 2014-05-05T16:24:02.238Z · LW(p) · GW(p)
One of my professors once mentioned that there's an upper limit to how much learning you can do in a sleep cycle [citation needed]. This is congruent with my experience, both before and after he mentioned that, so I tend to believe it. Personally, I tend to max out around 3-4 hours, so the times you're talking about seem reasonable. If you can restructure your work times, napping is a good strategy; I've talked to a few people who report getting through grad school by napping once they'd saturated their brain's capacity to learn new stuff.
Interleaved practice is good. This study had subjects practice finding the volume of unconventional geometric solids. One group clustered their practice; they found the volumes of a bunch of wedges, then a bunch of spheroids, etc. The other group had their practice problems mixed. On a final test, the former group got 20% right, and the latter group got 63% right. citation.
What this suggests is you should perhaps study programming and algebra at the same time, switching between the two fairly frequently. It feels like you're going slower, but, as the authors of the book emphasize, you're trading the illusion of learning for more durable learning.
The AoPS textbooks are really, really good. In fact, I'm pretty sure they're the only good algebra textbooks you're going to find, unless you count abstract or linear algebra; most textbooks at that level are mediocre. As luke_prog has mentioned, good textbooks are usually the quickest and best way to learn new material. Quality learning takes time, and you're doing yourself no favors by spending that time looking for faster alternatives.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-05T16:33:57.599Z · LW(p) · GW(p)
there's an upper limit to how much learning you can do in a sleep cycle
By "learning" do you actually mean "memorization"?
Replies from: zedzed↑ comment by zedzed · 2014-05-05T16:54:57.494Z · LW(p) · GW(p)
By learning, I actually mean
The act of acquiring new, or modifying and reinforcing, existing knowledge, behaviors, skills, values, or preferences and may involve synthesizing different types of information
My experience is in math (and the prof in question taught math), which is fairly light on the memorization. Sure, you memorize definitions, but most of the effort is in internalizing new ways of thinking about things. Like, the derivation of the quadratic formula doesn't contain any new information to memorize, and I definitely didn't memorize the steps, but when I learned it, I spent a bunch of time looking at what the steps were, why they were legal, and why we decided to use those particular manipulations to solve the problem, and internalizing those things. And after doing enough stuff like that, I'd try to internalize some new stuff, and my brain would say "No!" And then I stopped.
ETA: I'm not sure if I'd call that memorization. I'm certainly talking about putting things in your head that weren't there before, but it's not the type of thing you could easily make into an Anki card.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-05T17:28:10.905Z · LW(p) · GW(p)
I'm not sure if I'd call that memorization. I'm certainly talking about putting things in your head that weren't there before
Well, that's the thing. Learning can be quite different. Some of it is putting new things into your head. But some of it is rearranging your internal maps. And some of it is generating new connections between things inside your head. A whole bunch of it is all of the above.
I understand the idea of limited capacity per sleep cycle -- I'm curious whether it works in different ways for different kinds of learning.
Replies from: Barry_Cotter↑ comment by Barry_Cotter · 2014-05-06T13:53:48.474Z · LW(p) · GW(p)
I understand the idea of limited capacity per sleep cycle -- I'm curious whether it works in different ways for different kinds of learning.
Personally I'd be surprised if it did. The maximum amount of deliberate practice you can get in a day tops out at 3-4 hours, according to K. Anders Ericsson. I think that's quite close to the limits of what the brain can do. I'd honestly be surprised if napping resets that clock; if it did, Ericsson or other psychologists would likely have uncovered it.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T16:03:54.630Z · LW(p) · GW(p)
The maximum amount of deliberate practice you can get in a day tops out at 3-4 hours, according to K. Anders Ericsson.
Do you have a link?
Replies from: badger↑ comment by badger · 2014-05-06T16:46:37.225Z · LW(p) · GW(p)
See pp. 391-392 of The Role of Deliberate Practice in the Acquisition of Expert Performance, the paper that kicked off the field. A better summary is that 2-4 hours is the maximum sustainable amount of deliberate practice in a day.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T17:03:35.459Z · LW(p) · GW(p)
Ah, so that's where you are coming from.
Well, first of all, "deliberate practice" is different from "learning". The paper is concerned with ability to perform, which is the goal of deliberate practice, not with understanding, which is the goal of learning.
Second, the paper is unwilling to commit to this number saying (emphasis mine) "...raising the possibility of a more general limit on the maximal amount of deliberate practice that can be sustained over extended time without exhaustion."
I certainly accept the idea that resources such as concentration, attention, etc. are limited (though they recover over time) and you can't just be at your best all your waking time. But there doesn't seem to be enough evidence to fix hard numbers (like 2-4 hours) for that. And, of course, I expect there to be fair amount of individual variation, as well as some dependency on what exactly is it that you're learning or practicing.
comment by NancyLebovitz · 2014-05-12T01:46:58.742Z · LW(p) · GW(p)
The five defined depression biotypes are:
“It’s not serotonin deficiency, but an inability to keep serotonin in the synapse long enough. Most of these patients report excellent response to SSRI antidepressants, although they may experience nasty side effects,” Walsh said.
Pyrrole Depression: This type was found in 17 percent of the patients studied, and most of these patients also said that SSRI antidepressants helped them. These patients exhibited a combination of impaired serotonin production and extreme oxidative stress.
Copper Overload: Accounting for 15 percent of cases in the study, these patients cannot properly metabolize metals. Most of these people say that SSRIs do not have much of an effect—positive or negative—on them, but they report benefits from normalizing their copper levels through nutrient therapy. Most of these patients are women who are also estrogen intolerant.
“For them, it’s not a serotonin issue, but extreme blood and brain levels of copper that result in dopamine deficiency and norepinephrine overload,” Walsh explained. “This may be the primary cause of postpartum depression.”
Low-Folate Depression: These patients account for 20 percent of the cases studied, and many of them say that SSRIs worsened their symptoms, while folic acid and vitamin B12 supplements helped. Benzodiazepine medications may also help people with low-folate depression.
Walsh said that a study of 50 school shootings over the past five decades showed that most shooters probably had this type of depression, as SSRIs can cause suicidal or homicidal ideation in these patients.
Toxic Depression: This type of depression is caused by toxic-metal overload—usually lead poisoning. Over the years, this type accounted for 5 percent of depressed patients, but removing lead from gasoline and paint has lowered the frequency of these cases.
Those people ranting about anti-depressants and school shootings may have been partially on to something.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-13T15:26:24.139Z · LW(p) · GW(p)
Unfortunately, the source for this information about biotypes and depression is looking very sketchy
Replies from: TylerJay↑ comment by TylerJay · 2014-05-16T23:57:05.992Z · LW(p) · GW(p)
That's unfortunate. Knowledge like this would be incredibly useful for treatment. Rather than just throwing drugs at a problem and seeing what sticks, doctors and psychiatrists could actually try treating each type in turn, or even better, test for markers of each condition.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-17T00:06:55.468Z · LW(p) · GW(p)
I'm hoping the information will pan out. Sketchy source doesn't = guaranteed false.
comment by DanielLC · 2014-05-05T19:47:43.612Z · LW(p) · GW(p)
According to the principle of enlightened self-interest, you should help other people because this will help you in the long run. I've seen it argued that this is the reason why people have an instinct to help others. I don't think that this would mean helping people the way an Effective Altruist would. It would mean giving the way people instinctually do. You give gifts to friends, give to your community, give to children's hospitals, that sort of thing.
This makes me wonder about what I'm calling enlightened altruism. If you get power from helping people in that way, then you can use the power to help people effectively.
Replies from: Manfred↑ comment by Manfred · 2014-05-05T21:56:28.068Z · LW(p) · GW(p)
Well, we can use the outside view here. If we look at people who are particularly successful, did they get that way by helping others? What's the proportion relative to poor people?
I don't think this backs up the idea of enlightened self-interest very well. Sure, you have to "play by the rules" to be successful, but going above and beyond doesn't seem to lead to additional success.
Another question we might ask is "where do people's instincts for giving come from?" If you believe Dawkins et al., it's the selfishness of genes, which does not have to causally pay off for the organism (instead, the payoff is acausal). This is not the sort of thing where giving according to our instincts will lead to us getting more money.
Replies from: army1987, DanielLC↑ comment by A1987dM (army1987) · 2014-05-06T18:39:06.143Z · LW(p) · GW(p)
If we look at people who are particularly successful
Survivorship bias alert!
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-05-06T19:13:14.684Z · LW(p) · GW(p)
He qualified that by "What's the proportion relative to poor people?" thus not just looking at the survivors.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-05-07T07:25:04.672Z · LW(p) · GW(p)
Imagine a planet with one billion people each of whom has $1000, except the 99,999,990 people who played the lottery and lost and now have $990 each and the 10 people who played the lottery and won and now have $1,000,990 each. 100% of the rich people played the lottery whereas only 10% of the poor people did so, but that doesn't mean playing the lottery was a good idea.
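The arithmetic in this toy example can be checked directly; a small sketch using the hypothetical numbers above:

```python
# Hypothetical figures from the thought experiment above.
winners, losers = 10, 99_999_990
players = winners + losers
starting_wealth = 1000

# Average final wealth among people who played the lottery.
avg_player = (winners * 1_000_990 + losers * 990) / players

# Playing lost money on average...
assert avg_player < starting_wealth
# ...yet 100% of the rich are players, because the tiny fraction
# of players who won account for all the visible successes.
assert winners / players < 0.001
```

This is the survivorship-bias point in miniature: conditioning on "is rich" selects for having played, even though playing had negative expected value.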
↑ comment by DanielLC · 2014-05-06T01:30:24.348Z · LW(p) · GW(p)
Sure, you have to "play by the rules" to be successful, but going above and beyond doesn't seem to lead to additional success.
My point is more about giving the standard amount to the standard charities, rather than earmarking it all for the most efficient one.
which does not have to causally pay off for the organism (instead, the payoff is acausal)
I'm not sure what you mean here. Can you give an example?
Replies from: Manfred↑ comment by Manfred · 2014-05-07T01:25:08.203Z · LW(p) · GW(p)
which does not have to causally pay off for the organism (instead, the payoff is acausal)
I'm not sure what you mean here. Can you give an example?
Suppose I have a gene that makes me cooperate in a prisoner's dilemma with my relatives. This gene benefits me, because now I can cooperate with my cousins and get the better payoff (assuming my cousins also have this gene!). But you know what would be even better? If my cousins cooperated with me but I defected. So from a causal decision theory standpoint, my best route is to ignore my instincts and defect.
But if I had a gene that said "defect with my cousins," that would mean my cousins defect back, and so we all lose. So our instincts can be beneficial even when the individual best strategy doesn't line up with them (because our instincts can be correlated with other humans').
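The payoff logic here can be made concrete with a standard prisoner's dilemma matrix (the specific numbers are illustrative assumptions, not from the comment):

```python
# (my_move, their_move) -> my payoff; C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Causal view: holding their move fixed, defecting always pays more for me.
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]

# But if a shared gene correlates my move with my cousins', the only
# reachable outcomes are (C, C) or (D, D) -- and there, cooperation wins.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

The "acausal" benefit is exactly the gap between the last two assertions: my cooperating doesn't cause my cousins to cooperate, but a gene that makes us all cooperate outcompetes one that makes us all defect.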
Replies from: Lumifer↑ comment by Lumifer · 2014-05-07T02:04:22.077Z · LW(p) · GW(p)
But you know what would be even better? If my cousins cooperated with me but I defected. So from a causal decision theory standpoint, my best route is to ignore my instincts and defect.
This reasoning assumes that you are special and significantly different from your cousins. If you're not, your cousins follow the same strategy and you all defect, gene or no gene.
Replies from: DanielLC↑ comment by DanielLC · 2014-05-07T16:46:50.050Z · LW(p) · GW(p)
That's what acausal benefit means.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-07T17:01:11.447Z · LW(p) · GW(p)
Google: No results found for "acausal benefit"
Can you elaborate?
Replies from: DanielLC↑ comment by DanielLC · 2014-05-07T20:36:29.259Z · LW(p) · GW(p)
It's mostly limited to this site, and I don't know how much that exact wording is used, but it refers to things like Newcomb's problem, where you can get some benefit from what you do, but you're not actually causing it.
I should add that when I told Manfred I didn't understand, it was more that I didn't understand how it applied to that situation.
Replies from: Lumifer, Nornagest↑ comment by Nornagest · 2014-05-07T21:00:12.523Z · LW(p) · GW(p)
The wiki article on acausal trade may prove helpful.
comment by gjm · 2014-05-09T10:31:41.841Z · LW(p) · GW(p)
Elsewhere in comments here it's suggested that one reason why LW (allegedly) has less interesting posts and discussions than it used to is that "Eliezer has taken to disseminating his current work via open Facebook discussions". I am curious about how the rest of the LW community feels about this.
Poll! The fact that Eliezer now tends to talk about his current work on Facebook rather than LW is ...
[pollid:697]
(For the avoidance of doubt, I am not suggesting that Eliezer has any obligation to do what anyone votes for here. Among many reasons there's this: If he's posting things on FB rather than LW because there are lots of people who want to read his stuff but for whatever reason will never read anything on LW then this poll can't possibly detect that other than weakly and indirectly.)
Replies from: wedrifid↑ comment by wedrifid · 2014-05-09T17:42:06.246Z · LW(p) · GW(p)
The main problem is that facebook encourages a drastically different quality of thought and expression than lesswrong does. The quality of thought in Eliezer's comments on facebook is sloppy. I chose to unfollow him on facebook because seeing Eliezer at his worst makes it rather a lot more difficult to appreciate Eliezer at his best (contempt is the mind killer). I assumed that any particularly interesting work he did (that is safe to share with the public) would end up finding its way into a less transient medium than facebook eventually...
...Have I been missing anything exciting?
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-05-12T08:23:41.918Z · LW(p) · GW(p)
facebook encourages drastically different quality of thought and expressions
Not sure if this applies to Eliezer's debate threads, but not having downvotes is a horrible setup for a debate. Every stupid comment is either ignored, which seems like "silence is consent", or starts a flamewar. There is simply no way to reduce noise.
Replies from: wedrifid, army1987↑ comment by wedrifid · 2014-05-12T16:03:04.290Z · LW(p) · GW(p)
Not sure if this applies to Eliezer's debate threads, but not having downvotes is a horrible setup for a debate. Every stupid comment is either ignored, which seems like "silence is consent", or starts a flamewar. There is simply no way to reduce noise.
There is no way to reduce noise for everyone else. For myself I've adopted a strategy of using the 'block user' feature whenever I encounter a comment that I especially wish to downvote. These days I consider 'block' to be a far more critical feature than downvoting is (despite remaining a big fan of downvoting liberally).
↑ comment by A1987dM (army1987) · 2014-05-12T11:24:55.217Z · LW(p) · GW(p)
You can delete comments to your posts, and IIRC EY has endorsed doing so.
comment by Raythen · 2014-05-06T15:38:29.836Z · LW(p) · GW(p)
I wonder what you think of the question of the origin of consciousness, i.e. "Why do we have internal experiences at all?" and "How can any physical process result in an internal/subjective experience?"
I've read some material on the subject before, and reading the quantum physics and identity sequence got me thinking about this again.
Replies from: None, Alejandro1↑ comment by [deleted] · 2014-05-06T16:42:09.890Z · LW(p) · GW(p)
Douglas Hofstadter is the go-to, mainstream, "hey I recognize that name" authority, though it obviously should be noted that he is a cognitive scientist, not a biologist, neurologist, or neurobiologist. So, you couldn't build a brain from reading Gödel, Escher, Bach. The only other material I intimately know that discusses the origin of consciousness is Carl Sagan's The Dragons of Eden, which, again, is mainstream and pop science. It's fun reading and enjoyable, but you can't build a brain from it. Someone else can probably suggest better sources for more study.
Of course, some components of these questions can be answered by reducing the question to find out more about what you're looking for.
What's the make up of an internal experience? What are its moving parts? How do you build it?
How are subjective experiences not physical processes? If they aren't physical, what are they?
Taboo "internal/subjective experiences." What are you left with to solve? What mechanics remain to be understood?
Since you've read through the quantum physics sequence, I'm sure you've been exposed to these ideas already. I'm not a neuroscientist or a cognitive scientist. I know very little about the brain that wasn't used for blunt symbolism in Neon Genesis or Xenogears. But I'd guess that, whatever mechanism(s) allows for consciousness, it's built using the matter available. No tricks or sleight of hand.
Replies from: Raythen↑ comment by Alejandro1 · 2014-05-07T02:36:32.192Z · LW(p) · GW(p)
My suggestion would be to start with Dennett's Consciousness Explained. It tackles exactly the questions you are interested in, and it is much more entertaining than the average philosophy/neurology book on the topic.
comment by witzvo · 2014-05-08T05:11:20.326Z · LW(p) · GW(p)
Links: Young blood reverses age-related impairments in cognitive function and synaptic plasticity in mice (press release)(paper)
I think the radial arm water maze experiment's results are particularly interesting; it measures learning and memory (see fig 2c which is visible even with the paywall). There's a day one and day two of training and the old mice (18 months) improve somewhat during the first day and then more or less start over on the second day in terms of the errors they are making. This is also true if the old mice are treated with 8 injections of old blood over the course of 3 weeks (the new curves lie pretty much on top of the old curves in supplemental figure 7d). Young mice (3 months) perform better than the old mice (supplemental figure 5d) they learn faster on the first day and retain it when the second day starts (supp 7d).
However, if you give 8 injections of 100 microliters of blood from 3-month-old mice to 18-month-old mice, the treated mice perform dramatically better than the old-blood-treated old mice (2c) and much more like young mice (this comparison is less certain; I'm comparing one line from 2c to one line from supp. 7d, but that's how it looks by eye).
One factor in the new blood that plays a role is GDF11. From another paper: "we show that GDF11 alone can improve the cerebral vasculature and enhance neurogenesis"
The New York Times gives an overview and covers other known effects of young blood, such as rejuvenating the musculature, heart, and vasculature of old mice: Young Blood May Hold Key to Reversing Aging; see e.g. Restoring Systemic GDF11 Levels Reverses Age-Related Dysfunction in Mouse Skeletal Muscle
comment by Vulture · 2014-05-06T16:49:43.101Z · LW(p) · GW(p)
Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder? If so, what was it? [either a list of some common ones and an "other" box, or, ideally, a full drop-down of DSM-5 diagnoses. Plus a troll-bait non-disorder and a "prefer not to say", of course]
Replies from: Vulture, NancyLebovitz, Lumifer↑ comment by NancyLebovitz · 2014-05-08T15:42:44.104Z · LW(p) · GW(p)
I think I'd want a second question about the severity of the disorder, including whether person thinks the disorder has some advantages.
↑ comment by Lumifer · 2014-05-06T17:14:23.267Z · LW(p) · GW(p)
Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder?
...and a follow-up question: Have you ever self-diagnosed yourself with a mental disorder?
:-)
Replies from: Vulture↑ comment by Vulture · 2014-05-06T18:38:56.827Z · LW(p) · GW(p)
Would that be interesting enough as a question to be worth including? I imagine there's a lot of variability in self-diagnosis.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T18:57:20.471Z · LW(p) · GW(p)
The first interesting point is the one-bit yes/no answer.
I would not expect a majority of the general population to self-diagnose with a mental disease at any point in their lives. However for certain specific groups this changes. One group of interest is high-IQ reflexive self-doubting people. Another group is freaks, that is, people who are clearly weird/strange/different from those around them for whatever reason. Yet another group is borderline cases, those whose symptoms are not strong or pronounced enough for a clinical diagnosis and yet who are not entirely "normal" anyway. And another group is a variety of neurodiverse people.
Replies from: mare-of-night, Vulture↑ comment by mare-of-night · 2014-05-08T14:06:20.381Z · LW(p) · GW(p)
There are also people who do have a disorder, but have reasons for not seeing a doctor about it. (Lack of funds, not expecting treatment to help, not needing treatment, etc.)
Replies from: Lumifer↑ comment by Lumifer · 2014-05-08T14:50:59.050Z · LW(p) · GW(p)
Do you mean "reasons" or do you mean "rational reasons"?
The opinion of someone who does have a mental disorder on whether treatment will help or is needed, that opinion is... suspect.
Replies from: mare-of-night↑ comment by mare-of-night · 2014-05-09T05:31:11.577Z · LW(p) · GW(p)
In this context, they don't have to be good reasons - my point was that a self diagnosis doesn't necessarily disagree with what a doctor would say if asked.
↑ comment by Vulture · 2014-05-06T20:18:37.416Z · LW(p) · GW(p)
Okay, that makes sense. And although it might take some clever structuring, I think it might be interesting to try to determine how frequently those self-diagnoses were accurate... something about "confirmed by a medical professional", perhaps?
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T20:49:43.902Z · LW(p) · GW(p)
something about "confirmed by a medical professional", perhaps?
This is tricky ground. If you want more follow-up questions, the first probably should be "Have you, of your own will, talked to a mental health professional about an assessment or a diagnosis?". Again, the majority of the general population would answer "no" to this.
Replies from: Nornagest, Vulture↑ comment by Nornagest · 2014-05-06T21:23:29.394Z · LW(p) · GW(p)
I seem to recall something like 30% of the adult American population being in therapy or having been recently. That's not a majority, but it's pretty substantial, and they didn't get there by magic.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T21:29:20.328Z · LW(p) · GW(p)
seem to recall something like 30% of the adult American population being in therapy or having been recently.
My impression is that this mostly involves people going to their doctor and saying "Doctor, I feel horrible!". And the good doctor says "Sure, try these antidepressants!" (yes, I know I'm exaggerating).
That's a different thing from "Doctor, I believe I'm mentally ill".
Replies from: Nornagest↑ comment by Nornagest · 2014-05-06T21:37:00.022Z · LW(p) · GW(p)
Depression is a mental illness. You might not go to the doctor and ask about depression (though I doubt this is anywhere near as uncommon as you're making it out to be), but going to the doctor and saying "Doc, I can't sleep, feel sad all the time, everything I do seems pointless, etc." is as much asking for a consultation on mental illness as "Doc, I've got this nasty bullseye-shaped rash on my leg and I've got a fever and a bad headache" is asking for a consultation on Lyme disease.
The standards of diagnosis might not be as rigorous, but that's a separate issue.
Replies from: fubarobfusco, Lumifer↑ comment by fubarobfusco · 2014-05-07T04:42:33.293Z · LW(p) · GW(p)
Then there's me.
"Doctor, I can't sleep!" "Here, take this Ambien." "Ambien scares the crap out of me; it makes my friend call me up late at night and ramble incoherently at me, and I've heard it makes people have sex and forget it happened." "Eh. Take it anyway, that doesn't happen to most people."
"Doctor, I still can't sleep, I worry all the time, and it's wrecking my motivation at work. And the Ambien works, but it makes me trip out more than I probably should most nights." "You have an anxiety disorder. Here, go to this psychiatrist, Doctor #2. And don't take so much Ambien."
"Doctor #2, I can't sleep, I worry all the time, and it's wrecking my motivation at work. Oh, and Ambien makes me trip out before I fall asleep." "You have anxiety and depression. Here, take these antidepressants, and these benzodiazepines if you need them, plus these folates and vitamin D ... oh, and replace that Ambien with this Lunesta, and come back every week. And let's talk about the work situation, something's messed up there ..."
Anecdotal, sure; and pretty recent. But I didn't start out with the idea "I'm depressed and should seek antidepressants". I thought I had a sleep disorder, but it turns out our reality doesn't issue time machines for those.
↑ comment by Lumifer · 2014-05-07T01:03:09.972Z · LW(p) · GW(p)
Yes, if the question were "How many people go to a doctor to complain of symptoms of mental illnesses" then sure, a large chunk of the general American population (still don't know if a majority) would qualify.
However recall the context. We started with the question "Have you ever self-diagnosed yourself with a mental disorder?" and are talking about the follow-up to it. Here the question about going to the doctor means mostly "Did you take your self-diagnosis seriously enough to talk to a medic about it?" And, still within this context, the question is much more like "I think I'm mentally ill, is that so?" than "I can't sleep and life is pointless, how do I fix that?"
Replies from: Nornagest↑ comment by Nornagest · 2014-05-07T01:38:14.069Z · LW(p) · GW(p)
I was mostly replying to the bit about the general population. In the context of a follow-up question, you might get some quite different results.
Actually, I'm not at all sure if you'd even get a higher percentage of yes respondents than you would in the general population; there's a lot of things I get the sense that a self-diagnosis could be pointing to, most of them likely anticorrelated with seeking formal diagnosis. Charitably, it might indicate an attempt to find out what's going on with one's head in an absence of resources or motivation, or in the presence of social or communication issues or other life circumstances that make one less likely to immediately seek help. Less charitably, it might indicate attention-seeking behavior of some sort, or a trivial approach to the whole issue.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-07T01:55:29.595Z · LW(p) · GW(p)
I agree that self-diagnosis could be pointing to multiple, different things. Don't know if there'll be much attention-seeking in the current crowd -- "I'm so cool I'm depressed and I'll say I have MDP to make me extra cool" is a kinda early high school thing and most people grow out of it fairly quickly. People who don't grow out of it are, um, easily recognizable.
↑ comment by Vulture · 2014-05-06T21:14:07.843Z · LW(p) · GW(p)
Again, the majority of the general population would answer "no" to this
Are you sure about that?
Replies from: Lumifer↑ comment by Lumifer · 2014-05-06T21:19:48.003Z · LW(p) · GW(p)
I don't have data, but my prior is fairly strong.
There are a lot of (temporarily) depressed teenagers, but it's rarely clinical and they rarely go for a formal evaluation to a psychiatrist or a psychotherapist.
How many people, do you think, go to a doctor and say "I think I'm mentally ill"?
Replies from: Vulture↑ comment by Vulture · 2014-05-06T21:43:30.522Z · LW(p) · GW(p)
How many people, do you think, go to a doctor and say "I think I'm mentally ill"?
Ah, when you phrase it like that I realize that my estimate is rather low. Near vs. Far mode, I guess. Since it's relatively unlikely that someone would do that if they weren't actually mentally ill, and some mental illness is mild enough that one wouldn't bother, and a lot of the severe ones could prevent someone from consulting a doctor on their own, a pretty low proportion of the population seems reasonable.
Does that line up with your reasoning?
edit: I think that part of what was muddling me was that your original phrasing ("talked to a mental health professional about an assessment or a diagnosis") was sort of unclear, so I resorted to nearby heuristics rather than trying to parse it properly. We might want to fix that up before putting it on the survey.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-07T01:06:41.414Z · LW(p) · GW(p)
Well, I meant this in the context of being a follow-up to the previous question about self-diagnosis. So it mostly means "Did you take your self-diagnosis seriously enough to go to a doctor?"
Such a question outside of this context needs to be more precisely formulated, I think. As we were discussing with Nornagest, going to a doctor and saying "I can't sleep, life sucks, can you help with that?" is sufficiently common.
comment by pan · 2014-05-06T11:38:02.752Z · LW(p) · GW(p)
I've been wondering a lot about whether or not I'm acting rationally with regards to the fact that I will never again be as young as I am now.
So I've been trying to make a list of things I can only do while I'm young, so that I do not regret missing the opportunity later (or at least rationally decided to skip it). I'm 27 so I've already missed a lot of the cliche advice aimed at high school students about to enter college, and I'm already happily engaged so that cuts out some other things.
Any thoughts on opportunities only available at a certain age?
Replies from: None↑ comment by [deleted] · 2014-05-06T12:45:40.416Z · LW(p) · GW(p)
One point, just a nitpick: I would suggest not to aim to act "rationally." Aim to win. I may be assuming overmuch about your intended meaning, but remember, if your goal is to do what is rational rather than to do what is best/right/winning, you'll be confused.
That said, I understand what you mean. There are activities I know can be done now, in youth, that, while maybe not impossible in my 40s, 50s, or 60s, would be more difficult.
First, your health. Work out, eat right, stay clean. Do everything that can maximize your health NOW and do it to the utmost that you can. If you start working on your health now, the long term payoffs will be exponential rather than linear. The longer you wait to maximize your health, the greater your disadvantage, the less your payoff. EDIT: (As I have no citation to back this claim up, it'd be best not to take my word on this. I would still suggest not delaying improving your health because doing so will result in benefits now, regardless of whether health improvements are exponential or linear with age.)
Second, try everything. We have a whole article on this that spells it out better than I can. And I'll be the first to admit I haven't dove into its methods full force so I can't vouch for them. But, basically, expose yourself to the world. Not in any mean or gross sense, but as a human being, gathering experience. Go to art classes, go to yoga classes, go to MIRI classes, take karate, learn to dance, learn to sing, play an instrument, learn maths, learn history, go to LW meetups.
Of course, you will be limited, and should be limited, by circumstances. You aren't a brain with infinite capacity yet, so you can't literally do everything. So, focus on a few things at a time. Set a schedule to try out new activities while continuing old, beneficial ones. For example, you might have three days for working out, two days for learning programming (as a hobby), one for online studying, one for social networking. Replace with whatever activities most interest or most benefit you (and don't be afraid of overlap if you want to double up). I live in a place with very little stimulus, so I double up on audiobooks and exercise and use recreational times (gaming or working out) to listen to audiobooks or expose myself to new music. The point is to jump in with both feet and do whatever you do well.
Ultimately, your youth gives you two real things: health (presumably) and energy. Now, I have seen 60 year old men in better shape and with more pep than me (marathon runners!), but on average, your health and energy will come easier to you now than later. Use it.
Replies from: Lumifer
comment by [deleted] · 2014-05-06T13:37:02.798Z · LW(p) · GW(p)
Is anyone familiar with any effective-altruist work on pushing humanity towards becoming a spacefaring species? Seems relevant given the likely difference between a civilization that develops it vs. one that doesn't.
Replies from: ChristianKl, Izeinwinter↑ comment by ChristianKl · 2014-05-07T15:46:05.661Z · LW(p) · GW(p)
I think it might even have negative return. If you do PR in that regard you are going to encourage misallocation of NASA funds. NASA should spend more resources on tracking near-earth objects and less on PR moves like trying to put a man on Mars. Understanding the climate of our own planet better is also a useful target for NASA spending.
Building human civilisation in Alaska is much easier than doing it on Mars. We don't even get things right in Africa, where there is fertile ground on which plants grow.
Colonizing Mars will need much better biotech and smarter robots than we have at the moment.
↑ comment by Izeinwinter · 2014-05-07T06:46:44.432Z · LW(p) · GW(p)
.. the obvious E-A answer to this question is "Don't do any pushing". - Increased space presence is a nigh-certain consequence of a more generally prospering and peaceful world, and diverting resources towards pushing this above trend is going to have just awful returns in utility per dollar. Space will happen of its own accord as people find useful things to do there (I figure telescopes will be the main thing, tbh.) but beyond that? People are already mapping the asteroid trajectories, which is the only issue really directly relevant to E-A work. If the world dies, and a remnant lives on in tincans in space, that is... not actually very helpful.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2014-05-07T09:24:47.385Z · LW(p) · GW(p)
If the world dies, and a remnant lives on in tincans in space, that is.. not actually very helpful.
But arguably still vastly better than everybody dying, particularly if that tincan civilization can eventually rebuild and recolonize habitable planets.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-05-07T10:03:27.654Z · LW(p) · GW(p)
Nope. The species has no utility because it has no identity. It would be better by precisely the amount of the sum utility of the very few, very traumatized survivors. So it is a paltry payout on an expensive insurance policy against a hopefully unlikely eventuality with quite a low chance of paying out.
I figure most threats that take out planet earth would end any plausible space presence as well, as they suddenly find that the control chips for the water purifiers need replacing and were all made in the Republic of Korea, or similar outcomes.
.. Look, space industrialization and exploration are cool, useful and interesting. I would really like to see solar gravity lens telescopes, and there may indeed be industrial processes that are most sensibly moved offworld where accidents can't impact any ecosystems. But the whole "Eggs in basket" argument? It is just not very good, and the things worth doing in space cannot meaningfully be considered charity unless you so class the entire scientific endeavor, so this is just one of the many aspects of human existence which fall outside the purview of EA.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-07T12:36:27.900Z · LW(p) · GW(p)
The distinction between space presence (hard enough already) and space presence independent of earth (much harder) is worth making.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2014-05-07T19:47:18.773Z · LW(p) · GW(p)
Just for amusement value: Consider what it would take for a space presence to survive independently any threat likely to clean out all 7 billion people on earth. - There are no suitable biospheres anywhere in reach, so such an outpost would need to build and maintain one with no external resupplies. It also needs to be distant enough, and isolated enough from earth, that contagions and conflicts are unlikely to involve them. So, it needs to be a complete industrial society, and it is obligated to be a hermit kingdom/republic. The complete industrial society is kind of a killer problem, because supporting that needs quite a substantial population, so it requires something on the order of moving dozens of millions of people to the moon system of Uranus. So, yeah, not seeing the point.
comment by pewpewlasergun · 2014-05-07T05:28:43.838Z · LW(p) · GW(p)
So I often find that interesting people live near me. Anyone have tips on asking random people to meet up? Ask them for coffee? I suppose a short email is better than a long one, which may come off creepy? Anyone have friends they met via random emails?
Replies from: Alicorn↑ comment by Alicorn · 2014-05-07T06:29:38.617Z · LW(p) · GW(p)
I have a lot of friends who I met through fan mail - people contacting me to tell me they like something about my online footprint. My recommendation is to establish online correspondence for a while, then when they don't send "leave me alone" signals like terse or perfunctory responses you can ask to hang out.
Replies from: pewpewlasergun↑ comment by pewpewlasergun · 2014-05-07T06:43:26.089Z · LW(p) · GW(p)
Thanks.
After a bit of random googling it seems there are a lot of results about 'saying no to people who want to get coffee/pick your brain' so it seems like reasonably successful people with an internet presence get a lot of these requests.
Replies from: Alicorn↑ comment by Alicorn · 2014-05-07T07:13:02.302Z · LW(p) · GW(p)
I imagine different successful people with internet presences have different intersections of request quantity and request tolerance. I don't get people paying attention to me and wanting to hang out with me as often as I'd like yet so that's probably biasing my recommendations.
comment by Ben Pace (Benito) · 2014-05-07T12:20:58.999Z · LW(p) · GW(p)
Dear LW,
I've just this morning been offered funding for a research placement in a British University this summer (I'm 17). I have to contact researchers myself, and it generally has to be in a STEM subject area. I am looking very generally for any recommendations of researchers to contact in areas of Maths, Physics and Computer Science. If you know any people who do research that would be of interest to the average LessWronger, especially in the aforementioned fields, I would appreciate it greatly.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2014-05-07T15:50:39.432Z · LW(p) · GW(p)
Obviously there are hundreds of possibilities, but the Future of Humanity Institute springs to mind.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-07T16:04:57.285Z · LW(p) · GW(p)
I checked them out actually, and it doesn't seem like they normally do that kind of thing. Still, I've sent them an email, and I'll see what they say :)
Added: They've said they're happy I'm interested, but haven't got anything for me at the minute.
comment by palladias · 2014-05-05T20:19:43.591Z · LW(p) · GW(p)
Looks like when my current job ends (May 31), I'll have the summer free before my next one starts (Sept). My June is pretty much booked with a big writing project with a looming deadline, but I get to decide how to fill July and August, and I'd appreciate crowdsourced suggestions.
I'm lucky enough to not need to find alternate work to cover living expenses for those two months, so I'm not particularly in the market for short-term work suggestions. I'll be based out of D.C. during this period. Not super interested in travel. I'm considering some self-study but I'm not planning to become a programmer.
Here are some of the things I currently have in mind (not highly optimized, just the things that occur immediately when I think "What do I want to do this summer?"):
- ASL class at Gallaudet
- Ignatian retreat
- Stepping up my freelance writing work
- Looking into any shop classes for adults
- Sewing, embroidering, soldering a fairly involved Halloween costume
What kinds of things am I not thinking of that might be delightful?
Replies from: Emile, iarwain1, Baughn↑ comment by iarwain1 · 2014-05-07T13:54:48.036Z · LW(p) · GW(p)
Maybe try doing nothing. For some people that would drive them crazy, but for others a month of rest and peacefulness can be life-changing. Try turning off the computer for a month. Take walks. Read a book under a tree. Smell the flowers. Meditate.
Also, I'm not sure if this counts as travel, but Shenandoah is only about 1:30 from DC. Getting a small cabin or a room in a bed and breakfast for a month is not so expensive. Immersing yourself in a more natural, less hectic environment can itself be extremely restorative. And you can even sew / embroider while you're doing it.
Replies from: palladias↑ comment by palladias · 2014-05-07T14:23:46.678Z · LW(p) · GW(p)
I am definitely in the "would drive them crazy" camp. One of the worst vacations I've taken was to St. John with my family. It's a long way to go just to read on the beach rather than read in a park or a library.
I do have Ignatian retreat on my list, though.
↑ comment by Baughn · 2014-05-06T01:10:56.051Z · LW(p) · GW(p)
I don't know what falls under 'freelance writing', but have you considered writing fiction?
It's a huge time-sink even if you're deliberately trying to improve your speed, but the skills are also surprisingly applicable - modelling your characters in your head isn't dramatically different from modelling other real people, even if you ignore the skills that merely fall under "knowing how to write". I've had a great deal of fun with that, lately.
You don't necessarily need to immediately jump into original fiction, either. Fanfiction is often considered "training wheels", but that doesn't just mean it's easier - well, it is, but it's also much easier to tell if you're getting characterisation right when there's the original work to compare to (and rabid fans to do the comparison), while the usual "benefit" of writing fanfiction (not needing to invent your own setting) can be trivially set aside if you feel like it.
Replies from: palladias↑ comment by palladias · 2014-05-06T02:02:09.238Z · LW(p) · GW(p)
I've written fanfiction, but I've only enjoyed writing fiction with a writing partner, as I did for those two stories. I get very very bored writing things that aren't dialogue.
I'm currently at a magazine for a journo internship, and have done some freelance book/theatre reviews for pay.
Replies from: Vulture↑ comment by Vulture · 2014-05-06T18:46:20.143Z · LW(p) · GW(p)
If you're planning to link this account to your real world identity, or already have, you might think twice about linking to those writings. Sorry if this was already obvious and considered.
edit: that said, I'm really enjoying APoF :-)
Replies from: palladias↑ comment by palladias · 2014-05-06T20:25:29.030Z · LW(p) · GW(p)
Glad to hear it! I'm traceable to those writings, but not through easy googling. The nice thing about being a writer with daily blog updates is security through obscurity. It's hard to trawl through to find whatever would be the worst thing ;)
comment by Douglas_Knight · 2014-05-05T15:35:51.251Z · LW(p) · GW(p)
What is the meaning and use of (total) GDP, adjusted PPP?
I cannot think of a single use for it (unlike nominal total GDP or PPP GDP per capita).
Replies from: Lumifer, lmm↑ comment by Lumifer · 2014-05-05T16:06:21.840Z · LW(p) · GW(p)
Well, PPP has meaning only in the context of multiple currencies, so presumably you're trying to get a handle on some country's nominal GDP expressed in a different currency. This means you need a foreign exchange rate, a multiplier to convert units to different units.
At this point things start getting murkier. Sometimes there's a market FX rate. Sometimes there is an official FX rate (and a different black market one). Sometimes there is no reasonable FX rate at all.
The PPP rate is just one of the possibilities. Depending on the circumstances it might be more or less appropriate.
The crude meaning of GDP converted at PPP rate is "how much stuff at local prices does this country produce/consume".
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-05T17:10:33.426Z · LW(p) · GW(p)
PPP has meaning only in the context of multiple currencies
That's not true. One does not use a uniform conversion factor across the Eurozone.
The crude meaning of GDP converted at PPP rate is "how much stuff at local prices does this country produce/consume".
Why would you ever want this?
Replies from: Lumifer↑ comment by Lumifer · 2014-05-05T17:19:07.621Z · LW(p) · GW(p)
That's not true.
It is true. If your nominal GDP of, say, Germany is different from the PPP-based GDP then you're measuring the same GDP in two different units. One of them is the standard Euro, what is the other unit?
Why would you ever want this?
I don't understand the question. For example, I find it useful to know that China's GDP using the official rate is very different from the same GDP using the PPP rate. It gives me better understanding of the Chinese economy and its place in the world.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-05T20:44:48.561Z · LW(p) · GW(p)
Perhaps one could talk about the units of PPP by talking about converting Greek GDP from "Greek Euros" to "German Euros." But that doesn't mean that the "Greek Euro" is a different currency.
It is useful to know that the cost of living in Greece is lower than the cost of living in Germany. That is, it is useful to know the PPP conversion factor. It is useful to think about GDP per capita in both nominal and PPP terms, to understand what life is like for the average individual. But what use is total GDP in PPP terms? Merely knowing that it differs from nominal GDP is a roundabout way of finding the PPP conversion factor. That's like saying that BMI is useful because, with height, it allows me to compute weight.
Replies from: Lumifer↑ comment by lmm · 2014-05-07T02:35:55.863Z · LW(p) · GW(p)
What's the use of nominal total GDP? I would expect the argument for PPP total GDP to be that it's a more accurate measure of the same thing, but I'm not actually seeing what the use is.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T04:02:38.295Z · LW(p) · GW(p)
Yes, total GDP is problematic. For utilitarian purposes PPP is better, but why do it one country at a time? (aside from making utility linear in money)
My comment was triggered by the announcement that China is now "the biggest economy" in PPP terms. One thing "the biggest economy" does is set prices. China can afford to buy more steel than India, so much that it drives the world price of steel. But the fact that there is a world price is closely related to the fact China pays for steel in dollars, not Chungking haircuts or Szechuan real estate. So that's what total nominal GDP is good for.
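The nominal-vs-PPP distinction being argued over in this thread can be sketched numerically. This is a minimal illustration with made-up figures chosen only for readability, not real statistics for any country:

```python
# Illustrative, made-up figures -- not real statistics for any country.
# Nominal GDP converts local-currency output at the market exchange rate;
# PPP GDP converts at a rate that equalizes the price of a common basket.

gdp_local = 10_000e9   # GDP in local currency units
market_fx = 6.2        # local units per dollar (market rate)
ppp_fx = 3.5           # local units per dollar (PPP rate)
population = 1.35e9

gdp_nominal = gdp_local / market_fx
gdp_ppp = gdp_local / ppp_fx

print(f"Nominal GDP:    ${gdp_nominal / 1e12:.2f} trillion")
print(f"PPP GDP:        ${gdp_ppp / 1e12:.2f} trillion")
print(f"PPP per capita: ${gdp_ppp / population:,.0f}")

# The ratio of the two totals is just market_fx / ppp_fx, the price-level
# difference -- which is Douglas_Knight's point: total PPP GDP alongside
# nominal GDP is a roundabout way of stating the PPP conversion factor.
price_level_ratio = gdp_ppp / gdp_nominal
```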
comment by Raythen · 2014-05-07T17:50:05.183Z · LW(p) · GW(p)
Hi, I wonder how you would use your rationality skills to solve this problem.
I'm very sensitive to cold and have been for at least 2-3 years. (I'm a 25 year old male). This is manageable with (really) warm clothes, but sometimes very inconvenient.
I've seen multiple doctors about this, and the response I've got was basically "our tests indicate there's nothing wrong with you, so there's nothing I can do". I've given multiple blood samples, and all the things that were tested are within normal range (well, my thrombocyte count is a bit low. Doubt it's related to this).
I'm slightly underweight, and have a history of fatigue and depression.
I'm looking for both practical advice and general rationality advice on how to deal with a confusing health problem.
Replies from: CAE_Jones, ChristianKl, mare-of-night, Richard_Kennaway, ChristianKl, Raythen, ChristianKl, moridinamael, Lumifer↑ comment by CAE_Jones · 2014-05-07T18:29:40.243Z · LW(p) · GW(p)
I'm in a similar situation, and am leaning toward it being a circulation issue. Would you happen to know what your last blood pressure measurements were?
My previous lead candidate was proto-diabetes, but the most recent tests suggest otherwise. The only comment made about my blood pressure was by the trainee EMT, saying "I wish my blood pressure was that low!". I've been suspicious that the safe range for blood pressure might be shifted a bit too far downward, since most people suffer from conditions related to high blood pressure, but I'll need to find again the evidence that pushed me in that direction.
Anyway, my current strategy is to try and get more/better exercise, fresh air and sunlight. Those are good ideas in general, and should have an impact if it's circulation-related. It's still early, I'm still struggling to get good exercise, and I didn't think to try and quantify changes until... just now. So right now, this solution is experimental on my end.
Replies from: Raythen↑ comment by ChristianKl · 2014-05-11T15:20:07.965Z · LW(p) · GW(p)
There are a bunch of ways temperature is regulated. Blood circulation is one of the main ways the body regulates the temperature of the extremities. Blood moves very fast through the body and therefore has a relatively constant temperature. The blood in your hand is warmer than the rest of the hand.
If there's more blood in the capillaries in your hand, then your hand gets warmer. Low blood pressure in the arterioles means that less blood flows into the capillaries. If muscle tissue is tense, that also usually makes it harder for blood to flow into it.
I personally used to often feel cold five years ago but solved the issue for myself. There are days where something emotional is going on and my thermoregulation is messed up, but that's not my default. I did a bunch of different things, so I can't give you a single solution.
Firstly, an easy suggestion: drink a lot. Drinking can increase blood pressure. There were weeks where I needed to drink 4-5 liters a day for my body to work at its peak. I would recommend you try drinking 4 liters a day for a week and see whether that changes how you feel.
One of the main things I personally did was dance a lot of Salsa. Salsa gave me a new relationship with my body. Part of Salsa is also having body contact, and that allows me to feel which parts of the body of the woman I'm dancing with are warm and relaxed and which aren't.
Good Salsa dancers are usually well circulated. On the other hand, I do know women who danced for years and didn't solve issues like that in their bodies. Knowing dancing patterns doesn't seem to be enough. In the Salsa sphere, body movement classes seem to produce such results, but I don't know whether they are optimal.
I do personally think there's a case that 5 Rhythms or Contact Improvisation is better for your purpose than Salsa. But to be open, the theory on which I base that recommendation is not academic in origin.
Another thing that I believe, but which does not come from an academic source, is that the problem is likely emotional in origin. I consider it a self-defense mechanism of the body. If such mechanisms get removed, I consider it likely that emotions will come up that have to be dealt with. Based on what you wrote about severe trauma, I would recommend you get professional help.
Replies from: Raythen, Raythen↑ comment by Raythen · 2014-05-22T14:39:20.044Z · LW(p) · GW(p)
I appreciate you bringing attention to my blood circulation. My hands and feet rarely freeze (I do wear warm socks and gloves in winter, though). My ears are very sensitive to cold, though, which could well be a symptom of poor circulation.
↑ comment by Raythen · 2014-05-22T14:26:56.698Z · LW(p) · GW(p)
I personally used to often feel cold five years ago but solved the issue for myself. There are days where something emotional is going on and my thermoregulation is messed up but that's not my default.
Another thing that I believe but which does not come from an academic source is that the problem is likely emotional in origin. I consider it to be a self defense mechanism of the body. If they get removed I consider it likely that emotions will come up and that have to be dealt with. Based on what you wrote about severe trauma, I would recommend you to have professional help.
The link between emotions and blood pressure as well as thermoregulation you describe sounds a lot like vasovagal response
In that case I doubt that is what I'm experiencing, since I haven't noticed ANY correlation between my day-to-day emotional state, and how hot or cold I'm feeling.
So unless there's a possibility of very long-term correlations, on the scale of months/years (which doesn't seem to be what you're describing), I doubt this particular mechanism is causing my cold sensitivity.
I am receiving therapy. Thanks for the suggestion.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-28T05:08:41.337Z · LW(p) · GW(p)
The link between emotions and blood pressure as well as thermoregulation you describe sounds a lot like vasovagal response
I think the fact that vasovagal responses exist illustrates one well-documented instance where there's interplay between those forces.
In that case I doubt that is what I'm experiencing, since I haven't noticed ANY correlation between my day-to-day emotional state, and how hot or cold I'm feeling.
I'm talking about repressing certain things for longer periods of time, not something where you repress your trauma one day and don't do it the next. You can make the change in a single day, even in a minute, but that's not what happens most of the time.
↑ comment by mare-of-night · 2014-05-08T13:47:38.779Z · LW(p) · GW(p)
My first tactic with confusing health problems is adjusting my diet, but I seem to be more affected by diet than the typical person, so your mileage may vary. Taking a very complete multivitamin for a few days and seeing if you feel any different is an easy way to check for nutrition deficiencies, if your blood tests didn't check for that (or only checked for a few usual suspects). If you do feel different, then you at least know you were deficient in something. You could also do an elimination diet for the most common food allergies, but that takes a lot of effort, so it might not be worth it if you and your family don't have a history of food issues.
If you're more sensitive to cold at some times than others, try to notice the fluctuation and see if it correlates with anything (especially stress, based on ChristianKl's comment). Maybe try writing down how cold you felt and what you did that day? (I usually don't write this sort of thing down, even though I know I should.)
Replies from: Raythen↑ comment by Raythen · 2014-05-09T14:36:00.328Z · LW(p) · GW(p)
Interesting perspective, thanks.
I am taking vitamins and have been for some time.
My diet has had a random drift over time due to practical concerns, taste changing, etc., and random diet adjustments don't seem to have a noticeable effect. There might be some specific nutritional strategies that would help - I don't have enough information to choose one, though.
More data and more detailed observations seem like a good idea. There might have been some fluctuations, but I'm not noticing any obvious correlations (besides, you know, exposure to cold temperatures).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-11T17:59:38.748Z · LW(p) · GW(p)
This is a long shot, but is there a chance you're eating less than you need?
Replies from: Raythen↑ comment by Raythen · 2014-05-14T17:03:40.291Z · LW(p) · GW(p)
It's possible. I don't know. I eat when I'm hungry, which is quite regularly (once per 3-4 hours, maybe 5), so I'm definitely not starving myself. And if I try to eat more, I feel unpleasantly full, and I feel less hungry later - so I don't think it makes a difference.
I'm not sure how to check whether I'm eating enough save for counting calories (which seems complicated and unreliable).
I'm hoping I'll gain some muscle mass by exercise, both for its own sake and because weight gain by other means doesn't seem to be working for me (I suspect I naturally have a slim build).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-16T14:23:43.768Z · LW(p) · GW(p)
At this point, I'd say it's unlikely that you're eating so little as to lower your temperature.
If you still want to test the hypothesis without counting calories, you could try a higher fat diet and see what happens.
Does your temperature ever get higher or lower?
↑ comment by Richard_Kennaway · 2014-05-07T20:51:51.086Z · LW(p) · GW(p)
Long underwear. Even if your legs don't specifically feel cold, adding more insulation there helps the whole body. Your legs are a pair of huge heat exchangers, and there's a limit to how useful it is to pile more layers on your torso if all your body heat can still leak out through your legs.
I've had something like that for the last 35 years or so. I just live with it. I suspect a connection with a serious illness I had back then, but I've never bothered to raise the matter with a doctor, because it doesn't seem like the sort of thing that a doctor is likely to have any remedy for. I am also slightly built (BMI 19 to 20) and have occasional attacks of great fatigue, but not depression.
Thick woolly hats are good too. A lot of heat is lost through the head.
↑ comment by ChristianKl · 2014-05-07T21:47:20.112Z · LW(p) · GW(p)
Did something happen 3 years ago? Maybe a major emotional trauma?
Replies from: Raythen↑ comment by Raythen · 2014-05-08T07:56:43.835Z · LW(p) · GW(p)
I've had a really bad childhood and experienced a lot of severe emotional trauma throughout my life since then, including at that time.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-08T13:07:58.205Z · LW(p) · GW(p)
I do think that what you have can be caused by severe emotional trauma. If that's the case it basically explains why the tests that doctors run come up empty.
There are defense mechanisms that the body can use in cases of trauma that lead to reduced blood circulation which in turn messes up temperature regulation and shows itself as low blood pressure.
That means that the first step would be to move to a safe environment where you aren't constantly exposed to severe emotional trauma. Did you already make that step?
Replies from: Raythen, Eugine_Nier↑ comment by Eugine_Nier · 2014-05-11T22:21:38.344Z · LW(p) · GW(p)
There are defense mechanisms that the body can use in cases of trauma that lead to reduced blood circulation which in turn messes up temperature regulation and shows itself as low blood pressure.
And these mechanisms don't involve anything that would show up on medical tests?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-11T23:41:17.974Z · LW(p) · GW(p)
His low blood pressure does show up in medical tests. The question of why the body sets blood pressure at a certain point is largely unsolved.
In our academic system mainstream medicine doesn't investigate psychological issues and psychology generally doesn't investigate physiological issues like body temperature.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-05-20T03:47:17.980Z · LW(p) · GW(p)
His low blood pressure does show up in medical tests.
Yet, for some reason the intervening mechanisms don't?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-27T07:32:53.332Z · LW(p) · GW(p)
That would require running studies with big enough sample sizes to gather proxies for those mechanisms. There's no money to run those studies.
Things that happen through complex patterns of neuron interactions are also not easy to study.
↑ comment by Raythen · 2014-05-22T11:49:46.400Z · LW(p) · GW(p)
Thank you for all the comments and suggestions. :)
At this point I have made an appointment to have my hormone levels checked (as suggested by Lumifer and NancyLebovitz).
I also think my blood pressure and circulation are worth looking into.
I'm still processing a lot of the suggestions and ideas, and might make another thread on this in the future.
↑ comment by moridinamael · 2014-05-07T19:35:34.626Z · LW(p) · GW(p)
I'm similar. I have found scarves to be both stylish and practical. The neck area is highly sensitive to cold. I've taken to toting a scarf if I am going to bring a jacket.
↑ comment by Lumifer · 2014-05-07T18:37:24.123Z · LW(p) · GW(p)
First question: what's your blood pressure?
Second question: did you do a thyroid panel and what did it show?
Third question: did you measure your body temperature in controlled settings (e.g. first thing upon waking up before getting out of bed)?
Common causes of sensitivity to cold are low blood pressure and hypothyroidism.
Replies from: Raythen↑ comment by Raythen · 2014-05-07T20:12:31.684Z · LW(p) · GW(p)
90/60 mmHg according to what a doctor told me during a measurement a month ago (though my journal says 98/60 for some reason). 105/60 in another measurement a week before that.
Thyroid panel:
P-TSH: 1.5 mIE/L (0.3-4.2)
P-T4, free: 15 pmol/L (12-22)
P-T3, free: 5.2 pmol/L (3.1-6.8)
S-Ak (IgG) TPO: 8 kIE/L (<34)
The last one is TPO antibodies. The parentheses are the reference ranges at my lab.
All values are within what is considered normal range. I've also had the thyroid physically examined (through palpation) and it appears there are no abnormalities (it's not swollen or enlarged).
I have not measured my body temperature.
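For what it's worth, the panel values above can be checked mechanically against the quoted reference ranges. This is a throwaway sketch; the names, values, and ranges are just transcribed from the comment, and the "<34" range is treated as 0-34:

```python
# Thyroid panel values and lab reference ranges as quoted above.
# Each entry: (measured value, low end of range, high end of range).
panel = {
    "P-TSH (mIE/L)":      (1.5, 0.3, 4.2),
    "P-T4 free (pmol/L)": (15,  12,  22),
    "P-T3 free (pmol/L)": (5.2, 3.1, 6.8),
    "TPO ab (kIE/L)":     (8,   0,   34),   # quoted range is "<34"
}

def in_range(value, low, high):
    """True if the measured value falls inside the reference range."""
    return low <= value <= high

for name, (value, low, high) in panel.items():
    status = "normal" if in_range(value, low, high) else "OUT OF RANGE"
    print(f"{name}: {value} [{low}-{high}] -> {status}")
```

Every value lands inside its range, consistent with the "all normal" reading above.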
Replies from: Lumifer↑ comment by Lumifer · 2014-05-08T14:45:19.195Z · LW(p) · GW(p)
Your systolic is low, but I'm sure you're well aware of that.
The thyroid panel looks normal, but there exists a bunch of people (including a few doctors, one of whom, I believe, wrote a book) who think that hypothyroidism is seriously underdiagnosed and that it will not necessarily show up in the TSH/T3/T4 tests. Google it up. I have no opinion on their claims.
There is also, of course, the non-answer that your thermoregulation set point just happens to be very low :-/
Replies from: Raythen↑ comment by Raythen · 2014-05-08T20:56:28.516Z · LW(p) · GW(p)
I doubt it's a "thermoregulation set point" issue, since I haven't always felt this way.
Thanks for pointing out the blood pressure thing. I hadn't considered it might be related to cold sensitivity.
I have considered it might be a thyroid issue, and I am familiar with the controversy around thyroid disease. Not completely trusting all the alternative claims - but I think there's enough evidence to believe something might be going on. I think I might try to get a prescription for thyroid hormone medication, and see if it improves my condition. I'll probably try other options first, since there are potential side effects.
Replies from: Lumifer, NancyLebovitz↑ comment by Lumifer · 2014-05-09T04:50:07.402Z · LW(p) · GW(p)
If you're male you might also want to check your testosterone levels. And if your doctors and insurance are amenable, run a thorough hormones check in general.
Replies from: Raythen↑ comment by Raythen · 2014-05-09T14:02:01.981Z · LW(p) · GW(p)
Thanks, these seem like good suggestions.
I've made a list of what I'll try to have checked. Any comments?
DHEA-S
DHT
Estradiol
Estrone
PSA
Pregnenolone
Total and Free Testosterone
Sex Hormone Binding Globulin (SHBG)
Insulin like Growth factor (IGF-1)
↑ comment by NancyLebovitz · 2014-05-09T03:27:03.568Z · LW(p) · GW(p)
Your thermoregulation set point could have moved. In fact, I'd say that's exactly what happened, since I get the impression your temperature is fairly stable. The problem is that it's too low.
Very tentatively-- maybe you should get your hormones checked. This is based on a weak hypothesis that if menopause can send body temperature too high erratically, maybe there's a hormone problem which is keeping yours too low.
Replies from: Raythen
comment by Gunnar_Zarncke · 2014-05-06T21:38:28.590Z · LW(p) · GW(p)
Effective parenting advice: babies' names affect life outcomes:
Names Race and Economists on Baby Name Wizard.
I'd guess that means choosing names that are
used in high status circles (sample celebrity babies' names)
probably matching your ethnicity
sufficiently popular; best just starting to climb in popularity (not peaking or declining)
or alternatively timeless (e.g. old Roman or biblical names)
Also choose multiple names because
it allows you to later choose the best fit
it allows for easier compromises with your spouse
it allows you to satisfy more relatives
a high number of names indicates higher status in itself
↑ comment by ChristianKl · 2014-05-07T15:27:40.524Z · LW(p) · GW(p)
used in high status circles (sample celebrity babies names)
You don't want Hollywood celebrities. Low status people name their kids after Hollywood celebrities. In Germany, giving kids Anglo-Saxon names is a sign of low status. http://www.sueddeutsche.de/leben/studie-kindernamen-und-vorurteile-von-wegen-schall-und-rauch-1.44178
According to that article good names for German children that make teachers think the child is high performing are: "Charlotte, Sophie, Marie, Hannah, Alexander, Maximilian, Simon, Lukas and Jakob". On the other hand bad names are: "Kevin, Chantal, Mandy, Justin and Angelina".
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T15:51:48.500Z · LW(p) · GW(p)
Gunnar said to name children after the children of celebrities, not directly after celebrities. But certainly using foreign celebrities is a very bad idea.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-05-07T16:27:31.454Z · LW(p) · GW(p)
Gunnar said to name children after the children of celebrities, not directly after celebrities.
The kind of person who follows magazines that tell them about the names of celebrities still isn't high status.
It's been a while since I researched the topic in more detail. Artists don't wear suits to appear high status, and they don't give their children high status names. Royals and aristocrats might be a valid choice if you live in a country that has them.
In the US, the way that people who go to Harvard and Yale name their children is what counts as a high status signal.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-05-07T22:11:32.280Z · LW(p) · GW(p)
But certainly using foreign celebrities is a very bad idea.
Yes. Use the children's names of your local high status people.
In the US, the way that people who go to Harvard and Yale name their children is what counts as a high status signal.
I second that. "Celebrity" was misleading; I wanted to give a concrete example, since "high status people" is too abstract.
↑ comment by Gunnar_Zarncke · 2014-05-28T20:43:41.417Z · LW(p) · GW(p)
Nice infographic for naming children:
http://www.informationisbeautifulawards.com/2013-longlist/infographic/
comment by [deleted] · 2014-05-06T00:22:55.332Z · LW(p) · GW(p)
Anyone else doing the course Functional Programming Principles in Scala? It started last week, but there should still be time to join and get the first assignment done.
Replies from: Markas, Viliam_Bur↑ comment by Viliam_Bur · 2014-05-06T20:59:38.101Z · LW(p) · GW(p)
OK, I'll try. Signed in, but will look at it deeper on Thursday.
comment by NancyLebovitz · 2014-05-07T15:29:12.873Z · LW(p) · GW(p)
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T15:58:54.740Z · LW(p) · GW(p)
A better example of selective quotation is that Slate article.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-07T16:24:19.132Z · LW(p) · GW(p)
Specifically?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T16:38:39.062Z · LW(p) · GW(p)
All of the quotes are chosen for the purpose of deceit. The whole article is nonsense.
Replies from: gwern↑ comment by gwern · 2014-05-07T18:58:54.831Z · LW(p) · GW(p)
That's a reiteration of your original claim and in no way a reply to Nancy's question.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T20:31:49.382Z · LW(p) · GW(p)
I thought Nancy was asking me to identify a specific article that was selectively quoted. I did: all of them.
I'm just saying that the headline is a lie, supported by selective quotation: this is not a finding of sexual similarity and none of the popular coverage claims that it is a study of sexual difference. I'm not saying anything fancy about subtle connotations that are smuggled in and not explicitly identified. This is a very simple claim and anyone interested in more detail should read the popular articles themselves. What else could I have possibly meant?
Replies from: gjm, gwern↑ comment by gjm · 2014-05-08T16:17:24.867Z · LW(p) · GW(p)
OK, so I took a look.
First popular article linked from Slate. Slate says this article trumpets the study as revealing "differences in men’s and women’s bodies, differences found as deep down as the cellular level".
I think the article is nominally about the study, but ends up saying rather little about the study. It quotes an author of the study about the findings of the study. Then it goes on to quote him talking about genes on the Y-chromosome more generally, and has a couple of paragraphs that so far as I can tell are unrelated to the study, about differences in conditions like autism between males and females. Then it quotes "researchers" (doesn't say what researchers or give any context) saying that those differences may reveal "differences ... as deep down as the cellular level". And then it jumps back with a couple of paragraphs about what "the scientists" (now meaning the authors of the study, rather than those unspecified "researchers") plan to do next.
The article doesn't say that the study reveals exciting differences between men and women or that it's evidence of difference. But it does take a study that (as the Slate article says) finds chromosomal similarities between men and women, and then use something like half the space it has to discuss it to talk about how men and women are biologically different because of their different chromosomes. I think the Slate article is at least half right here.
Second popular article linked from Slate. Slate says the article says that the 12 genes looked at by the studies "may represent a fundamental difference in how the cells in men’s and women’s bodies read off the information in their genomes". The article says, near the start, exactly this: "the Y chromosome includes genes required for the general operation of the genome, according to two new surveys of its evolutionary history. These genes may represent a fundamental difference in how the cells in men’s and women’s bodies read off the information in their genomes." Which I think is exactly what the Slate article says it says.
The third article is like the first. It quotes someone -- in this case actually the study author -- saying that genes on the Y chromosome (but, it seems clear, not the one the study found) may represent a fundamental difference in, etc. It doesn't say that the new research has found that. But it leaves readers to draw their own conclusions, and a careless reader could very easily get the wrong impression about which the Slate article complains.
My overall impression is that the Slate article has correctly identified an interesting (and complaint-worthy) phenomenon: anything to do with sex and biology tends to get turned into a story about differences between men and women, perhaps because readers love reading stories about differences between men and women -- but it has overstated its case and suggested that those kinda-misdirected articles are wronger than they actually are. (By claiming that they say sex differences were actually found by the studies they're reporting on, which at least in two of the three cases they don't quite say.)
It seems to me that Douglas needs a lot more evidence than he's given any sign of having, if he's going to claim that "all the quotes are chosen for the purpose of deceit", and I think he's flatly wrong to say that "the whole article is nonsense".
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-08T17:14:13.741Z · LW(p) · GW(p)
If you accept Slate's interpretation of the Nature articles, then the popular coverage isn't very good. But, as I said, the Nature articles aren't a discovery of sexual similarities or differences. The topic is sex differences, but the study says nothing new about how big or small the human difference is.
↑ comment by gwern · 2014-05-07T22:26:18.257Z · LW(p) · GW(p)
I thought Nancy was asking me to identify a specific article that was selectively quoted. I did: all of them. I'm just saying that the headline is a lie, supported by selective quotation: this is not a finding of sexual similarity and none of the popular coverage claims that it is a study of sexual difference.
And again, you reiterate your original claim and respond to neither Nancy nor myself.
Here is an article. You say it is wrong and not only is its thesis wrong, every quote is misleading. When asked for elaboration, you go on saying that. If 'this is a very simple claim', it should be easy to elaborate how the popular articles are correct, the quotations of them misleading, and the revisionist interpretation 'nonsense'.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-07T23:40:03.286Z · LW(p) · GW(p)
Could you be more specific?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-08T00:05:26.376Z · LW(p) · GW(p)
Take at least a few of the quotes and tell us what you think is wrong with them.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-08T00:31:27.474Z · LW(p) · GW(p)
There's nothing wrong with the individual quotes, just that they aren't representative of the article. Have you never heard of selective quotation?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2014-05-08T01:44:59.705Z · LW(p) · GW(p)
Could you give us some better quotes?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2014-05-08T02:07:59.665Z · LW(p) · GW(p)
If I had to choose representative quotes, I'd choose the headlines or first sentences. But what's the point? The only reason we are talking about this is the claim of distortion, not to summarize the science, let alone the science coverage.
comment by iarwain1 · 2014-05-05T13:36:25.017Z · LW(p) · GW(p)
Anybody else taking the Coursera / Johns Hopkins Data Science 1 course?
Replies from: None↑ comment by [deleted] · 2014-05-05T16:04:39.957Z · LW(p) · GW(p)
Yes, I am taking that course - I did the previous version of it, but started late so didn't complete the requirements.
It is a short course focused mainly on getting your development environment set up with GitHub.
Replies from: iarwain1↑ comment by iarwain1 · 2014-05-05T16:17:47.058Z · LW(p) · GW(p)
Are you also taking the R programming course then? Based on your experience, would it be fine to take both courses simultaneously?
Replies from: None↑ comment by [deleted] · 2014-05-06T00:15:59.650Z · LW(p) · GW(p)
I took the R programming course last year - it was a good short course if you already have some programming background, though I wouldn't have wanted to do it as a complete beginner.
It didn't feel like a normal course though - more like one of those week long corporate training modules. Useful, but I imagine doing the complete Data Science specialization in short blocks would leave some people with a patchy result.
comment by [deleted] · 2014-05-18T13:41:26.308Z · LW(p) · GW(p)
Short question: is Newcomb's Problem still considered an open issue, or has the community settled on a definite decision theory that will yield the right answers yet?
Replies from: army1987, ChristianKl↑ comment by A1987dM (army1987) · 2014-05-18T18:38:35.505Z · LW(p) · GW(p)
From the 2013 survey results:
- Don't understand/prefer not to answer: 92, 5.6%
- Not sure: 103, 6.3%
- One box: 1036, 63.3%
- Two box: 119, 7.3%
- Did not answer: 287, 17.5%
↑ comment by [deleted] · 2014-05-18T19:39:07.884Z · LW(p) · GW(p)
Well that's nice, but I had meant: have we come to a consensus on what sort of decision theory will auto-generate the right result, rather than merely writing down the result of the decision theory preinstalled in our brains and calling it correct? Has the "Paradox" part been formally resolved?
Because, you know, I don't want to post about it and then get told my thoughts were already thought five years ago and didn't actually help solve the problem.
Replies from: Douglas_Knight, army1987↑ comment by Douglas_Knight · 2014-05-19T23:20:58.658Z · LW(p) · GW(p)
Incarnations of UDT sufficient for this problem have been made completely formal.
Replies from: None↑ comment by A1987dM (army1987) · 2014-05-19T07:39:09.272Z · LW(p) · GW(p)
Not for any mathematically rigorous value of “what sort”, as far as I can tell.
Replies from: None↑ comment by ChristianKl · 2014-05-27T12:22:29.441Z · LW(p) · GW(p)
A good decision theory performs well in many problems, not only in one. Having a decision theory that solves Newcomb's Problem but doesn't perform well on other problems isn't helpful.
There isn't yet an ultimate decision theory that solves everything, so I don't see how individual problems can be declared solved.
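For what it's worth, the expected-value arithmetic behind the one-boxing majority is easy to write down. This is only a sketch of the evidentialist-style calculation, with the predictor accuracy (0.99) and the standard $1,000,000/$1,000 payoffs as assumed parameters; as the comments above note, it doesn't by itself settle which decision theory generates the answer:

```python
# Expected payoff in Newcomb's problem for a predictor with accuracy p.
# One-boxer: takes only the opaque box.  Two-boxer: takes both boxes.
def expected_value(one_box, p=0.99, big=1_000_000, small=1_000):
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * big
    else:
        # With probability p the predictor foresaw two-boxing and left it empty;
        # otherwise the two-boxer gets both the $1M and the $1k.
        return p * small + (1 - p) * (big + small)

print(expected_value(True), expected_value(False))
```

With these payoffs, one-boxing keeps the higher expected value for any predictor accuracy above roughly 50.05%.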
comment by JoshuaFox · 2014-05-06T16:49:48.843Z · LW(p) · GW(p)
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-05-06T18:08:50.515Z · LW(p) · GW(p)
"Wisdom" and "pickup artistry" do not belong in the same sentence.
comment by Lumifer · 2014-05-08T15:10:17.434Z · LW(p) · GW(p)
Soylent is to food what a blow-up doll is to sex.
Replies from: gjm, TylerJay↑ comment by gjm · 2014-05-08T15:51:44.957Z · LW(p) · GW(p)
Isn't it rather the reverse? What in vitro fertilization is to sex, perhaps. It purports to offer the underlying biological benefits, but you have to give up the pleasurable sensations that normally attach to eating food.
I suppose really it's more complicated than that. You have (1) the biological need, which via evolution gives rise to (2) the pleasant sensations, and then cultural processes produce (3) all sorts of other stuff -- culinary traditions, sexual taboos, etc. And also (4) the usual way of satisfying the need and/or getting the pleasant sensations may be time-consuming or expensive or inconvenient.
Soylent removes 2, 3, 4. Blow-up dolls remove 1, 3, 4. IVF removes 2, 3, 4. (You might want to add "mostly" to some of those.) Make of that what you will.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-08T16:01:47.915Z · LW(p) · GW(p)
None of the above removes the biological need. Both Soylent and blow-up dolls satisfy the biological need.
Do note that humans have the biological need to have sex, not to impregnate (or be impregnated). Otherwise birth control would be a non-starter.
Replies from: army1987, gjm↑ comment by A1987dM (army1987) · 2014-05-17T09:00:12.369Z · LW(p) · GW(p)
Do note that humans have the biological need to have sex, not to impregnate (or be impregnated).
IIRC, stereotype has it that some childless women above a certain age do have such a biological need, popularly known as "hearing one's biological clock ticking" or something like that.
↑ comment by gjm · 2014-05-08T16:27:04.201Z · LW(p) · GW(p)
The "biological need" I have in mind is (obviously?) nutrition for eating, and reproduction for sex. Of course it's an individual need for eating, and a species-level (or gene-level) need for sex.
The biological need to have sex (as opposed to, say, "to reproduce") is parallel to the biological need to eat (as opposed to, say, "to be nourished"). Soylent doesn't do anything for that.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-08T17:18:26.700Z · LW(p) · GW(p)
The need for nutrition or reproduction exists only in the outside view.
From the point of the inside view, however, there is the need to eat things which will satisfy hunger and produce a feeling of satiation. There is no hardwired instinct for nutrition.
In the same way, from the inside view, there is the need to have sex and the impulse to care for children. Evolutionarily speaking, that's sufficient, because birth control is a recent invention.
Replies from: gjm, Eugine_Nier↑ comment by gjm · 2014-05-08T17:47:40.545Z · LW(p) · GW(p)
There is, indeed, no hardwired instinct for nutrition (which Soylent provides) but there is a hardwired instinct for eating tasty food (which Soylent doesn't provide).
How does the parallel with blow-up dolls go? There is no hardwired instinct for reproduction (which blow-up dolls don't provide), but there is a hardwired instinct for having orgasms (which blow-up dolls do provide, or at least may help some people with).
Seems almost exactly opposite to me.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-08T18:28:45.554Z · LW(p) · GW(p)
there is a hardwired instinct for eating tasty food
There is a strong hardwired instinct for eating food, tasty or not. As I said, the criterion is that it stops you being hungry and makes you feel satiated. Soylent satisfies this instinct.
Whether Soylent provides adequate nutrition remains to be seen.
Replies from: DanielLC, army1987, army1987↑ comment by DanielLC · 2014-05-12T20:25:07.416Z · LW(p) · GW(p)
Soylent provides ideal amounts of every known nutrient. It's possible that there's some obscure nutrient that people who live solely on Soylent haven't gone without long enough to have noticeable effects. Many people guard against this by having a regular meal once a day.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-12T20:32:18.031Z · LW(p) · GW(p)
Soylent provides ideal amounts of every known nutrient.
coughbullshitcough
Soylent provides the currently available estimates of the needed amounts of known essential nutrients for an average person of an average metabolism with no metabolic quirks.
Replies from: None, DanielLC↑ comment by [deleted] · 2014-05-12T21:11:35.269Z · LW(p) · GW(p)
Grandparent's comment is perhaps optimistic, but yours is downright FUD. The truth lies somewhere in the middle.
Replies from: DanielLC, Lumifer↑ comment by DanielLC · 2014-05-12T22:01:35.628Z · LW(p) · GW(p)
I don't think calling it FUD is a good idea. There is good reason to automatically fear something like this. I just feel like he's not taking into account the extent to which these fears have already been addressed. If he said this when Soylent was first made, he would have been right. It took a few tries to get it right. Even as it is, there is still room for error.
↑ comment by DanielLC · 2014-05-12T21:49:30.418Z · LW(p) · GW(p)
Do you plan your diet using future estimates of the amounts needed? Do you account for your metabolic quirks? Do you even have enough detail in your plans that these would have an effect?
I don't know the details, but I'd bet that in the case of not having good estimates for what's needed, they use the much easier to find amounts for what's normally eaten. Again, do you have a way of doing better?
Replies from: Lumifer↑ comment by Lumifer · 2014-05-13T14:30:31.762Z · LW(p) · GW(p)
Not to restart the Soylent debate again, but yes, I account for metabolic quirks and yes, I think I can (and do) better than Soylent. Soylent is both one-size-fits-all and same-thing-each-day-every-day.
Note that we didn't talk about criteria and likely have different ones in mind. For example, "Will you die if you eat nothing but Soylent for a couple of years?" is a very different question from "Is Soylent optimal food for me (or anyone)?".
Replies from: DanielLC↑ comment by DanielLC · 2014-05-13T20:17:42.409Z · LW(p) · GW(p)
I don't think Soylent is optimal, but I do think it would be very difficult to beat, unless they did leave something out or something like that.
Comparing Soylent to a blow-up doll is at best a huge exaggeration.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-13T20:48:22.156Z · LW(p) · GW(p)
I don't think Soylent is optimal, but I do think it would be very difficult to beat
Depends on what you compare it to. For complete-nutrition liquids it competes against a few expensive hospital products. But for food it competes against things like WholeFoods and farmers' markets -- and loses handily (IMHO).
Replies from: DanielLC↑ comment by DanielLC · 2014-05-13T22:03:37.010Z · LW(p) · GW(p)
It doesn't compete against individual foods. It competes against diets. If you went through the work to make sure the diet was perfectly balanced, then it probably wouldn't be that hard to beat Soylent, although I'm not sure the margin it's possible to beat it by would do much. I don't think things like WholeFoods and farmers' markets would be necessary. On the other hand, if you were trying to make a diet just by looking at a few major nutrients, or worse, whatever you happen to crave, you're not going to beat Soylent, regardless of the quality of food.
Taken as a single meal, it's not hard to beat Soylent. After all, Soylent has a third of your recommended daily value of calories. Given how much Americans tend to eat, food with less than a third would be healthier.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-14T01:09:49.770Z · LW(p) · GW(p)
It doesn't compete against individual foods. It competes against diets.
No, that doesn't seem to be true. Let's take me. I can drink Soylent or I can eat a variety of food, these are the two choices I am facing. There doesn't have to be any "diet" involved.
Soylent doesn't compete against individual foods. It competes against food, in all its variety.
Replies from: DanielLC↑ comment by DanielLC · 2014-05-14T06:30:07.325Z · LW(p) · GW(p)
That's what a diet is, isn't it?
Or were you thinking that I meant "diet" as in reducing your food to lose weight or something like that? I guess that is the more common use. Sorry if I caused a misunderstanding.
In any case, the specific choice of foods you use is more important than the set they're chosen from. It doesn't seem right to say it competes against a farmers' market. It competes against specific selections of food that may be from a farmers' market.
↑ comment by A1987dM (army1987) · 2014-05-09T08:32:48.991Z · LW(p) · GW(p)
We eat food/have sex for (among others) one or more of the following reasons:
1. fitness purposes (very roughly speaking, ensuring survival/reproduction),
2. hedonic purposes (very roughly speaking, relieving hunger/horniness),
3. eudaimonic purposes (very roughly speaking, enjoying great food/sex).
Masturbation (incl. using dolls) only achieves 2, in-vitro fertilization only achieves 1 (well, actually 2 too when you gather the sperm to be used), and protected sex achieves 2 and 3 but not 1; with food the difference between 1 and 2 is less clear-cut, but IIUC foods with much more fructose than fibre can provide energy without really satiating you and vice versa, so we can say that eating dessert when you're not really hungry (and not trying to gain weight) achieves 3 but not 2. Soylent achieves 1 and 2, so it's kind-of analogous to masturbating and using the semen for IVF.
But analogies are like ropes: if you pull them too far they will break down.
Replies from: Lumifer↑ comment by Lumifer · 2014-05-09T14:50:08.261Z · LW(p) · GW(p)
Oh dear. It's funny how an inferential distance can pop up in the simplest things.
OK, let's me get explicit then.
The main parallel between Soylent and blow-up dolls lies within the concept of impoverished experience.
Food and sex have the capability of being very rich, deep, complex, engaging, intense experiences. There is potential for much, from simple sensual pleasures to complicated philosophies. It seems a waste to give up on such richness in favor of satisfying only the lowest, crudest demands of your body so that it would just shut up and go away.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-05-11T08:35:03.127Z · LW(p) · GW(p)
OK, thanks, I get your point now.
On the other hand, I get the impression that Soylent is mainly intended to substitute junk food, rather than gourmet meals, so, hoping this rope doesn't snap if I pull it this far... Are blow-up dolls better or worse than low-end prostitutes? Meh. What is it to me? De gustibus non est disputandum. Let the market decide! (Of course we don't know the market will locate the optimal result, because imperfect information/externalities/irrationality/etc., but if anything I'd expect these to favour the junk food.)
↑ comment by A1987dM (army1987) · 2014-05-09T08:17:57.936Z · LW(p) · GW(p)
There is a strong hardwired instinct for eating food, tasty or not. As I said, the criterion is that it stops you being hungry and makes you feel satiated. Soylent satisfies this instinct.
Do you only ever buy the cheapest available food that stops you being hungry and makes you feel satiated? Why or why not?
Granted, some preferences may not be “hardwired”, but it doesn't make them any less real.
Replies from: wedrifid↑ comment by Eugine_Nier · 2014-05-11T22:36:54.396Z · LW(p) · GW(p)
Evolutionarily speaking, that's sufficient, because birth control is a recent invention.
I'm not convinced that's true. I believe something resembling condoms, made of cotton or animal intestine, goes as far back as ancient Egypt.
Replies from: Jayson_Virissimo, None, Lumifer↑ comment by Jayson_Virissimo · 2014-05-12T01:50:36.862Z · LW(p) · GW(p)
Avicenna's medical encyclopedia (available in Europe starting in the High Middle Ages) lists dozens of birth control methods, many of which probably even "worked".
↑ comment by Lumifer · 2014-05-12T02:22:48.895Z · LW(p) · GW(p)
I believe something resembling condoms, made of cotton or animal intestine, goes as far back as ancient Egypt.
Evolutionarily speaking, pharaonic Egypt is recent.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-05-12T11:27:48.680Z · LW(p) · GW(p)
If birth control had been as widespread as it is among present-day non-religious WEIRD people, the time from ancient Egypt to today would have been more than enough.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-05-20T03:44:57.152Z · LW(p) · GW(p)
Yes, it's also interesting to look at the reasons why it wasn't widespread for much of the time in question.
My guess is that memetic evolution suppresses birth control faster than genetic evolution can adapt to it. Periodically we get outbreaks, like present-day non-religious WEIRD culture, where the suppressing memes collapse due to events in the larger memetic ecosystem.
↑ comment by TylerJay · 2014-05-17T00:16:39.958Z · LW(p) · GW(p)
Your analogy actually seems more plausible to me than gjm's. With a blowup doll and with Soylent, you get a less pleasurable version of the action (sex and eating), while also fulfilling your needs/impulses. People feel more of a "need" to have sex than they feel a "need" to procreate.
Even if gjm's were better, I've never seen someone receive 5 downvotes for an analogy that they think isn't quite as good as another, so it seems more likely that you've been downvoted because people who like Soylent didn't like you talking mean about it.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-05-17T22:58:41.251Z · LW(p) · GW(p)
Even if gjm's were better, I've never seen someone receive 5 downvotes for an analogy that they think isn't quite as good as another, so it seems more likely that you've been downvoted because people who like Soylent didn't like you talking mean about it.
Are you implying that's a bad thing? I didn't downvote Lumifer's comment, but if someone thought that a comment amounting to little more than ‘boo $thing!’ doesn't belong on LW, even in the Open Thread, even if it happens to be denotationally correct (e.g. “The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve”), I could see where they're coming from.
Replies from: TylerJay↑ comment by TylerJay · 2014-05-18T20:15:05.296Z · LW(p) · GW(p)
Are you implying that's a bad thing?
No, not necessarily; that's a good point. I just thought it was interesting that all of the subsequent discussion centered around the merits of the analogies, but it seems that what most people really cared about was the pro- or anti-Soylent positions.