Posts

LW Biology 101 Introduction: Constraining Anticipation 2011-05-25T00:32:25.088Z
Seeking suggestions: Less Wrong Biology 101 2011-05-20T15:28:03.283Z

Comments

Comment by virtualAdept on lessannoying.org · 2011-05-27T16:07:59.131Z · LW · GW

Yup, sounds about right. The phrases 'snide intellectualism' and 'ivory tower' are things I've heard more than once. From my significant other, no less. I know his response is an aversion to the site and not to intellectualism in general, or else, well, he wouldn't be my significant other, but it's incredibly frustrating. I try to bring up topics in a general sense instead of 'I read this really great article on Less Wrong...' but it's always difficult to avoid using references from people here if it's a topic that LW deals with often.

I suppose this would be a good point to say I'm interested in advice from anyone who has successfully converted a friend or family member's opinion of the site from knee-jerk negative to neutral or positive, given that I spent most of yesterday fuming about something absolutely ridiculous and insulting that was said in response to me bringing up the topic of cryonics.

Comment by virtualAdept on The 48 Rules of Power; Viable? · 2011-05-27T14:37:32.069Z · LW · GW

Fair. That's how I took it at first, and why I liked it more then.

Comment by virtualAdept on Measuring aversion and habit strength · 2011-05-27T14:35:06.617Z · LW · GW

The injunction to measure aversion strength by its effect on behavior is one I think I will find particularly useful - mainly because I already consider myself good at dealing with aversions that feel strong. If an aversion feels strong, it tends to make me question myself rather pointedly about why I feel that way, whereas those that feel only like a mild preference or a case of 'have better things to do' have not, in the past, set off those alarm bells. I quite enjoyed this post.

Comment by virtualAdept on The 48 Rules of Power; Viable? · 2011-05-27T14:19:17.187Z · LW · GW

Do you really think saying less than necessary is good advice? That one seemed intuitively good to me at first glance, but then I thought about it a bit more. If I seek to communicate clearly, I should definitely say as much as necessary.

Otherwise, I heartily agree with you.

Comment by virtualAdept on lessannoying.org · 2011-05-27T14:13:18.176Z · LW · GW

Very few of my friends will read anything from LW that I link to them, and I suspect that they would find this link absolutely hilarious. I have never managed to get any of them to give a generalized account of exactly what they think is so systematically annoying about LW, though - they call the whole site 'pompous' and stop there.

Comment by virtualAdept on Requesting advice · 2011-05-27T14:02:34.460Z · LW · GW

I have noticed that I become more tense when reading effective arguments for Christianity and more relaxed when reading good arguments against it.

What do you consider an effective argument for Christianity, and what sorts of thoughts do you find yourself thinking when you encounter such an argument? It might be useful to write them down.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-27T12:58:01.044Z · LW · GW

I agree. I didn't actually expect it to get promoted, since it doesn't fit the pattern of things I've seen on the very front. I'll show how new I am here and ask, though - Eliezer's comment read like he had been presented with some expectation that this be promoted. Is that because posts that get upvoted this far typically (or always) are?

Since I didn't ask for promotion, or state that I thought the post deserved it, his comment seemed a bit out of the blue, and it left me (and still leaves me) trying to figure out whether his objection was only to the idea of promotion, or whether he objected to promotion because he thought the post shouldn't be here at all.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-27T05:36:26.432Z · LW · GW

On the title - the idea was, for this post specifically, to sketch the general principles that define both the space of reasonable approaches and likely outcomes in biological problems. I do think I did an underwhelming job demonstrating that link, and if that is what you mean or close to it, then I agree with you and will take it as a reminder to work on cohesion/full clarity of purpose in future posts. (If it's not, I invite further clarification.)

As for whether it's appropriate for LW... well, since I have a fairly good idea of what I'm going to write on the subject in the future, I think it is, because I intend to keep it targeted and relevant to issues the community has interest in - offering either another angle from which to consider them, or more background information from which to evaluate them, or ideally both. As I've said before, I've no desire to write a textbook, and there's plenty of other places on the internet we could go if we wanted to read the equivalent of one.

However, if you don't think that is enough to be relevant here, I would very much like to hear what, if anything, would make such a set of posts relevant to you (not trying to shift the reference frame - I mean relevant to you in the context of LW). The large positive response I received previously and in this post indicates to me that it's worth continuing in some form.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T22:44:52.226Z · LW · GW

Yes! Thank you for linking that thread; I hadn't seen it.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T19:40:55.120Z · LW · GW

To the best of my knowledge - and that deserves a disclaimer, since I'm a grad student in science and not yet completely versed in the legal gymnastics - it is changing, but any loosening of policy restrictions only comes with exceptional evidence that current norms are grossly unnecessary. In a general sense, bioengineering and tech started out immersed in a climate of fear and overblown, Crichton-esque 'what-if' scenarios with little or no basis in fact, and that climate is slowly receding to more informed levels of caution.

Policy also assuredly changes in the other direction as new frontiers are reached, to account for increased abilities of researchers to manipulate these systems.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T13:58:26.065Z · LW · GW

Hah, no, that does sound like a real course title, although usually they call it "cellular engineering" to sucker in more people who would be turned off by an explicit mention of math in the title.

(I kid. Mostly.)

It is only a small subset of what I want to cover, though. I shall continue to think on it.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T12:34:40.672Z · LW · GW

I can't really argue with that. I've been going back and forth with myself over whether I should call it something different. Suggestions?

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T12:33:34.459Z · LW · GW

It's a foundation - it's easiest to illustrate the patterns I'm describing on a molecular/cellular level, but they apply across the board. My current intent for the actual series is to start with a group of posts on molecular/cellular systems, both because a basic understanding of genetics and metabolism is extremely useful to understanding everything else, and because it's the area I'm most familiar with.

However, recognizing that about half the interest expressed in the suggestions thread was for topics above the molecular level, I'm trying to figure out how to do some posts on them earlier without making things disjointed/difficult to follow. I might settle for weaving in short bits about how molecular topics will apply to macroscale ones later.

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T04:16:09.380Z · LW · GW

I'm hoping that I'll be able to keep the posts within the realm of reasonable understanding for most people on this site by focusing on principles, patterns, and analogies to other fields; however, if at any point I'm failing to do so, I will ardently welcome that being pointed out.

The assumptions I made when constructing my tentative post outline were that readers here were likely to have some general scientific background, and at least a high school level of chemistry. I recognize that the latter might not be a good assumption.

(If you, or anyone else has suggestions at any point on how to improve the usefulness of these posts for those without a background in related fields, please let me know!)

Comment by virtualAdept on LW Biology 101 Introduction: Constraining Anticipation · 2011-05-25T02:52:16.700Z · LW · GW

If you mean my opinion on whether it's worth being afraid of - I don't think it is. Any powerful new technology/capability should be implemented with caution and an eye to anticipating risk, but I don't view bioengineering in a different capacity than any other scientific frontier in terms of risk.

On a practical level, the oversight on manipulation of organisms beyond your run-of-the-mill, single-celled lab workhorses (bacteria, yeast) is massive. In the not-too-distant past, it was an uphill climb just to be able to do genetic engineering research at all.

I got a lot of questions about 'bacteria FOOM,' if you will, around the time the synthetic bacterium paper came out. The short version of my answer then is worth repeating - if we want to make super-germs or other nasty things, nature/Azathoth does it quite well already (ebola, smallpox, plague, HIV...). Beyond that, this sort of research is exceptionally time- and resource-consuming; the funding bottleneck reduces the chances of the lone mad scientist creating a monster essentially to nil. Beyond even that, putting some DNA in a cell is not hard, but designing an idealized, intelligent organism on the level of strong AI is at least as hard as just designing the AI.

So my stance is one of.... let's call it exuberant caution. Or possibly cautious exuberance. Probably both.

Comment by virtualAdept on On the Anthropic Trilemma · 2011-05-24T21:13:23.313Z · LW · GW

Ahh, that makes sense. Thank you.

Comment by virtualAdept on On the Anthropic Trilemma · 2011-05-24T20:42:38.815Z · LW · GW

I think I'm hung up on the lottery example in Eliezer's original post - what is meant by a quantum lottery? He said 'every ticket wins somewhere' - does that mean that every ticket wins in some future timeline (such that if you could split yourself and populate multiple future timelines, you could increase your probability of winning)? If not, what does it mean? Lacking some special provision for the ticket, the outcome is determined by the ticket you bought before you queued up the split, rather than the individual probability of winning.

If anyone could clarify this, I'd be grateful.

Comment by virtualAdept on Seeking suggestions: Less Wrong Biology 101 · 2011-05-20T19:44:15.263Z · LW · GW

What constitutes 'more?' I ask because it seems to be a fairly frequent topic on the site (people trying to do less of it), and I don't want to write a primer post that ends up being rehash for 90% of readers.

Comment by virtualAdept on Seeking suggestions: Less Wrong Biology 101 · 2011-05-20T16:54:23.667Z · LW · GW

Are there a handful of broad principles that constrain anticipation about biological systems or processes that you could highlight?

There are. My thought about the current-event idea for the topics was simply to use those as a jumping-off point to talk about the foundational aspects, since otherwise I'd feel somewhat aimless as to where to start. But the way you phrased that made me think more about how to structure a foundations-only post, and I think I could pull at least some of that off in a way that would be useful.... I shall continue to think on this. Thanks for the suggestion!

Comment by virtualAdept on Seeking suggestions: Less Wrong Biology 101 · 2011-05-20T16:36:26.142Z · LW · GW

Oh, your suggestion makes me grin. Systems biology is essentially the theme of my academic career. I will definitely write about those things; the hard part will probably be shutting up about them.

Comment by virtualAdept on I want to save myself · 2011-05-20T15:14:51.739Z · LW · GW

I'm trying to think how to frame my response to this. I will essentially never say that something shouldn't be studied (unless the act of studying would cause more harm than good to intelligent test subjects), and I don't know with certainty that vitamin C megadoses would not be helpful. I know a lot of reasons why they probably wouldn't be, but that's all I have.

My major problems with the book itself (from what I can see of it online, and what I've read of the studies on the subject) are:

1) It suggests ('Cancer patients deserve to be offered this opportunity') that some wrong is being done to cancer patients by not chasing this idea farther than it has already been chased. This is somewhat sensationalist, and it reveals that the authors either don't know much about, or have chosen to ignore, the cognitive environment of cancer research. Cancer researchers would love to find a silver bullet, or even a reasonably effective bronze one. Quite aside from the good it would do for humanity, it would bring them an awful lot of immediate prestige. Existing biases in the field are therefore very much in favor of pursuing avenues of research that might be a bit of a stretch, if there's any hope they might lead to a breakthrough. This makes blind rejection of potentially useful ideas very uncommon, which strongly downregulates my estimate of the idea's merit.

2) If the authors think that this research has a high potential for payoff, why are they not conducting it themselves instead of imploring others to do so? There is certainly a much higher personal payoff to be had if they were to do it themselves and it actually worked. (I do rather intimately realize that 'doing research oneself' is much easier said than done, and I would therefore accept as an answer that they are making serious, concerted, and persistent efforts to begin the clinical trials they're calling for.)

Comment by virtualAdept on I want to save myself · 2011-05-20T13:42:58.722Z · LW · GW

I'm a graduate student studying metabolomics, and my lab mate is actually doing her thesis research on cancer metabolism. My knowledge base is strong in the biology involved, and weak in the politics of medical studies and treatment preferences, as I have no direct interface with MDs.

Cancer has no 'silver bullet'; as is generally recognized in medicine nowadays, it is actually a collection of diseases with differing causes that respond in different ways to various treatments, because the mechanisms which promote cancer development, growth, and metastasis differ between forms. There is a consistent cycle in cancer research that pays homage to this fact - someone has good lab results with a new drug, everyone gets excited, and then it's found that its utility is extremely limited (or, more often, that using it is impossible due to deleterious side effects). This knowledge gives me a very low probability estimate for the truth (of the magnitude, at least) of these claims.

Another red flag: If this was such a medical breakthrough, it would be backed up by controlled studies, and it would be a paper in Cell or Nature, not a self-published book announcing boldly that it has The Answer.

If you would like more specific information about cancer, I can either answer questions or send links, later, but at the moment, I need to leave my computer.

Comment by virtualAdept on Should I be afraid of GMOs? · 2011-05-19T16:43:13.297Z · LW · GW

Genetic engineering is simply a tool. A particularly malicious individual with an absurd amount of independent resources, ingenuity, and time on eir hands could use it to make something dangerous - but such a comic book supervillain aspirant could be far more effectively evil simply by making a lot of bombs and using them on densely-populated areas.

In the non-comic-book world where we live, genetic engineering is done in a veritable regulatory straitjacket. Development of products for human consumption, and/or of those that will have contact with non-modified organisms, must be exhaustively evaluated for risk potential and its expected benefits justified before the research even gets funded, in most if not all cases.

So, no green goo, and no, you should not be afraid. (Interested in the regulatory practices that keep somewhat bullying-inclined corporations such as Monsanto in check, perhaps, but that has little to do with genetic engineering and much to do with corporate politics and asshattery.)

Comment by virtualAdept on Should I be afraid of GMOs? · 2011-05-19T16:29:27.261Z · LW · GW

Food allergies tend to be a response to one compound, or a very small set of compounds. With respect to using genes from one organism to confer hardiness on another, the chances of conferring the production of a deadly allergen are diminishingly slim, but you'd better believe that if such a thing was to be done, the FDA (or analogous organizations outside the US) would have warnings plastered all over the derivative organism.

The level of justification and background research showing how you're NOT going to destroy the world that is required to even get funding for this sort of thing is.... large.

Comment by virtualAdept on Ask LessWrong: Design a degree in Rationality. · 2011-05-14T02:28:39.372Z · LW · GW

Exactly that. Being able to think in explicit algorithms is extremely useful for decoding your own thoughts and being able to actually change your mind.

Comment by virtualAdept on Ask LessWrong: Design a degree in Rationality. · 2011-05-13T17:27:56.031Z · LW · GW

Since we're taking students from varied backgrounds and it's an advanced degree, I'd have a list of required topics, with students able to place out of the area of their undergraduate study (if their undergrad major covered one of the topics).

Core areas would include:

  • Probability/statistics

  • Mathematics (at least through basic calc and linear algebra)

  • Computer science (at least basic programming, algorithms, and software architecture)

  • Natural science (chemistry OR biology OR physics)

  • Research experience in a natural science or engineering lab of choice

  • Psychology (emphasis on cognitive biases and memory)

  • Anthropology

  • Philosophy (overview course on historical perspectives)

Also, added seminar courses with mini-units to tie subjects together and place them in context.

Electives would be open-ended, pending an essay to justify their selection.

Anyone have thoughts on whether a business or economics course should be included? I considered that, but I have not taken a formal course in those topics myself, and so don't have a good estimate of their actual utility.

Comment by virtualAdept on The elephant in the room, AMA · 2011-05-13T16:24:23.350Z · LW · GW

I've read your conversion story on your blog, and the answers you've posted here so far. The most salient question, to me, has become 'what led you to alter your belief about the existence of a deity,' specifically. Everything I have seen thus far has apparently relied on good feelings when you have participated in services and been around Mormons (and how nice they were/are).

I don't think you could give a less convincing account of why you should believe a god exists than that. The Mormon student I know in the lab is a kind, helpful, delightful person to be around, but so are my Catholic labmate and my atheist friends. If the general Warm Fuzzies you felt are a major part of your reasoning, how do you control for other possible sources of Warm Fuzzies?

If there are other reasons that caused you to believe in a god, those would be what I am reading this thread to hear.

And of course, if I have incorrectly understood the point of your story on your blog, please correct me.

Comment by virtualAdept on The elephant in the room, AMA · 2011-05-13T15:56:06.273Z · LW · GW

What has led you to anticipate (for brevity, some of) these things, including the benefits for you and the predicted detriments for your fraternity brothers?

Comment by virtualAdept on You'll die if you do that · 2011-05-12T17:08:16.702Z · LW · GW

Small electronic appliances often have some sort of safety warning tag that includes, in large text, "DO NOT REMOVE." I remember being a bit horrified the first time I saw my mother cut one off a power cord, and only later actually thought through the logic that the hairdryer or whatever it was would be staying in our house, and none of us were going to try to use the thing underwater or something similarly unhealthy.

Comment by virtualAdept on The elephant in the room, AMA · 2011-05-12T16:58:50.418Z · LW · GW

To what extent do you agree with the official precepts and practices of the religion - i.e., what do you actually believe? (I'm interested in both the abstract affirmation-of-faith-you-say-in-a-service beliefs and how they apply in a social and day-to-day context.)

Comment by virtualAdept on The elephant in the room, AMA · 2011-05-12T16:45:05.922Z · LW · GW

I don't think that examples of people with fundamental, irrational beliefs being good at other things are relevant - calcsam has invited questions specifically about the belief whose rationality is being examined. If he were starting a discussion about mathematics and his points were dismissed due to his Mormon affiliation, your comment would make more sense to me.

Comment by virtualAdept on "I know I'm biased, but..." · 2011-05-10T20:35:48.481Z · LW · GW

This common use of "I know I'm biased, but..." and its equivalent phrases is definitely a good thing to point out and work to avoid.

The proposed catch-and-analyze method for when you say such things yourself would also be useful from the other side of the conversation, as a more explicit exercise: Your conversational companion says 'I know I'm biased...' and that's a signal right there for you to ask 'how/why?' and get them thinking and talking about it. I actually think that done right, it could be turned from an unproductive 'please ignore my bad argument' signal into a pretty good jumping-off point for a double-teamed analysis of the issue.

In this vein, I actually find it useful to state my biases sometimes in conversation, as a sort of assisted sanity check - my friend might be able to catch connections from some of those biases to my arguments better than I can, and in stating them, I explicitly remind myself what they are and that I should be dealing with them. If, for instance, I'm biased in favor of Idea X by my Inherent Trait Y (something like being a student, white, female, etc.), Trait Y - and hence the potential for bias - isn't going to go away; therefore the most productive path is to a) acknowledge it and b) apply a correction factor to the weighting of arguments that link to that bias.

Comment by virtualAdept on Optimizing Sleep · 2011-05-10T19:47:37.176Z · LW · GW

I have the same general proclivities that you describe. I've got some flexibility in my schedule (grad school is kinda awesome), but realistically speaking it's not reasonable to go with a full schedule inversion - while sleeping during the day is not difficult for me, my lab and occasional classes make it necessary to be up in the morning sometimes.

I have tried two extremes in how I handle sleep, and liked neither of them: forcing myself onto a slightly abbreviated 'normal' schedule of 7 continuous hours of sleep, from roughly 12-1 AM to 7 or 8 AM; and burning the candle at both ends, existing on 3-4 hours a night and 'catching up' with a 12-hour binge on Saturdays.

The first, as you might expect, annoyed me because I feel like my evening's only just starting by 10 or 11 PM; the second doesn't work well because it brings down my baseline functionality during the week. I've considered trying out one of the popular polyphasic schedules, but my work is variable enough as to make that difficult or infeasible to implement.

The best solution I've found is this: Starting from the basic knowledge that human sleep 'cycles' (in which you go into and then back out of REM) are considerably shorter than the ~8-9 hours that's considered a 'full night's sleep,' I experimented with different shortened sleep amounts (or, more accurately, in undergrad when I was often getting only 3-4 hours/night anyway, I kept track of the exact times I slept and how I felt the next day). I found that, for me, I feel like absolute crap if I wake up after 3 hours, or after 4.5, but somewhere between 3.5 and 4 there's a sweet spot where I can wake up fully and have high functionality through most of the day afterward.

Since that 'high functionality' doesn't last all day, it works best if I grab a half hour (in my case) nap in the late afternoon/early evening.

Staying slightly sleep-deprived (this is fairly slight for me; I just never have slept a lot when left to my own devices) allows me to shift the schedule essentially as needed (whether for work or to account for social activities), since I can always fall asleep when I want to, and I make sure I get one or two 8-hour periods (essentially inserting one extra cycle) a week to keep the debt from climbing.

Lately I've lengthened the 'night' to more like 5.5 hours, which seems to work well since successive REM cycles tend to be shorter, but I'm still generally running on less sleep than most people. YMMV, of course; the flexibility afforded by a short 'night' has always been worth the slight energy hit for me, with the trick being to avoid losing mental clarity as well. (Hence the value of figuring out which shortened cycle works best for you.)

Comment by virtualAdept on But Butter Goes Rancid In The Freezer · 2011-05-09T21:24:27.377Z · LW · GW

Microbial activity is only responsible for some instances/types of rancidification. Oxidation and hydrolysis reactions can occur without microbes, although again the questions become how quickly these reactions would occur at cryogenic temperatures (very slowly, but we are looking at potentially very long timespans here) and whether the relevant reactive species are available.

Comment by virtualAdept on Nonmagical Powers · 2011-04-26T23:37:03.334Z · LW · GW

Like how I quoted you?

Wow. Yeah. My brain remembers looking for something like that, but I think it's only attempting to justify its embarrassment. Thanks!

Comment by virtualAdept on Nonmagical Powers · 2011-04-26T21:46:20.209Z · LW · GW

For whatever reason, I've always had a very strong memory for sounds - it's relatively common for me to mention to a friend or family member what they were doing at a particular time, based on having heard them bang about from another room. This tends to surprise them, since I was not physically there to observe. The only other person I know who does this often is, fittingly, my mother.

More humorously, my office mates and I have jokingly accused our PI of teleportation; while it's usually extremely easy to hear someone coming down the hall long before they reach our door, he always manages to appear with no warning (even when someone's anticipating his arrival). He walks very quickly and wears quiet shoes, and is apparently the only person in the department who does the latter.

Comment by virtualAdept on Nonmagical Powers · 2011-04-26T21:26:14.453Z · LW · GW

[The solution is to approach new situations critically.]

This is a skill that can be honed rather easily through reading - I became explicitly aware of doing exactly what you've described when I began having to offer up, on short notice, explanations and critiques of scholarly papers on topics I wasn't already familiar with. And it was just as surprising to my peers when I could come up with quick, cogent answers to complex questions about them on the spot.

Edit: Damnit, I fail at quote tags - is there a list somewhere of the tags the site uses?

Comment by virtualAdept on Elitist Jerks: A Well-Kept Garden · 2011-04-25T20:37:20.162Z · LW · GW

Are you posting about this here looking for input/ideas, or simply as a case study of what Eliezer described?

What kind of answers are being given to "is this the community we meant to create"?

I'm a retired (feels funny to say that in regard to anything at 23...) mod of a large-scale, cross-guild raiding community, and that kind of question comes up in relation to policy issues, but seldom in concern about a lack of liveliness on our boards. But then, our boards serve more of a social and organizational function than anything else - the players who want to read up on their classes, unsurprisingly, do that over in your garden.

WoW theorycraft is definitely not a difficult problem. Is there any talk of expanding EJ to go beyond number-crunching? Since my organization (Leftovers of Silver Hand) is such a prolific breeding ground for leadership styles to be honed and compared (thanks to our semi-independent charter group system), I've always been curious to see some kind of organized discussion of the human engineering aspect of running a raid/raiding guild.

Comment by virtualAdept on Things That Shouldn't Need Pointing Out · 2011-04-22T00:42:18.068Z · LW · GW

Fair enough - I tend to look for excuses to play with fire, so it seemed like the perfect solution to me. I think the oven probably does a better job of it, though.

Upshot of this: I now desire marshmallows.

Comment by virtualAdept on Things That Shouldn't Need Pointing Out · 2011-04-21T20:01:40.024Z · LW · GW

I'd be willing to bet that if you had at some point found yourself with an active (and at least moderately strong) desire for a toasted marshmallow, you would have sought and found a way (oven, toaster, etc.) to toast one in the kitchen. I say that mostly because once upon a time, I found myself with a bag of marshmallows and some chocolate, wanted s'mores, and decided that since "toasting" to me at that time mostly meant "torching," a candle would suffice. And it did.

Toasting a marshmallow without a campfire wasn't a difficult problem; it was just one you didn't consciously try to solve. Maybe this marshmallow incident you've related is as simple as a recommendation for us to more actively identify little questions like that in our daily lives. If those questions can be converted to some form of desire ("I want X," or "I wish that X..."), it seems that we'd be more likely to see the simple-but-not-obvious solutions.

Over the course of working in a research lab, at first part-time in undergrad and now as a full-time grad student, I've run into a lot of little mental connections like that which tend to make me want to slap myself when I hear them, so I've started making it a game to notice when something's unreasonably difficult, finicky, or irritating and try to change something about it rather than just grumbling quietly to myself.

(One good example: re-papering bench space where ethidium bromide is used. Gloves are mandatory when handling EtBr-contaminated material. Labeling tape sticks to latex gloves like CRAZY, making it difficult to tape the bench paper down. It's almost embarrassing how long it took me to think of just wetting the fingertips of my gloves a bit to keep the tape from sticking to them.)

Comment by virtualAdept on Interest in video-conference discussion about sequences and/or virtual meetups? · 2011-04-07T04:08:03.098Z · LW · GW

I'm definitely interested; similar to others, late evenings (EST) work on weekdays for me, or afternoon+ on weekends.

I'd be interested in discussing the sequences and people's day-to-day experiences with applying the more nonintuitive aspects of rationality.

Comment by virtualAdept on Science Fiction Recommendations · 2011-04-06T14:50:44.262Z · LW · GW

I'm sure these have already come up, but I'll add my voice in enthusiastically recommending the following -

  • The Diamond Age by Neal Stephenson
  • Accelerando by Charles Stross
  • City at the End of Time by Greg Bear
  • Neuromancer by William Gibson

Accelerando, in particular, will have themes and ideas rather familiar to anyone who's spent significant time on LW, although that goes for the others to a slightly lesser extent. City at the End of Time was probably the most "work" for me to read of the four - I enjoyed it greatly, but it's a book best savored slowly. Neuromancer is pretty much the grand icon of cyberpunk, and Gibson's facility with densely evocative language makes me jealous. The Diamond Age is probably the one I had the most lighthearted fun reading, and the easiest to follow, even though it's a very thought-provoking story.

Comment by virtualAdept on Designing serious games - a request for help · 2011-03-25T00:15:37.298Z · LW · GW

after we've got that working, we could then figure out how to get the user to describe the ruleset to the computer in a flexible way. That's actually a Tough Problem, BTW. It's basically forming a mini-language... so definitely on the books, but probably not the first iteration. :)

Yeah, I realized that as I was writing the longer example, and also that it wasn't strictly necessary. Interesting, but not necessary. =)

Your description of phase 1 prediction coding is very close to what I was picturing, and having a randomized set of questions rather than just saying "predict the final state" (in entirety) would give more game repeatability for less code if I understand correctly.

I actually really like the idea of having them just give a probability estimate the first time, or the first few times. I'm betting that will make for an increased effect of confirmation bias in those stages, and that their scores will improve when they're forced to itemize evidence weights - which illustrates a point about confirmation bias as well as tying into the kind of thought process needed for Bayesian prediction.

(If you were to get as far as trying to code the user-described ruleset bit... I'd suggest finding someone who's played Dragon Age and ask about the custom tactics options. I think that sort of format would work, as long as the number of total types of game objects and operators stayed relatively small.)

Comment by virtualAdept on The trouble with teamwork · 2011-03-23T20:23:23.385Z · LW · GW

First, something not-particularly-useful-now but hopefully comforting: group projects in school, even ones that mimic real world problems, very often are not comparable to Real World projects in the sense of group composition and motivation. In school, you just can't get away from the fact that your ultimate goal is a grade, which is intangible and at least partially arbitrary. Because of that fact, you will nearly always have less total group motivation and more total disagreement on how much work is required for an "acceptable result" on a project for school than you will once you are out of school. As you noted in your lifeguarding example, being paid for your work helps no small amount. I'd also rather think that the fact that someone's life rides on the group's performance (as it certainly will in situations you encounter in nursing) takes motivation to a whole new height.

I had your problem all through high school and most of my undergraduate education. I attribute its fading primarily to learning to trust my group members more, which was facilitated through

1) picking group members carefully when I got the chance, to maximize potential for # 2

2) realizing that in my field, chances were high that at least half of any group I was in would be in the same ballpark as I was where motivation was concerned.

Each time I had to work in a group, I made a conscious decision to trust my group members to do a decent job, which helped me remember not to let my control freak tendencies make me objectionable. Sometimes this involved noting members I didn't trust and resolving to watch for slack from their end, as well, but quietly.

I also became a good leader by creating and administrating a very diverse group of players in World of Warcraft for almost three years, but I really don't recommend that route unless you have either an absurd amount of free time or a very low regard for sleep.

Comment by virtualAdept on Designing serious games - a request for help · 2011-03-23T19:36:40.171Z · LW · GW

This is the simplest sort of example of what I was picturing as I wrote the suggestion - as described below, it might not be sophisticated enough to be challenging.

I also changed my mind a bit about how phase 1 should be structured, so I'll work that in.

A "scenario" is a box on the screen that is populated by colored shapes that move around like paramecia on a microscope slide, and interact with each other according to the rules for the current round of the game. The scenario ends after a short time period (20-40 seconds) and freezes in its End State. This is what the player will be trying to predict.

Phase 1: Several scenarios are presented in sequence. Each scenario consists of colored shapes interacting with one another - they might bounce off one another and change colors; they might split after a collision or spontaneously; they might eat one another, etc. The interactions show a pattern over the multiple scenarios, such that an observer will eventually start to form predictions about the end state of the system in each scenario. After the pattern has been demonstrated, the player could be asked to code a decision tree for prediction of the end state based on the pattern observed (or this step could actually be skipped, and the Phase 3 predictions just compared to the implicit ruleset used for the pattern without ever making sure the player knows it). Several more scenarios are presented where the player is asked to predict the final state (following the same ruleset as the earlier patterns).

A very simple example of such a ruleset could be as follows:

  • If there are a circle and a square of the same color, they will collide.
  • If a red circle collides with a red square, they will each split into two of themselves.
  • If a blue circle collides with a blue square, the circle will 'eat' the square.
  • If a circle and square of any other color collide, their states will not change after collision.
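To make that concrete, here's a rough sketch of how such a ruleset might be encoded - purely illustrative, with made-up names, and not a spec for the actual game:

    # A shape is just a (kind, color) pair, e.g. ("circle", "red").

    def will_collide(a, b):
        """Rule 1: a circle and a square of the same color will collide."""
        return {a[0], b[0]} == {"circle", "square"} and a[1] == b[1]

    def resolve_collision(a, b):
        """Return the shapes left over after a collision (rules 2-4)."""
        color = a[1]
        if color == "red":
            return [a, a, b, b]   # red pair: each splits into two of itself
        if color == "blue":
            circle = a if a[0] == "circle" else b
            return [circle]       # blue circle 'eats' the blue square
        return [a, b]             # any other color: no change after collision

    def end_state(shapes):
        """Apply one pass of collisions and return the frozen end state."""
        remaining = list(shapes)
        result = []
        while remaining:
            a = remaining.pop()
            partner = next((b for b in remaining if will_collide(a, b)), None)
            if partner is None:
                result.append(a)
            else:
                remaining.remove(partner)
                result.extend(resolve_collision(a, partner))
        return result

    print(end_state([("circle", "red"), ("square", "red"), ("circle", "green")]))
    # -> two red circles, two red squares, and the lone green circle unchanged

Swapping in a different resolve_collision (and adding new shapes or colors) is basically all a new round would need.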

Phase 2: A given number of scenarios are presented (including the end state). This number is available to the player (i.e., the player does not have to keep count of the total number). Some of these scenarios follow the ruleset from Phase 1 (+ evidence). Some explicitly violate these rules (with varying degrees of blatancy - using the ruleset above, one scenario might contain only one pair of shapes that failed to follow the applicable rule, while another scenario might contain five pairs that misbehaved) (- evidence). Some contain shape/color combos that simply do not include the right combinations to illustrate the rule (null evidence).

Phase 3: The player is asked to report the relative amounts of (+), (-), and (null) evidence presented in Phase 2.

There is one underlying ruleset per round of the game. Rounds can and should sometimes have rules that contradict rules from previous rounds. The rulesets increase in complexity each time a new round is begun.

Difficulty would increase with complexity of rulesets. Requiring the player to explicitly state the ruleset inferred in Phase 1 would probably make it easier. Introducing interacting symbols that have meaning beyond the bounds of the game (words or pictures) instead of the shapes would likely increase difficulty by requiring the player to ignore prior associations and biases attached to the symbols being used.

Does that make the idea a bit clearer?

Comment by virtualAdept on Designing serious games - a request for help · 2011-03-22T18:51:52.188Z · LW · GW

Here's an idea for a game to train awareness of/resistance to confirmation bias:

The game would consist of three phases, that could then be repeated for however many iterations (levels!) were desired.
1) Presenting and habituating the "theory." Basically, give a set of rules for making some kind of decision/prediction, and then have the player apply those rules to a series of scenarios that clearly illustrate the supposed predictive (or score-increasing, if you will) properties of the Theory.
2) "In the wild" - Now present a series of scenarios that each either offer evidence that the Theory from phase 1 is useful (+), evidence that the Theory is incorrect (-), or no clear evidence in either direction (null).
3) "Assessment" - Have the player estimate the relative frequencies of (+), (-), and (null) evidence given in phase 2. Player receives an iteration score based on the accuracy of these estimates, and a cumulative score over all iterations completed.

Later iterations (higher levels) could perhaps re-use multiple Theories for the same round, and then in phase 3 ask for evidence estimates for all the Theories at once, possibly even throwing in Theories for which no evidence was presented in the second phase. Higher levels of complexity bring higher stakes (larger increases for accuracy and larger decreases for inaccuracy), so a player who could continue to improve the cumulative score with increases in difficulty would be doing very well indeed.

I've spoken of the Theories and Evidence in the purely abstract here, but I'm picturing either color/shape patterns and movements, or plausibly realistic word problem-type scenarios. The former would be preferable since it would not involve importing the player's biases about situations that might be found in the real world... or actually, come to think of it, it might be interesting and/or useful to make use of realistic-seeming examples precisely for that reason. Huh.

Anyway. The scoring algorithm would reward players who most aggressively sought out (-) evidence for the active Theory or Theories.
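If it helps to picture the Phase 3 scoring, here's the rough kind of function I have in mind - just an illustration, with made-up names and constants:

    def iteration_score(true_counts, player_estimates, stakes=100):
        """Compare the player's estimated evidence frequencies against the
        actual frequencies of (+), (-), and (null) scenarios shown in Phase 2.

        Both arguments are dicts keyed by '+', '-', and 'null'; stakes scales
        the reward/penalty so higher levels can raise both.
        """
        total_true = sum(true_counts.values())
        total_est = sum(player_estimates.values()) or 1
        error = sum(abs(true_counts[k] / total_true - player_estimates[k] / total_est)
                    for k in ("+", "-", "null"))
        # error runs from 0 (perfect) to 2 (completely wrong), so this maps
        # perfect accuracy to +stakes and the worst case to -stakes.
        return stakes * (1 - error)

    # Phase 2 actually showed 3 (+), 5 (-), and 2 (null) scenarios, but a
    # confirmation-biased player 'remembers' mostly supporting evidence:
    print(iteration_score({"+": 3, "-": 5, "null": 2},
                          {"+": 6, "-": 2, "null": 2}))   # -> about 40 of a possible 100

Rewarding players for aggressively seeking out (-) evidence would then be an extra weighting term layered on top of this accuracy score.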

Comment by virtualAdept on A Rationalist's Account of Objectification? · 2011-03-22T02:18:54.688Z · LW · GW

I've read through the comments thus far, but relatively quickly, so please point out and forgive if any of this is exact rehash.


First, and directly concerning text in the post: one of the listed Ways to Objectify is denial of autonomy, and that is discussed briefly after the list. In later examples, lukeprog describes how we...

"...all use each other as means to an end, or as objects of one kind or another, all the time. And we can do so while respecting their autonomy."

The post implicitly casts denial of autonomy as the defining Bad Thing about objectification. On the surface, I'd agree that that is one of, if not the most inherently negative aspect of objectification, but I need to think about it some more.


Ultimately, I do not think objectification (action with one or more of the listed traits) is necessarily a Bad Thing; if I did, it would place me in the anti-pornography, anti-consensual-sadomasochism camp of feminism, which of course involves a desire to restrict the autonomy of adults... and while that circle isn't usually trotted out as an argument for why objectification isn't inherently bad, the symmetry is worth noting, at the least. It also lends some sense to the idea that denial of autonomy is, in fact, the major problematic factor out of those listed.

On the broad scale, I'm inclined to agree that the feminist argument against objectification is primarily utilitarian rather than categorical (and utilitarian for all the reasons that various people have already explained). The feminist utilitarian arguments (of which the rape culture argument is one) also usually depend on the unequal circumstances of women in current society. The takeaway message should then be to be aware of and understand how and to what extent you're interacting with, and yes - objectifying - people you meet. If you're a photographer who hires a model for a photoshoot, the resultant photos are going to involve several aspects of objectification, but (presumably) no harm or attack on the model. If a woman works with you in some capacity that has nothing to do with her appearance, and you treat her with any of the listed behaviors beyond instrumentality, you're committing harm.


Having said that, it should also be fairly obvious that I don't consider instrumentality a problem.

Comment by virtualAdept on Admit your ignorance · 2011-03-17T18:59:57.158Z · LW · GW

This jibes with my experience. Also, the grading I've done for various professors (and specifically the interaction that goes along with the grading) has exposed me to a lot of variations on the attitude of "officially, there are no stupid questions... but there are definitely stupid questions, and I'm tired of them." It's not ubiquitous, but it's common enough to make worrying about the prof's opinion pretty reasonable if you expect them to have any say in your future success beyond the grade you get in their class.

Comment by virtualAdept on Admit your ignorance · 2011-03-17T18:53:31.148Z · LW · GW

The problem with "does this make sense?" is that one to whom a topic/explanation makes sense cannot necessarily reproduce the principle. You're more likely to get an honest answer asking if it makes sense, but I think that's probably because "making sense" requires a less rigorous facility than "understanding."

Several of my graduate professors have a habit of pulling intuitive leaps into problems that make perfect "sense" when presented, but they aren't the sort of thing that many, if any, of the students are going to be able to make on their own due to lack of such intimate familiarity with the material. It really shows on the problem sets.

Comment by virtualAdept on NASA scientist finds evidence of alien life [link] · 2011-03-06T16:55:18.011Z · LW · GW

PZ's his own special brand of abrasive and dismissive, but I went and read most of the paper, and while he's not exactly rigorous in explaining his criticisms, I think they're well founded.

While the design of the JoC website shouldn't affect assessment of the article, the fact that a paper on such a potentially high-impact subject isn't in a mainstream journal at all does and should send up some red flags that there might be issues with the paper that would keep it from getting past peer review.

My biggest issue with the paper is that the study isn't controlled. They took appropriate steps to prevent contamination of their samples, but they don't have any reasonable negative control set up that would give them some perspective on their comparison to the living bacteria.

Their "conclusions" are suggestive, rather than conclusive - it all depends on holding up these meteorites to pictures of actual bacteria and saying "Look! They look alike! And there's some enriched carbon and stuff in these fossils!" Which could certainly be interesting, but for the paper to pass muster with mainstream science, they would need to offer a convincing test that would disprove their hypothesis were it to come out a certain way. (Hey, that sounds familiar!) As it stands, they can't. They can only say that their observations look interesting.

Given this, the whole thing reads like they went out looking for whatever evidence they could fit to their prior hope of finding extraterrestrial life, which doesn't immediately disprove their findings, but it certainly holds them back from credibility.